| commit_title (string, 13–124 chars) | commit_body (string, 0–1.9k chars) | release_summary (string, 52 classes) | changes_summary (string, 1–758 chars) | release_affected_domains (string, 33 classes) | release_affected_drivers (string, 51 classes) | domain_of_changes (string, 2–571 chars) | language_set (string, 983 classes) | diffstat_files (int64, 1–300) | diffstat_insertions (int64, 0–309k) | diffstat_deletions (int64, 0–168k) | commit_diff (string, 92–23.4M chars) | category (string, 108 classes) | commit_hash (string, 34–40 chars) | related_people (string, 0–370 chars) | domain (string, 21 classes) | subdomain (string, 241 classes) | leaf_module (string, 0–912 chars) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
crypto: hisilicon - add zip device using mode parameter
|
Add an 'uacce_mode' parameter for ZIP, which can be set to 0 (default) or 1: '0' means the ZIP device is registered to the kernel crypto subsystem only, while '1' means it is registered to both the kernel crypto subsystem and UACCE.
|
This release allows mapping a UID to a different one in a mount; it also adds support for selecting the preemption model at runtime; a low-overhead memory error detector designed to be used in production; support for the ACRN hypervisor designed for embedded systems; initial Btrfs support for zoned devices, subpage block sizes and performance improvements; support for eager NFS writes; a thermal power management framework to control the surface temperature of embedded devices in a unified way; the ability to move NAPI polling to a kernel thread; and support for non-blocking path lookups. As always, there are many other features, new drivers, improvements and fixes.
|
add zip device using mode parameter
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['hisilicon']
|
['h', 'c']
| 3
| 42
| 1
|
---
diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
--- a/drivers/crypto/hisilicon/qm.c
+++ b/drivers/crypto/hisilicon/qm.c
-	if (uacce->flags & UACCE_DEV_SVA) {
+	if (uacce->flags & UACCE_DEV_SVA && qm->mode == UACCE_MODE_SVA) {
diff --git a/drivers/crypto/hisilicon/qm.h b/drivers/crypto/hisilicon/qm.h
--- a/drivers/crypto/hisilicon/qm.h
+++ b/drivers/crypto/hisilicon/qm.h
+/* uacce mode of the driver */
+#define UACCE_MODE_NOUACCE	0 /* don't use uacce */
+#define UACCE_MODE_SVA		1 /* use uacce sva mode */
+#define UACCE_MODE_DESC	"0(default) means only register to crypto, 1 means both register to crypto and uacce"
+
+	int mode;
+static inline int mode_set(const char *val, const struct kernel_param *kp)
+{
+	u32 n;
+	int ret;
+
+	if (!val)
+		return -EINVAL;
+
+	ret = kstrtou32(val, 10, &n);
+	if (ret != 0 || (n != UACCE_MODE_SVA &&
+			 n != UACCE_MODE_NOUACCE))
+		return -EINVAL;
+
+	return param_set_int(val, kp);
+}
+
+static inline int uacce_mode_set(const char *val, const struct kernel_param *kp)
+{
+	return mode_set(val, kp);
+}
+
diff --git a/drivers/crypto/hisilicon/zip/zip_main.c b/drivers/crypto/hisilicon/zip/zip_main.c
--- a/drivers/crypto/hisilicon/zip/zip_main.c
+++ b/drivers/crypto/hisilicon/zip/zip_main.c
+static const struct kernel_param_ops zip_uacce_mode_ops = {
+	.set = uacce_mode_set,
+	.get = param_get_int,
+};
+
+/*
+ * uacce_mode = 0 means zip only register to crypto,
+ * uacce_mode = 1 means zip both register to crypto and uacce.
+ */
+static u32 uacce_mode = UACCE_MODE_NOUACCE;
+module_param_cb(uacce_mode, &zip_uacce_mode_ops, &uacce_mode, 0444);
+MODULE_PARM_DESC(uacce_mode, UACCE_MODE_DESC);
+
+	qm->mode = uacce_mode;
|
Cryptography hardware acceleration
|
f8408d2b79b834f79b6c578817e84f74a85d2190
|
Kai Ye, Zhou Wang <wangzhou@hisilicon.com>, Zaibo Xu <xuzaibo@huawei.com>
|
drivers
|
crypto
|
hisilicon, zip
|
crypto: hisilicon/hpre - enable elliptic curve cryptography
|
Enable the X25519/X448/ECDH/ECDSA/SM2 algorithms on Kunpeng 930.
|
This release allows mapping a UID to a different one in a mount; it also adds support for selecting the preemption model at runtime; a low-overhead memory error detector designed to be used in production; support for the ACRN hypervisor designed for embedded systems; initial Btrfs support for zoned devices, subpage block sizes and performance improvements; support for eager NFS writes; a thermal power management framework to control the surface temperature of embedded devices in a unified way; the ability to move NAPI polling to a kernel thread; and support for non-blocking path lookups. As always, there are many other features, new drivers, improvements and fixes.
|
enable elliptic curve cryptography
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['hisilicon/hpre']
|
['c']
| 1
| 8
| 1
|
---
diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c
--- a/drivers/crypto/hisilicon/hpre/hpre_main.c
+++ b/drivers/crypto/hisilicon/hpre/hpre_main.c
+#define HPRE_RSA_ENB			BIT(0)
+#define HPRE_ECC_ENB			BIT(1)
-	writel(0x1, HPRE_ADDR(qm, HPRE_TYPES_ENB));
+	if (qm->ver >= QM_HW_V3)
+		writel(HPRE_RSA_ENB | HPRE_ECC_ENB,
+		       HPRE_ADDR(qm, HPRE_TYPES_ENB));
+	else
+		writel(HPRE_RSA_ENB, HPRE_ADDR(qm, HPRE_TYPES_ENB));
+
|
Cryptography hardware acceleration
|
fbc75d03fda048bc821cb27f724ff367d5591ce8
|
Hui Tang
|
drivers
|
crypto
|
hisilicon, hpre
|
dt-bindings: crypto: add keem bay ocs hcu bindings
|
Add device tree bindings for the Intel Keem Bay Offload Crypto Subsystem (OCS) Hashing Control Unit (HCU) crypto driver.
|
This release allows mapping a UID to a different one in a mount; it also adds support for selecting the preemption model at runtime; a low-overhead memory error detector designed to be used in production; support for the ACRN hypervisor designed for embedded systems; initial Btrfs support for zoned devices, subpage block sizes and performance improvements; support for eager NFS writes; a thermal power management framework to control the surface temperature of embedded devices in a unified way; the ability to move NAPI polling to a kernel thread; and support for non-blocking path lookups. As always, there are many other features, new drivers, improvements and fixes.
|
add keem bay ocs hcu driver
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['yaml']
| 1
| 46
| 0
|
---
diff --git a/Documentation/devicetree/bindings/crypto/intel,keembay-ocs-hcu.yaml b/Documentation/devicetree/bindings/crypto/intel,keembay-ocs-hcu.yaml
--- /dev/null
+++ b/Documentation/devicetree/bindings/crypto/intel,keembay-ocs-hcu.yaml
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/crypto/intel,keembay-ocs-hcu.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Intel Keem Bay OCS HCU Device Tree Bindings
+
+maintainers:
+  - Declan Murphy <declan.murphy@intel.com>
+  - Daniele Alessandrelli <daniele.alessandrelli@intel.com>
+
+description:
+  The Intel Keem Bay Offload and Crypto Subsystem (OCS) Hash Control Unit (HCU)
+  provides hardware-accelerated hashing and HMAC.
+
+properties:
+  compatible:
+    const: intel,keembay-ocs-hcu
+
+  reg:
+    maxItems: 1
+
+  interrupts:
+    maxItems: 1
+
+  clocks:
+    maxItems: 1
+
+required:
+  - compatible
+  - reg
+  - interrupts
+  - clocks
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+    crypto@3000b000 {
+      compatible = "intel,keembay-ocs-hcu";
+      reg = <0x3000b000 0x1000>;
+      interrupts = <GIC_SPI 121 IRQ_TYPE_LEVEL_HIGH>;
+      clocks = <&scmi_clk 94>;
+    };
|
Cryptography hardware acceleration
|
33ff64884c4e5ffcac1c4aa767e38bf4b3f443a0
|
Declan Murphy, Mark Gross <mgross@linux.intel.com>, Rob Herring <robh@kernel.org>
|
documentation
|
devicetree
|
bindings, crypto
|
crypto: keembay - add keem bay ocs hcu driver
|
Add support for the Hashing Control Unit (HCU) included in the Offload Crypto Subsystem (OCS) of the Intel Keem Bay SoC, thus enabling hardware-accelerated hashing on the Keem Bay SoC for the following algorithms: SHA-256, SHA-384, SHA-512 and SM3.
|
This release allows mapping a UID to a different one in a mount; it also adds support for selecting the preemption model at runtime; a low-overhead memory error detector designed to be used in production; support for the ACRN hypervisor designed for embedded systems; initial Btrfs support for zoned devices, subpage block sizes and performance improvements; support for eager NFS writes; a thermal power management framework to control the surface temperature of embedded devices in a unified way; the ability to move NAPI polling to a kernel thread; and support for non-blocking path lookups. As always, there are many other features, new drivers, improvements and fixes.
|
add keem bay ocs hcu driver
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['h', 'kconfig', 'c', 'makefile']
| 5
| 1,632
| 0
|
- sha256 - sha384 - sha512 - sm3 - 'ocs-hcu.c' which interacts with the hardware and abstracts it by - 'keembay-ocs-hcu-core.c' which exports the functionality provided by --- diff --git a/drivers/crypto/keembay/kconfig b/drivers/crypto/keembay/kconfig --- a/drivers/crypto/keembay/kconfig +++ b/drivers/crypto/keembay/kconfig + +config crypto_dev_keembay_ocs_hcu + tristate "support for intel keem bay ocs hcu hw acceleration" + select crypto_hash + select crypto_engine + depends on of || compile_test + help + support for intel keem bay offload and crypto subsystem (ocs) hash + control unit (hcu) hardware acceleration for use with crypto api. + + provides ocs hcu hardware acceleration of sha256, sha384, sha512, and + sm3. + + say y or m if you're building for the intel keem bay soc. if compiled + as a module, the module will be called keembay-ocs-hcu. + + if unsure, say n. diff --git a/drivers/crypto/keembay/makefile b/drivers/crypto/keembay/makefile --- a/drivers/crypto/keembay/makefile +++ b/drivers/crypto/keembay/makefile + +obj-$(config_crypto_dev_keembay_ocs_hcu) += keembay-ocs-hcu.o +keembay-ocs-hcu-objs := keembay-ocs-hcu-core.o ocs-hcu.o diff --git a/drivers/crypto/keembay/keembay-ocs-hcu-core.c b/drivers/crypto/keembay/keembay-ocs-hcu-core.c --- /dev/null +++ b/drivers/crypto/keembay/keembay-ocs-hcu-core.c +// spdx-license-identifier: gpl-2.0-only +/* + * intel keem bay ocs hcu crypto driver. + * + * copyright (c) 2018-2020 intel corporation + */ + +#include <linux/completion.h> +#include <linux/delay.h> +#include <linux/dma-mapping.h> +#include <linux/interrupt.h> +#include <linux/module.h> +#include <linux/of_device.h> + +#include <crypto/engine.h> +#include <crypto/scatterwalk.h> +#include <crypto/sha2.h> +#include <crypto/sm3.h> +#include <crypto/internal/hash.h> + +#include "ocs-hcu.h" + +#define drv_name "keembay-ocs-hcu" + +/* flag marking a final request. */ +#define req_final bit(0) + +/** + * struct ocs_hcu_ctx: ocs hcu transform context. 
+ * @engine_ctx: crypto engine context. + * @hcu_dev: the ocs hcu device used by the transformation. + * @is_sm3_tfm: whether or not this is an sm3 transformation. + */ +struct ocs_hcu_ctx { + struct crypto_engine_ctx engine_ctx; + struct ocs_hcu_dev *hcu_dev; + bool is_sm3_tfm; +}; + +/** + * struct ocs_hcu_rctx - context for the request. + * @hcu_dev: ocs hcu device to be used to service the request. + * @flags: flags tracking request status. + * @algo: algorithm to use for the request. + * @blk_sz: block size of the transformation / request. + * @dig_sz: digest size of the transformation / request. + * @dma_list: ocs dma linked list. + * @hash_ctx: ocs hcu hashing context. + * @buffer: buffer to store partial block of data. + * @buf_cnt: number of bytes currently stored in the buffer. + * @buf_dma_addr: the dma address of @buffer (when mapped). + * @buf_dma_count: the number of bytes in @buffer currently dma-mapped. + * @sg: head of the scatterlist entries containing data. + * @sg_data_total: total data in the sg list at any time. + * @sg_data_offset: offset into the data of the current individual sg node. + * @sg_dma_nents: number of sg entries mapped in dma_list. + */ +struct ocs_hcu_rctx { + struct ocs_hcu_dev *hcu_dev; + u32 flags; + enum ocs_hcu_algo algo; + size_t blk_sz; + size_t dig_sz; + struct ocs_hcu_dma_list *dma_list; + struct ocs_hcu_hash_ctx hash_ctx; + u8 buffer[sha512_block_size]; + size_t buf_cnt; + dma_addr_t buf_dma_addr; + size_t buf_dma_count; + struct scatterlist *sg; + unsigned int sg_data_total; + unsigned int sg_data_offset; + unsigned int sg_dma_nents; +}; + +/** + * struct ocs_hcu_drv - driver data + * @dev_list: the list of hcu devices. + * @lock: the lock protecting dev_list. + */ +struct ocs_hcu_drv { + struct list_head dev_list; + spinlock_t lock; /* protects dev_list. 
*/ +}; + +static struct ocs_hcu_drv ocs_hcu = { + .dev_list = list_head_init(ocs_hcu.dev_list), + .lock = __spin_lock_unlocked(ocs_hcu.lock), +}; + +/* + * return the total amount of data in the request; that is: the data in the + * request buffer + the data in the sg list. + */ +static inline unsigned int kmb_get_total_data(struct ocs_hcu_rctx *rctx) +{ + return rctx->sg_data_total + rctx->buf_cnt; +} + +/* move remaining content of scatter-gather list to context buffer. */ +static int flush_sg_to_ocs_buffer(struct ocs_hcu_rctx *rctx) +{ + size_t count; + + if (rctx->sg_data_total > (sizeof(rctx->buffer) - rctx->buf_cnt)) { + warn(1, "%s: sg data does not fit in buffer ", __func__); + return -einval; + } + + while (rctx->sg_data_total) { + if (!rctx->sg) { + warn(1, "%s: unexpected null sg ", __func__); + return -einval; + } + /* + * if current sg has been fully processed, skip to the next + * one. + */ + if (rctx->sg_data_offset == rctx->sg->length) { + rctx->sg = sg_next(rctx->sg); + rctx->sg_data_offset = 0; + continue; + } + /* + * determine the maximum data available to copy from the node. + * minimum of the length left in the sg node, or the total data + * in the request. + */ + count = min(rctx->sg->length - rctx->sg_data_offset, + rctx->sg_data_total); + /* copy from scatter-list entry to context buffer. */ + scatterwalk_map_and_copy(&rctx->buffer[rctx->buf_cnt], + rctx->sg, rctx->sg_data_offset, + count, 0); + + rctx->sg_data_offset += count; + rctx->sg_data_total -= count; + rctx->buf_cnt += count; + } + + return 0; +} + +static struct ocs_hcu_dev *kmb_ocs_hcu_find_dev(struct ahash_request *req) +{ + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct ocs_hcu_ctx *tctx = crypto_ahash_ctx(tfm); + + /* if the hcu device for the request was previously set, return it. */ + if (tctx->hcu_dev) + return tctx->hcu_dev; + + /* + * otherwise, get the first hcu device available (there should be one + * and only one device). 
+ */ + spin_lock_bh(&ocs_hcu.lock); + tctx->hcu_dev = list_first_entry_or_null(&ocs_hcu.dev_list, + struct ocs_hcu_dev, + list); + spin_unlock_bh(&ocs_hcu.lock); + + return tctx->hcu_dev; +} + +/* free ocs dma linked list and dma-able context buffer. */ +static void kmb_ocs_hcu_dma_cleanup(struct ahash_request *req, + struct ocs_hcu_rctx *rctx) +{ + struct ocs_hcu_dev *hcu_dev = rctx->hcu_dev; + struct device *dev = hcu_dev->dev; + + /* unmap rctx->buffer (if mapped). */ + if (rctx->buf_dma_count) { + dma_unmap_single(dev, rctx->buf_dma_addr, rctx->buf_dma_count, + dma_to_device); + rctx->buf_dma_count = 0; + } + + /* unmap req->src (if mapped). */ + if (rctx->sg_dma_nents) { + dma_unmap_sg(dev, req->src, rctx->sg_dma_nents, dma_to_device); + rctx->sg_dma_nents = 0; + } + + /* free dma_list (if allocated). */ + if (rctx->dma_list) { + ocs_hcu_dma_list_free(hcu_dev, rctx->dma_list); + rctx->dma_list = null; + } +} + +/* + * prepare for dma operation: + * - dma-map request context buffer (if needed) + * - dma-map sg list (only the entries to be processed, see note below) + * - allocate ocs hcu dma linked list (number of elements = sg entries to + * process + context buffer (if not empty)). + * - add dma-mapped request context buffer to ocs hcu dma list. + * - add sg entries to dma list. + * + * note: if this is a final request, we process all the data in the sg list, + * otherwise we can only process up to the maximum amount of block-aligned data + * (the remainder will be put into the context buffer and processed in the next + * request). + */ +static int kmb_ocs_dma_prepare(struct ahash_request *req) +{ + struct ocs_hcu_rctx *rctx = ahash_request_ctx(req); + struct device *dev = rctx->hcu_dev->dev; + unsigned int remainder = 0; + unsigned int total; + size_t nents; + size_t count; + int rc; + int i; + + /* this function should be called only when there is data to process. 
*/ + total = kmb_get_total_data(rctx); + if (!total) + return -einval; + + /* + * if this is not a final dma (terminated dma), the data passed to the + * hcu must be aligned to the block size; compute the remainder data to + * be processed in the next request. + */ + if (!(rctx->flags & req_final)) + remainder = total % rctx->blk_sz; + + /* determine the number of scatter gather list entries to process. */ + nents = sg_nents_for_len(req->src, rctx->sg_data_total - remainder); + + /* if there are entries to process, map them. */ + if (nents) { + rctx->sg_dma_nents = dma_map_sg(dev, req->src, nents, + dma_to_device); + if (!rctx->sg_dma_nents) { + dev_err(dev, "failed to map sg "); + rc = -enomem; + goto cleanup; + } + /* + * the value returned by dma_map_sg() can be < nents; so update + * nents accordingly. + */ + nents = rctx->sg_dma_nents; + } + + /* + * if context buffer is not empty, map it and add extra dma entry for + * it. + */ + if (rctx->buf_cnt) { + rctx->buf_dma_addr = dma_map_single(dev, rctx->buffer, + rctx->buf_cnt, + dma_to_device); + if (dma_mapping_error(dev, rctx->buf_dma_addr)) { + dev_err(dev, "failed to map request context buffer "); + rc = -enomem; + goto cleanup; + } + rctx->buf_dma_count = rctx->buf_cnt; + /* increase number of dma entries. */ + nents++; + } + + /* allocate ocs hcu dma list. */ + rctx->dma_list = ocs_hcu_dma_list_alloc(rctx->hcu_dev, nents); + if (!rctx->dma_list) { + rc = -enomem; + goto cleanup; + } + + /* add request context buffer (if previously dma-mapped) */ + if (rctx->buf_dma_count) { + rc = ocs_hcu_dma_list_add_tail(rctx->hcu_dev, rctx->dma_list, + rctx->buf_dma_addr, + rctx->buf_dma_count); + if (rc) + goto cleanup; + } + + /* add the sg nodes to be processed to the dma linked list. */ + for_each_sg(req->src, rctx->sg, rctx->sg_dma_nents, i) { + /* + * the number of bytes to add to the list entry is the minimum + * between: + * - the dma length of the sg entry. + * - the data left to be processed. 
+ */ + count = min(rctx->sg_data_total - remainder, + sg_dma_len(rctx->sg) - rctx->sg_data_offset); + /* + * do not create a zero length dma descriptor. check in case of + * zero length sg node. + */ + if (count == 0) + continue; + /* add sg to hcu dma list. */ + rc = ocs_hcu_dma_list_add_tail(rctx->hcu_dev, + rctx->dma_list, + rctx->sg->dma_address, + count); + if (rc) + goto cleanup; + + /* update amount of data remaining in sg list. */ + rctx->sg_data_total -= count; + + /* + * if remaining data is equal to remainder (note: 'less than' + * case should never happen in practice), we are done: update + * offset and exit the loop. + */ + if (rctx->sg_data_total <= remainder) { + warn_on(rctx->sg_data_total < remainder); + rctx->sg_data_offset += count; + break; + } + + /* + * if we get here is because we need to process the next sg in + * the list; set offset within the sg to 0. + */ + rctx->sg_data_offset = 0; + } + + return 0; +cleanup: + dev_err(dev, "failed to prepare dma. "); + kmb_ocs_hcu_dma_cleanup(req, rctx); + + return rc; +} + +static void kmb_ocs_hcu_secure_cleanup(struct ahash_request *req) +{ + struct ocs_hcu_rctx *rctx = ahash_request_ctx(req); + + /* clear buffer of any data. */ + memzero_explicit(rctx->buffer, sizeof(rctx->buffer)); +} + +static int kmb_ocs_hcu_handle_queue(struct ahash_request *req) +{ + struct ocs_hcu_dev *hcu_dev = kmb_ocs_hcu_find_dev(req); + + if (!hcu_dev) + return -enoent; + + return crypto_transfer_hash_request_to_engine(hcu_dev->engine, req); +} + +static int kmb_ocs_hcu_do_one_request(struct crypto_engine *engine, void *areq) +{ + struct ahash_request *req = container_of(areq, struct ahash_request, + base); + struct ocs_hcu_dev *hcu_dev = kmb_ocs_hcu_find_dev(req); + struct ocs_hcu_rctx *rctx = ahash_request_ctx(req); + int rc; + + if (!hcu_dev) { + rc = -enoent; + goto error; + } + + /* handle update request case. */ + if (!(rctx->flags & req_final)) { + /* update should always have input data. 
*/ + if (!kmb_get_total_data(rctx)) + return -einval; + + /* map input data into the hcu dma linked list. */ + rc = kmb_ocs_dma_prepare(req); + if (rc) + goto error; + + /* do hashing step. */ + rc = ocs_hcu_hash_update(hcu_dev, &rctx->hash_ctx, + rctx->dma_list); + + /* unmap data and free dma list regardless of return code. */ + kmb_ocs_hcu_dma_cleanup(req, rctx); + + /* process previous return code. */ + if (rc) + goto error; + + /* + * reset request buffer count (data in the buffer was just + * processed). + */ + rctx->buf_cnt = 0; + /* + * move remaining sg data into the request buffer, so that it + * will be processed during the next request. + * + * note: we have remaining data if kmb_get_total_data() was not + * a multiple of block size. + */ + rc = flush_sg_to_ocs_buffer(rctx); + if (rc) + goto error; + + goto done; + } + + /* if we get here, this is a final request. */ + + /* if there is data to process, use finup. */ + if (kmb_get_total_data(rctx)) { + /* map input data into the hcu dma linked list. */ + rc = kmb_ocs_dma_prepare(req); + if (rc) + goto error; + + /* do hashing step. */ + rc = ocs_hcu_hash_finup(hcu_dev, &rctx->hash_ctx, + rctx->dma_list, + req->result, rctx->dig_sz); + /* free dma list regardless of return code. */ + kmb_ocs_hcu_dma_cleanup(req, rctx); + + /* process previous return code. */ + if (rc) + goto error; + + } else { /* otherwise (if we have no data), use final. */ + rc = ocs_hcu_hash_final(hcu_dev, &rctx->hash_ctx, req->result, + rctx->dig_sz); + if (rc) + goto error; + } + + /* perform secure clean-up. 
*/ + kmb_ocs_hcu_secure_cleanup(req); +done: + crypto_finalize_hash_request(hcu_dev->engine, req, 0); + + return 0; + +error: + kmb_ocs_hcu_secure_cleanup(req); + return rc; +} + +static int kmb_ocs_hcu_init(struct ahash_request *req) +{ + struct ocs_hcu_dev *hcu_dev = kmb_ocs_hcu_find_dev(req); + struct ocs_hcu_rctx *rctx = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct ocs_hcu_ctx *ctx = crypto_ahash_ctx(tfm); + + if (!hcu_dev) + return -enoent; + + /* initialize entire request context to zero. */ + memset(rctx, 0, sizeof(*rctx)); + + rctx->hcu_dev = hcu_dev; + rctx->dig_sz = crypto_ahash_digestsize(tfm); + + switch (rctx->dig_sz) { + case sha256_digest_size: + rctx->blk_sz = sha256_block_size; + /* + * sha256 and sm3 have the same digest size: use info from tfm + * context to find out which one we should use. + */ + rctx->algo = ctx->is_sm3_tfm ? ocs_hcu_algo_sm3 : + ocs_hcu_algo_sha256; + break; + case sha384_digest_size: + rctx->blk_sz = sha384_block_size; + rctx->algo = ocs_hcu_algo_sha384; + break; + case sha512_digest_size: + rctx->blk_sz = sha512_block_size; + rctx->algo = ocs_hcu_algo_sha512; + break; + default: + return -einval; + } + + /* initialize intermediate data. */ + ocs_hcu_hash_init(&rctx->hash_ctx, rctx->algo); + + return 0; +} + +static int kmb_ocs_hcu_update(struct ahash_request *req) +{ + struct ocs_hcu_rctx *rctx = ahash_request_ctx(req); + + if (!req->nbytes) + return 0; + + rctx->sg_data_total = req->nbytes; + rctx->sg_data_offset = 0; + rctx->sg = req->src; + + /* + * if remaining sg_data fits into ctx buffer, just copy it there; we'll + * process it at the next update() or final(). 
+ */ + if (rctx->sg_data_total <= (sizeof(rctx->buffer) - rctx->buf_cnt)) + return flush_sg_to_ocs_buffer(rctx); + + return kmb_ocs_hcu_handle_queue(req); +} + +static int kmb_ocs_hcu_final(struct ahash_request *req) +{ + struct ocs_hcu_rctx *rctx = ahash_request_ctx(req); + + rctx->sg_data_total = 0; + rctx->sg_data_offset = 0; + rctx->sg = null; + + rctx->flags |= req_final; + + return kmb_ocs_hcu_handle_queue(req); +} + +static int kmb_ocs_hcu_finup(struct ahash_request *req) +{ + struct ocs_hcu_rctx *rctx = ahash_request_ctx(req); + + rctx->sg_data_total = req->nbytes; + rctx->sg_data_offset = 0; + rctx->sg = req->src; + + rctx->flags |= req_final; + + return kmb_ocs_hcu_handle_queue(req); +} + +static int kmb_ocs_hcu_digest(struct ahash_request *req) +{ + int rc = 0; + struct ocs_hcu_dev *hcu_dev = kmb_ocs_hcu_find_dev(req); + + if (!hcu_dev) + return -enoent; + + rc = kmb_ocs_hcu_init(req); + if (rc) + return rc; + + rc = kmb_ocs_hcu_finup(req); + + return rc; +} + +static int kmb_ocs_hcu_export(struct ahash_request *req, void *out) +{ + struct ocs_hcu_rctx *rctx = ahash_request_ctx(req); + + /* intermediate data is always stored and applied per request. */ + memcpy(out, rctx, sizeof(*rctx)); + + return 0; +} + +static int kmb_ocs_hcu_import(struct ahash_request *req, const void *in) +{ + struct ocs_hcu_rctx *rctx = ahash_request_ctx(req); + + /* intermediate data is always stored and applied per request. */ + memcpy(rctx, in, sizeof(*rctx)); + + return 0; +} + +/* set request size and initialize tfm context. */ +static void __cra_init(struct crypto_tfm *tfm, struct ocs_hcu_ctx *ctx) +{ + crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm), + sizeof(struct ocs_hcu_rctx)); + + /* init context to 0. */ + memzero_explicit(ctx, sizeof(*ctx)); + /* set engine ops. 
*/ + ctx->engine_ctx.op.do_one_request = kmb_ocs_hcu_do_one_request; +} + +static int kmb_ocs_hcu_sha_cra_init(struct crypto_tfm *tfm) +{ + struct ocs_hcu_ctx *ctx = crypto_tfm_ctx(tfm); + + __cra_init(tfm, ctx); + + return 0; +} + +static int kmb_ocs_hcu_sm3_cra_init(struct crypto_tfm *tfm) +{ + struct ocs_hcu_ctx *ctx = crypto_tfm_ctx(tfm); + + __cra_init(tfm, ctx); + + ctx->is_sm3_tfm = true; + + return 0; +} + +static struct ahash_alg ocs_hcu_algs[] = { +{ + .init = kmb_ocs_hcu_init, + .update = kmb_ocs_hcu_update, + .final = kmb_ocs_hcu_final, + .finup = kmb_ocs_hcu_finup, + .digest = kmb_ocs_hcu_digest, + .export = kmb_ocs_hcu_export, + .import = kmb_ocs_hcu_import, + .halg = { + .digestsize = sha256_digest_size, + .statesize = sizeof(struct ocs_hcu_rctx), + .base = { + .cra_name = "sha256", + .cra_driver_name = "sha256-keembay-ocs", + .cra_priority = 255, + .cra_flags = crypto_alg_async, + .cra_blocksize = sha256_block_size, + .cra_ctxsize = sizeof(struct ocs_hcu_ctx), + .cra_alignmask = 0, + .cra_module = this_module, + .cra_init = kmb_ocs_hcu_sha_cra_init, + } + } +}, +{ + .init = kmb_ocs_hcu_init, + .update = kmb_ocs_hcu_update, + .final = kmb_ocs_hcu_final, + .finup = kmb_ocs_hcu_finup, + .digest = kmb_ocs_hcu_digest, + .export = kmb_ocs_hcu_export, + .import = kmb_ocs_hcu_import, + .halg = { + .digestsize = sm3_digest_size, + .statesize = sizeof(struct ocs_hcu_rctx), + .base = { + .cra_name = "sm3", + .cra_driver_name = "sm3-keembay-ocs", + .cra_priority = 255, + .cra_flags = crypto_alg_async, + .cra_blocksize = sm3_block_size, + .cra_ctxsize = sizeof(struct ocs_hcu_ctx), + .cra_alignmask = 0, + .cra_module = this_module, + .cra_init = kmb_ocs_hcu_sm3_cra_init, + } + } +}, +{ + .init = kmb_ocs_hcu_init, + .update = kmb_ocs_hcu_update, + .final = kmb_ocs_hcu_final, + .finup = kmb_ocs_hcu_finup, + .digest = kmb_ocs_hcu_digest, + .export = kmb_ocs_hcu_export, + .import = kmb_ocs_hcu_import, + .halg = { + .digestsize = sha384_digest_size, + .statesize = 
sizeof(struct ocs_hcu_rctx), + .base = { + .cra_name = "sha384", + .cra_driver_name = "sha384-keembay-ocs", + .cra_priority = 255, + .cra_flags = crypto_alg_async, + .cra_blocksize = sha384_block_size, + .cra_ctxsize = sizeof(struct ocs_hcu_ctx), + .cra_alignmask = 0, + .cra_module = this_module, + .cra_init = kmb_ocs_hcu_sha_cra_init, + } + } +}, +{ + .init = kmb_ocs_hcu_init, + .update = kmb_ocs_hcu_update, + .final = kmb_ocs_hcu_final, + .finup = kmb_ocs_hcu_finup, + .digest = kmb_ocs_hcu_digest, + .export = kmb_ocs_hcu_export, + .import = kmb_ocs_hcu_import, + .halg = { + .digestsize = sha512_digest_size, + .statesize = sizeof(struct ocs_hcu_rctx), + .base = { + .cra_name = "sha512", + .cra_driver_name = "sha512-keembay-ocs", + .cra_priority = 255, + .cra_flags = crypto_alg_async, + .cra_blocksize = sha512_block_size, + .cra_ctxsize = sizeof(struct ocs_hcu_ctx), + .cra_alignmask = 0, + .cra_module = this_module, + .cra_init = kmb_ocs_hcu_sha_cra_init, + } + } +}, +}; + +/* device tree driver match. 
*/ +static const struct of_device_id kmb_ocs_hcu_of_match[] = { + { + .compatible = "intel,keembay-ocs-hcu", + }, + {} +}; + +static int kmb_ocs_hcu_remove(struct platform_device *pdev) +{ + struct ocs_hcu_dev *hcu_dev; + int rc; + + hcu_dev = platform_get_drvdata(pdev); + if (!hcu_dev) + return -enodev; + + crypto_unregister_ahashes(ocs_hcu_algs, array_size(ocs_hcu_algs)); + + rc = crypto_engine_exit(hcu_dev->engine); + + spin_lock_bh(&ocs_hcu.lock); + list_del(&hcu_dev->list); + spin_unlock_bh(&ocs_hcu.lock); + + return rc; +} + +static int kmb_ocs_hcu_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct ocs_hcu_dev *hcu_dev; + struct resource *hcu_mem; + int rc; + + hcu_dev = devm_kzalloc(dev, sizeof(*hcu_dev), gfp_kernel); + if (!hcu_dev) + return -enomem; + + hcu_dev->dev = dev; + + platform_set_drvdata(pdev, hcu_dev); + rc = dma_set_mask_and_coherent(&pdev->dev, ocs_hcu_dma_bit_mask); + if (rc) + return rc; + + /* get the memory address and remap. */ + hcu_mem = platform_get_resource(pdev, ioresource_mem, 0); + if (!hcu_mem) { + dev_err(dev, "could not retrieve io mem resource. "); + return -enodev; + } + + hcu_dev->io_base = devm_ioremap_resource(dev, hcu_mem); + if (is_err(hcu_dev->io_base)) { + dev_err(dev, "could not io-remap mem resource. "); + return ptr_err(hcu_dev->io_base); + } + + init_completion(&hcu_dev->irq_done); + + /* get and request irq. */ + hcu_dev->irq = platform_get_irq(pdev, 0); + if (hcu_dev->irq < 0) + return hcu_dev->irq; + + rc = devm_request_threaded_irq(&pdev->dev, hcu_dev->irq, + ocs_hcu_irq_handler, null, 0, + "keembay-ocs-hcu", hcu_dev); + if (rc < 0) { + dev_err(dev, "could not request irq. 
"); + return rc; + } + + init_list_head(&hcu_dev->list); + + spin_lock_bh(&ocs_hcu.lock); + list_add_tail(&hcu_dev->list, &ocs_hcu.dev_list); + spin_unlock_bh(&ocs_hcu.lock); + + /* initialize crypto engine */ + hcu_dev->engine = crypto_engine_alloc_init(dev, 1); + if (!hcu_dev->engine) + goto list_del; + + rc = crypto_engine_start(hcu_dev->engine); + if (rc) { + dev_err(dev, "could not start engine. "); + goto cleanup; + } + + /* security infrastructure guarantees ocs clock is enabled. */ + + rc = crypto_register_ahashes(ocs_hcu_algs, array_size(ocs_hcu_algs)); + if (rc) { + dev_err(dev, "could not register algorithms. "); + goto cleanup; + } + + return 0; + +cleanup: + crypto_engine_exit(hcu_dev->engine); +list_del: + spin_lock_bh(&ocs_hcu.lock); + list_del(&hcu_dev->list); + spin_unlock_bh(&ocs_hcu.lock); + + return rc; +} + +/* the ocs driver is a platform device. */ +static struct platform_driver kmb_ocs_hcu_driver = { + .probe = kmb_ocs_hcu_probe, + .remove = kmb_ocs_hcu_remove, + .driver = { + .name = drv_name, + .of_match_table = kmb_ocs_hcu_of_match, + }, +}; + +module_platform_driver(kmb_ocs_hcu_driver); + +module_license("gpl"); diff --git a/drivers/crypto/keembay/ocs-hcu.c b/drivers/crypto/keembay/ocs-hcu.c --- /dev/null +++ b/drivers/crypto/keembay/ocs-hcu.c +// spdx-license-identifier: gpl-2.0-only +/* + * intel keem bay ocs hcu crypto driver. + * + * copyright (c) 2018-2020 intel corporation + */ + +#include <linux/delay.h> +#include <linux/device.h> +#include <linux/iopoll.h> +#include <linux/irq.h> +#include <linux/module.h> + +#include <crypto/sha2.h> + +#include "ocs-hcu.h" + +/* registers. 
*/ +#define ocs_hcu_mode 0x00 +#define ocs_hcu_chain 0x04 +#define ocs_hcu_operation 0x08 +#define ocs_hcu_key_0 0x0c +#define ocs_hcu_isr 0x50 +#define ocs_hcu_ier 0x54 +#define ocs_hcu_status 0x58 +#define ocs_hcu_msg_len_lo 0x60 +#define ocs_hcu_msg_len_hi 0x64 +#define ocs_hcu_key_byte_order_cfg 0x80 +#define ocs_hcu_dma_src_addr 0x400 +#define ocs_hcu_dma_src_size 0x408 +#define ocs_hcu_dma_dst_size 0x40c +#define ocs_hcu_dma_dma_mode 0x410 +#define ocs_hcu_dma_next_src_descr 0x418 +#define ocs_hcu_dma_msi_isr 0x480 +#define ocs_hcu_dma_msi_ier 0x484 +#define ocs_hcu_dma_msi_mask 0x488 + +/* register bit definitions. */ +#define hcu_mode_algo_shift 16 +#define hcu_mode_hmac_shift 22 + +#define hcu_status_busy bit(0) + +#define hcu_byte_order_swap bit(0) + +#define hcu_irq_hash_done bit(2) +#define hcu_irq_hash_err_mask (bit(3) | bit(1) | bit(0)) + +#define hcu_dma_irq_src_done bit(0) +#define hcu_dma_irq_sai_err bit(2) +#define hcu_dma_irq_bad_comp_err bit(3) +#define hcu_dma_irq_inbuf_rd_err bit(4) +#define hcu_dma_irq_inbuf_wd_err bit(5) +#define hcu_dma_irq_outbuf_wr_err bit(6) +#define hcu_dma_irq_outbuf_rd_err bit(7) +#define hcu_dma_irq_crd_err bit(8) +#define hcu_dma_irq_err_mask (hcu_dma_irq_sai_err | \ + hcu_dma_irq_bad_comp_err | \ + hcu_dma_irq_inbuf_rd_err | \ + hcu_dma_irq_inbuf_wd_err | \ + hcu_dma_irq_outbuf_wr_err | \ + hcu_dma_irq_outbuf_rd_err | \ + hcu_dma_irq_crd_err) + +#define hcu_dma_snoop_mask (0x7 << 28) +#define hcu_dma_src_ll_en bit(25) +#define hcu_dma_en bit(31) + +#define ocs_hcu_endianness_value 0x2a + +#define hcu_dma_msi_unmask bit(0) +#define hcu_dma_msi_disable 0 +#define hcu_irq_disable 0 + +#define ocs_hcu_start bit(0) +#define ocs_hcu_terminate bit(1) + +#define ocs_ll_dma_flag_terminate bit(31) + +#define ocs_hcu_hw_key_len_u32 (ocs_hcu_hw_key_len / sizeof(u32)) + +#define hcu_data_write_endianness_offset 26 + +#define ocs_hcu_num_chains_sha256_224_sm3 (sha256_digest_size / sizeof(u32)) +#define 
ocs_hcu_num_chains_sha384_512 (sha512_digest_size / sizeof(u32)) + +/* + * while polling on a busy hcu, wait maximum 200us between one check and the + * other. + */ +#define ocs_hcu_wait_busy_retry_delay_us 200 +/* wait on a busy hcu for maximum 1 second. */ +#define ocs_hcu_wait_busy_timeout_us 1000000 + +/** + * struct ocs_hcu_dma_list - an entry in an ocs dma linked list. + * @src_addr: source address of the data. + * @src_len: length of data to be fetched. + * @nxt_desc: next descriptor to fetch. + * @ll_flags: flags (freeze @ terminate) for the dma engine. + */ +struct ocs_hcu_dma_entry { + u32 src_addr; + u32 src_len; + u32 nxt_desc; + u32 ll_flags; +}; + +/** + * struct ocs_dma_list - ocs-specific dma linked list. + * @head: the head of the list (points to the array backing the list). + * @tail: the current tail of the list; null if the list is empty. + * @dma_addr: the dma address of @head (i.e., the dma address of the backing + * array). + * @max_nents: maximum number of entries in the list (i.e., number of elements + * in the backing array). + * + * the ocs dma list is an array-backed list of ocs dma descriptors. the array + * backing the list is allocated with dma_alloc_coherent() and pointed by + * @head. + */ +struct ocs_hcu_dma_list { + struct ocs_hcu_dma_entry *head; + struct ocs_hcu_dma_entry *tail; + dma_addr_t dma_addr; + size_t max_nents; +}; + +static inline u32 ocs_hcu_num_chains(enum ocs_hcu_algo algo) +{ + switch (algo) { + case ocs_hcu_algo_sha224: + case ocs_hcu_algo_sha256: + case ocs_hcu_algo_sm3: + return ocs_hcu_num_chains_sha256_224_sm3; + case ocs_hcu_algo_sha384: + case ocs_hcu_algo_sha512: + return ocs_hcu_num_chains_sha384_512; + default: + return 0; + }; +} + +static inline u32 ocs_hcu_digest_size(enum ocs_hcu_algo algo) +{ + switch (algo) { + case ocs_hcu_algo_sha224: + return sha224_digest_size; + case ocs_hcu_algo_sha256: + case ocs_hcu_algo_sm3: + /* sm3 shares the same block size. 
*/ + return sha256_digest_size; + case ocs_hcu_algo_sha384: + return sha384_digest_size; + case ocs_hcu_algo_sha512: + return sha512_digest_size; + default: + return 0; + } +} + +/** + * ocs_hcu_wait_busy() - wait for hcu ocs hardware to became usable. + * @hcu_dev: ocs hcu device to wait for. + * + * return: 0 if device free, -etimeout if device busy and internal timeout has + * expired. + */ +static int ocs_hcu_wait_busy(struct ocs_hcu_dev *hcu_dev) +{ + long val; + + return readl_poll_timeout(hcu_dev->io_base + ocs_hcu_status, val, + !(val & hcu_status_busy), + ocs_hcu_wait_busy_retry_delay_us, + ocs_hcu_wait_busy_timeout_us); +} + +static void ocs_hcu_done_irq_en(struct ocs_hcu_dev *hcu_dev) +{ + /* clear any pending interrupts. */ + writel(0xffffffff, hcu_dev->io_base + ocs_hcu_isr); + hcu_dev->irq_err = false; + /* enable error and hcu done interrupts. */ + writel(hcu_irq_hash_done | hcu_irq_hash_err_mask, + hcu_dev->io_base + ocs_hcu_ier); +} + +static void ocs_hcu_dma_irq_en(struct ocs_hcu_dev *hcu_dev) +{ + /* clear any pending interrupts. */ + writel(0xffffffff, hcu_dev->io_base + ocs_hcu_dma_msi_isr); + hcu_dev->irq_err = false; + /* only operating on dma source completion and error interrupts. */ + writel(hcu_dma_irq_err_mask | hcu_dma_irq_src_done, + hcu_dev->io_base + ocs_hcu_dma_msi_ier); + /* unmask */ + writel(hcu_dma_msi_unmask, hcu_dev->io_base + ocs_hcu_dma_msi_mask); +} + +static void ocs_hcu_irq_dis(struct ocs_hcu_dev *hcu_dev) +{ + writel(hcu_irq_disable, hcu_dev->io_base + ocs_hcu_ier); + writel(hcu_dma_msi_disable, hcu_dev->io_base + ocs_hcu_dma_msi_ier); +} + +static int ocs_hcu_wait_and_disable_irq(struct ocs_hcu_dev *hcu_dev) +{ + int rc; + + rc = wait_for_completion_interruptible(&hcu_dev->irq_done); + if (rc) + goto exit; + + if (hcu_dev->irq_err) { + /* unset flag and return error. 
*/ + hcu_dev->irq_err = false; + rc = -eio; + goto exit; + } + +exit: + ocs_hcu_irq_dis(hcu_dev); + + return rc; +} + +/** + * ocs_hcu_get_intermediate_data() - get intermediate data. + * @hcu_dev: the target hcu device. + * @data: where to store the intermediate. + * @algo: the algorithm being used. + * + * this function is used to save the current hashing process state in order to + * continue it in the future. + * + * note: once all data has been processed, the intermediate data actually + * contains the hashing result. so this function is also used to retrieve the + * final result of a hashing process. + * + * return: 0 on success, negative error code otherwise. + */ +static int ocs_hcu_get_intermediate_data(struct ocs_hcu_dev *hcu_dev, + struct ocs_hcu_idata *data, + enum ocs_hcu_algo algo) +{ + const int n = ocs_hcu_num_chains(algo); + u32 *chain; + int rc; + int i; + + /* data not requested. */ + if (!data) + return -einval; + + chain = (u32 *)data->digest; + + /* ensure that the ocs is no longer busy before reading the chains. */ + rc = ocs_hcu_wait_busy(hcu_dev); + if (rc) + return rc; + + /* + * this loops is safe because data->digest is an array of + * sha512_digest_size bytes and the maximum value returned by + * ocs_hcu_num_chains() is ocs_hcu_num_chains_sha384_512 which is equal + * to sha512_digest_size / sizeof(u32). + */ + for (i = 0; i < n; i++) + chain[i] = readl(hcu_dev->io_base + ocs_hcu_chain); + + data->msg_len_lo = readl(hcu_dev->io_base + ocs_hcu_msg_len_lo); + data->msg_len_hi = readl(hcu_dev->io_base + ocs_hcu_msg_len_hi); + + return 0; +} + +/** + * ocs_hcu_set_intermediate_data() - set intermediate data. + * @hcu_dev: the target hcu device. + * @data: the intermediate data to be set. + * @algo: the algorithm being used. + * + * this function is used to continue a previous hashing process. 
+ */ +static void ocs_hcu_set_intermediate_data(struct ocs_hcu_dev *hcu_dev, + const struct ocs_hcu_idata *data, + enum ocs_hcu_algo algo) +{ + const int n = ocs_hcu_num_chains(algo); + u32 *chain = (u32 *)data->digest; + int i; + + /* + * this loops is safe because data->digest is an array of + * sha512_digest_size bytes and the maximum value returned by + * ocs_hcu_num_chains() is ocs_hcu_num_chains_sha384_512 which is equal + * to sha512_digest_size / sizeof(u32). + */ + for (i = 0; i < n; i++) + writel(chain[i], hcu_dev->io_base + ocs_hcu_chain); + + writel(data->msg_len_lo, hcu_dev->io_base + ocs_hcu_msg_len_lo); + writel(data->msg_len_hi, hcu_dev->io_base + ocs_hcu_msg_len_hi); +} + +static int ocs_hcu_get_digest(struct ocs_hcu_dev *hcu_dev, + enum ocs_hcu_algo algo, u8 *dgst, size_t dgst_len) +{ + u32 *chain; + int rc; + int i; + + if (!dgst) + return -einval; + + /* length of the output buffer must match the algo digest size. */ + if (dgst_len != ocs_hcu_digest_size(algo)) + return -einval; + + /* ensure that the ocs is no longer busy before reading the chains. */ + rc = ocs_hcu_wait_busy(hcu_dev); + if (rc) + return rc; + + chain = (u32 *)dgst; + for (i = 0; i < dgst_len / sizeof(u32); i++) + chain[i] = readl(hcu_dev->io_base + ocs_hcu_chain); + + return 0; +} + +/** + * ocs_hcu_hw_cfg() - configure the hcu hardware. + * @hcu_dev: the hcu device to configure. + * @algo: the algorithm to be used by the hcu device. + * @use_hmac: whether or not hw hmac should be used. + * + * return: 0 on success, negative error code otherwise. + */ +static int ocs_hcu_hw_cfg(struct ocs_hcu_dev *hcu_dev, enum ocs_hcu_algo algo, + bool use_hmac) +{ + u32 cfg; + int rc; + + if (algo != ocs_hcu_algo_sha256 && algo != ocs_hcu_algo_sha224 && + algo != ocs_hcu_algo_sha384 && algo != ocs_hcu_algo_sha512 && + algo != ocs_hcu_algo_sm3) + return -einval; + + rc = ocs_hcu_wait_busy(hcu_dev); + if (rc) + return rc; + + /* ensure interrupts are disabled. 
*/ + ocs_hcu_irq_dis(hcu_dev); + + /* configure endianness, hashing algorithm and hw hmac (if needed) */ + cfg = ocs_hcu_endianness_value << hcu_data_write_endianness_offset; + cfg |= algo << hcu_mode_algo_shift; + if (use_hmac) + cfg |= bit(hcu_mode_hmac_shift); + + writel(cfg, hcu_dev->io_base + ocs_hcu_mode); + + return 0; +} + +/** + * ocs_hcu_ll_dma_start() - start ocs hcu hashing via dma + * @hcu_dev: the ocs hcu device to use. + * @dma_list: the ocs dma list mapping the data to hash. + * @finalize: whether or not this is the last hashing operation and therefore + * the final hash should be compute even if data is not + * block-aligned. + * + * return: 0 on success, negative error code otherwise. + */ +static int ocs_hcu_ll_dma_start(struct ocs_hcu_dev *hcu_dev, + const struct ocs_hcu_dma_list *dma_list, + bool finalize) +{ + u32 cfg = hcu_dma_snoop_mask | hcu_dma_src_ll_en | hcu_dma_en; + int rc; + + if (!dma_list) + return -einval; + + /* + * for final requests we use hcu_done irq to be notified when all input + * data has been processed by the hcu; however, we cannot do so for + * non-final requests, because we don't get a hcu_done irq when we + * don't terminate the operation. + * + * therefore, for non-final requests, we use the dma irq, which + * triggers when dma has finishing feeding all the input data to the + * hcu, but the hcu may still be processing it. this is fine, since we + * will wait for the hcu processing to be completed when we try to read + * intermediate results, in ocs_hcu_get_intermediate_data(). 
+ */ + if (finalize) + ocs_hcu_done_irq_en(hcu_dev); + else + ocs_hcu_dma_irq_en(hcu_dev); + + reinit_completion(&hcu_dev->irq_done); + writel(dma_list->dma_addr, hcu_dev->io_base + ocs_hcu_dma_next_src_descr); + writel(0, hcu_dev->io_base + ocs_hcu_dma_src_size); + writel(0, hcu_dev->io_base + ocs_hcu_dma_dst_size); + + writel(ocs_hcu_start, hcu_dev->io_base + ocs_hcu_operation); + + writel(cfg, hcu_dev->io_base + ocs_hcu_dma_dma_mode); + + if (finalize) + writel(ocs_hcu_terminate, hcu_dev->io_base + ocs_hcu_operation); + + rc = ocs_hcu_wait_and_disable_irq(hcu_dev); + if (rc) + return rc; + + return 0; +} + +struct ocs_hcu_dma_list *ocs_hcu_dma_list_alloc(struct ocs_hcu_dev *hcu_dev, + int max_nents) +{ + struct ocs_hcu_dma_list *dma_list; + + dma_list = kmalloc(sizeof(*dma_list), gfp_kernel); + if (!dma_list) + return null; + + /* total size of the dma list to allocate. */ + dma_list->head = dma_alloc_coherent(hcu_dev->dev, + sizeof(*dma_list->head) * max_nents, + &dma_list->dma_addr, gfp_kernel); + if (!dma_list->head) { + kfree(dma_list); + return null; + } + dma_list->max_nents = max_nents; + dma_list->tail = null; + + return dma_list; +} + +void ocs_hcu_dma_list_free(struct ocs_hcu_dev *hcu_dev, + struct ocs_hcu_dma_list *dma_list) +{ + if (!dma_list) + return; + + dma_free_coherent(hcu_dev->dev, + sizeof(*dma_list->head) * dma_list->max_nents, + dma_list->head, dma_list->dma_addr); + + kfree(dma_list); +} + +/* add a new dma entry at the end of the ocs dma list. */ +int ocs_hcu_dma_list_add_tail(struct ocs_hcu_dev *hcu_dev, + struct ocs_hcu_dma_list *dma_list, + dma_addr_t addr, u32 len) +{ + struct device *dev = hcu_dev->dev; + struct ocs_hcu_dma_entry *old_tail; + struct ocs_hcu_dma_entry *new_tail; + + if (!len) + return 0; + + if (!dma_list) + return -einval; + + if (addr & ~ocs_hcu_dma_bit_mask) { + dev_err(dev, + "unexpected error: invalid dma address for ocs hcu "); + return -einval; + } + + old_tail = dma_list->tail; + new_tail = old_tail ? 
old_tail + 1 : dma_list->head; + + /* check if list is full. */ + if (new_tail - dma_list->head >= dma_list->max_nents) + return -enomem; + + /* + * if there was an old tail (i.e., this is not the first element we are + * adding), un-terminate the old tail and make it point to the new one. + */ + if (old_tail) { + old_tail->ll_flags &= ~ocs_ll_dma_flag_terminate; + /* + * the old tail 'nxt_desc' must point to the dma address of the + * new tail. + */ + old_tail->nxt_desc = dma_list->dma_addr + + sizeof(*dma_list->tail) * (new_tail - + dma_list->head); + } + + new_tail->src_addr = (u32)addr; + new_tail->src_len = (u32)len; + new_tail->ll_flags = ocs_ll_dma_flag_terminate; + new_tail->nxt_desc = 0; + + /* update list tail with new tail. */ + dma_list->tail = new_tail; + + return 0; +} + +/** + * ocs_hcu_hash_init() - initialize hash operation context. + * @ctx: the context to initialize. + * @algo: the hashing algorithm to use. + * + * return: 0 on success, negative error code otherwise. + */ +int ocs_hcu_hash_init(struct ocs_hcu_hash_ctx *ctx, enum ocs_hcu_algo algo) +{ + if (!ctx) + return -einval; + + ctx->algo = algo; + ctx->idata.msg_len_lo = 0; + ctx->idata.msg_len_hi = 0; + /* no need to set idata.digest to 0. */ + + return 0; +} + +/** + * ocs_hcu_digest() - perform a hashing iteration. + * @hcu_dev: the ocs hcu device to use. + * @ctx: the ocs hcu hashing context. + * @dma_list: the ocs dma list mapping the input data to process. + * + * return: 0 on success; negative error code otherwise. + */ +int ocs_hcu_hash_update(struct ocs_hcu_dev *hcu_dev, + struct ocs_hcu_hash_ctx *ctx, + const struct ocs_hcu_dma_list *dma_list) +{ + int rc; + + if (!hcu_dev || !ctx) + return -einval; + + /* configure the hardware for the current request. */ + rc = ocs_hcu_hw_cfg(hcu_dev, ctx->algo, false); + if (rc) + return rc; + + /* if we already processed some data, idata needs to be set. 
*/ + if (ctx->idata.msg_len_lo || ctx->idata.msg_len_hi) + ocs_hcu_set_intermediate_data(hcu_dev, &ctx->idata, ctx->algo); + + /* start linked-list dma hashing. */ + rc = ocs_hcu_ll_dma_start(hcu_dev, dma_list, false); + if (rc) + return rc; + + /* update idata and return. */ + return ocs_hcu_get_intermediate_data(hcu_dev, &ctx->idata, ctx->algo); +} + +/** + * ocs_hcu_hash_final() - update and finalize hash computation. + * @hcu_dev: the ocs hcu device to use. + * @ctx: the ocs hcu hashing context. + * @dma_list: the ocs dma list mapping the input data to process. + * @dgst: the buffer where to save the computed digest. + * @dgst_len: the length of @dgst. + * + * return: 0 on success; negative error code otherwise. + */ +int ocs_hcu_hash_finup(struct ocs_hcu_dev *hcu_dev, + const struct ocs_hcu_hash_ctx *ctx, + const struct ocs_hcu_dma_list *dma_list, + u8 *dgst, size_t dgst_len) +{ + int rc; + + if (!hcu_dev || !ctx) + return -einval; + + /* configure the hardware for the current request. */ + rc = ocs_hcu_hw_cfg(hcu_dev, ctx->algo, false); + if (rc) + return rc; + + /* if we already processed some data, idata needs to be set. */ + if (ctx->idata.msg_len_lo || ctx->idata.msg_len_hi) + ocs_hcu_set_intermediate_data(hcu_dev, &ctx->idata, ctx->algo); + + /* start linked-list dma hashing. */ + rc = ocs_hcu_ll_dma_start(hcu_dev, dma_list, true); + if (rc) + return rc; + + /* get digest and return. */ + return ocs_hcu_get_digest(hcu_dev, ctx->algo, dgst, dgst_len); +} + +/** + * ocs_hcu_hash_final() - finalize hash computation. + * @hcu_dev: the ocs hcu device to use. + * @ctx: the ocs hcu hashing context. + * @dgst: the buffer where to save the computed digest. + * @dgst_len: the length of @dgst. + * + * return: 0 on success; negative error code otherwise. 
+ */ +int ocs_hcu_hash_final(struct ocs_hcu_dev *hcu_dev, + const struct ocs_hcu_hash_ctx *ctx, u8 *dgst, + size_t dgst_len) +{ + int rc; + + if (!hcu_dev || !ctx) + return -einval; + + /* configure the hardware for the current request. */ + rc = ocs_hcu_hw_cfg(hcu_dev, ctx->algo, false); + if (rc) + return rc; + + /* if we already processed some data, idata needs to be set. */ + if (ctx->idata.msg_len_lo || ctx->idata.msg_len_hi) + ocs_hcu_set_intermediate_data(hcu_dev, &ctx->idata, ctx->algo); + + /* + * enable hcu interrupts, so that hcu_done will be triggered once the + * final hash is computed. + */ + ocs_hcu_done_irq_en(hcu_dev); + reinit_completion(&hcu_dev->irq_done); + writel(ocs_hcu_terminate, hcu_dev->io_base + ocs_hcu_operation); + + rc = ocs_hcu_wait_and_disable_irq(hcu_dev); + if (rc) + return rc; + + /* get digest and return. */ + return ocs_hcu_get_digest(hcu_dev, ctx->algo, dgst, dgst_len); +} + +irqreturn_t ocs_hcu_irq_handler(int irq, void *dev_id) +{ + struct ocs_hcu_dev *hcu_dev = dev_id; + u32 hcu_irq; + u32 dma_irq; + + /* read and clear the hcu interrupt. */ + hcu_irq = readl(hcu_dev->io_base + ocs_hcu_isr); + writel(hcu_irq, hcu_dev->io_base + ocs_hcu_isr); + + /* read and clear the hcu dma interrupt. */ + dma_irq = readl(hcu_dev->io_base + ocs_hcu_dma_msi_isr); + writel(dma_irq, hcu_dev->io_base + ocs_hcu_dma_msi_isr); + + /* check for errors. */ + if (hcu_irq & hcu_irq_hash_err_mask || dma_irq & hcu_dma_irq_err_mask) { + hcu_dev->irq_err = true; + goto complete; + } + + /* check for done irqs. */ + if (hcu_irq & hcu_irq_hash_done || dma_irq & hcu_dma_irq_src_done) + goto complete; + + return irq_none; + +complete: + complete(&hcu_dev->irq_done); + + return irq_handled; +} + +module_license("gpl"); diff --git a/drivers/crypto/keembay/ocs-hcu.h b/drivers/crypto/keembay/ocs-hcu.h --- /dev/null +++ b/drivers/crypto/keembay/ocs-hcu.h +/* spdx-license-identifier: gpl-2.0-only */ +/* + * intel keem bay ocs hcu crypto driver. 
+ * + * copyright (c) 2018-2020 intel corporation + */ + +#include <linux/dma-mapping.h> + +#ifndef _crypto_ocs_hcu_h +#define _crypto_ocs_hcu_h + +#define ocs_hcu_dma_bit_mask dma_bit_mask(32) + +#define ocs_hcu_hw_key_len 64 + +struct ocs_hcu_dma_list; + +enum ocs_hcu_algo { + ocs_hcu_algo_sha256 = 2, + ocs_hcu_algo_sha224 = 3, + ocs_hcu_algo_sha384 = 4, + ocs_hcu_algo_sha512 = 5, + ocs_hcu_algo_sm3 = 6, +}; + +/** + * struct ocs_hcu_dev - ocs hcu device context. + * @list: list of device contexts. + * @dev: ocs hcu device. + * @io_base: base address of ocs hcu registers. + * @engine: crypto engine for the device. + * @irq: irq number. + * @irq_done: completion for irq. + * @irq_err: flag indicating an irq error has happened. + */ +struct ocs_hcu_dev { + struct list_head list; + struct device *dev; + void __iomem *io_base; + struct crypto_engine *engine; + int irq; + struct completion irq_done; + bool irq_err; +}; + +/** + * struct ocs_hcu_idata - intermediate data generated by the hcu. + * @msg_len_lo: length of data the hcu has operated on in bits, low 32b. + * @msg_len_hi: length of data the hcu has operated on in bits, high 32b. + * @digest: the digest read from the hcu. if the hcu is terminated, it will + * contain the actual hash digest. otherwise it is the intermediate + * state. + */ +struct ocs_hcu_idata { + u32 msg_len_lo; + u32 msg_len_hi; + u8 digest[sha512_digest_size]; +}; + +/** + * struct ocs_hcu_hash_ctx - context for ocs hcu hashing operation. + * @algo: the hashing algorithm being used. + * @idata: the current intermediate data. 
+ */ +struct ocs_hcu_hash_ctx { + enum ocs_hcu_algo algo; + struct ocs_hcu_idata idata; +}; + +irqreturn_t ocs_hcu_irq_handler(int irq, void *dev_id); + +struct ocs_hcu_dma_list *ocs_hcu_dma_list_alloc(struct ocs_hcu_dev *hcu_dev, + int max_nents); + +void ocs_hcu_dma_list_free(struct ocs_hcu_dev *hcu_dev, + struct ocs_hcu_dma_list *dma_list); + +int ocs_hcu_dma_list_add_tail(struct ocs_hcu_dev *hcu_dev, + struct ocs_hcu_dma_list *dma_list, + dma_addr_t addr, u32 len); + +int ocs_hcu_hash_init(struct ocs_hcu_hash_ctx *ctx, enum ocs_hcu_algo algo); + +int ocs_hcu_hash_update(struct ocs_hcu_dev *hcu_dev, + struct ocs_hcu_hash_ctx *ctx, + const struct ocs_hcu_dma_list *dma_list); + +int ocs_hcu_hash_finup(struct ocs_hcu_dev *hcu_dev, + const struct ocs_hcu_hash_ctx *ctx, + const struct ocs_hcu_dma_list *dma_list, + u8 *dgst, size_t dgst_len); + +int ocs_hcu_hash_final(struct ocs_hcu_dev *hcu_dev, + const struct ocs_hcu_hash_ctx *ctx, u8 *dgst, + size_t dgst_len); + +#endif /* _crypto_ocs_hcu_h */
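The `ocs_hcu_dma_list` in the diff above is an array-backed linked list of DMA descriptors: entries live contiguously in one coherent allocation, the current tail carries a TERMINATE flag, and appending un-terminates the old tail and points its `nxt_desc` at the new tail's bus address. A minimal user-space sketch of that append logic — plain `calloc()` and an arbitrary base value stand in for `dma_alloc_coherent()` and the real bus address, both assumptions for illustration only:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* User-space model of the driver's array-backed OCS DMA descriptor list.
 * In the kernel the backing array comes from dma_alloc_coherent() and
 * dma_addr is a real bus address; here calloc() and a made-up base value
 * stand in for both (illustration only). */
#define LL_FLAG_TERMINATE (1u << 31)

struct dma_entry {
	uint32_t src_addr;
	uint32_t src_len;
	uint32_t nxt_desc;
	uint32_t ll_flags;
};

struct dma_list {
	struct dma_entry *head;	/* backing array */
	struct dma_entry *tail;	/* NULL while the list is empty */
	uint32_t dma_addr;	/* "bus address" of head */
	size_t max_nents;
};

/* Append one descriptor, mirroring ocs_hcu_dma_list_add_tail(). */
static int dma_list_add_tail(struct dma_list *l, uint32_t addr, uint32_t len)
{
	struct dma_entry *new_tail = l->tail ? l->tail + 1 : l->head;

	if (new_tail - l->head >= (ptrdiff_t)l->max_nents)
		return -1;	/* list full */

	if (l->tail) {
		/* Un-terminate the old tail and chain it to the new one. */
		l->tail->ll_flags &= ~LL_FLAG_TERMINATE;
		l->tail->nxt_desc = l->dma_addr +
			(uint32_t)(sizeof(*l->head) *
				   (size_t)(new_tail - l->head));
	}

	new_tail->src_addr = addr;
	new_tail->src_len = len;
	new_tail->ll_flags = LL_FLAG_TERMINATE;	/* new tail ends the chain */
	new_tail->nxt_desc = 0;
	l->tail = new_tail;

	return 0;
}
```

Because the entries sit in one contiguous allocation, `nxt_desc` can be computed from the tail's array index alone, which is what the driver's `ocs_hcu_dma_list_add_tail()` does.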
|
Cryptography hardware acceleration
|
472b04444cd39e16ba54987b2e901a79cf175463
|
Declan Murphy
|
drivers
|
crypto
|
keembay
|
crypto: keembay-ocs-hcu - Add HMAC support
|
Add HMAC support to the Keem Bay OCS HCU driver, thus making it provide the following additional transformations: - hmac(sha256) - hmac(sha384) - hmac(sha512) - hmac(sm3)
|
This release allows mapping a UID to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the ACRN hypervisor designed for embedded systems; Btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager NFS writes; support for a thermal power management framework to control the surface temperature of embedded devices in a unified way; NAPI polling can be moved to a kernel thread; and support for non-blocking path lookups. As always, there are many other features, new drivers, improvements and fixes.
|
Add Keem Bay OCS HCU driver
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['h', 'kconfig', 'c']
| 4
| 544
| 9
|
- hmac(sha256) - hmac(sha384) - hmac(sha512) - hmac(sm3) --- diff --git a/drivers/crypto/keembay/kconfig b/drivers/crypto/keembay/kconfig --- a/drivers/crypto/keembay/kconfig +++ b/drivers/crypto/keembay/kconfig - sm3. + sm3, as well as the hmac variant of these algorithms. diff --git a/drivers/crypto/keembay/keembay-ocs-hcu-core.c b/drivers/crypto/keembay/keembay-ocs-hcu-core.c --- a/drivers/crypto/keembay/keembay-ocs-hcu-core.c +++ b/drivers/crypto/keembay/keembay-ocs-hcu-core.c +#include <crypto/hmac.h> +/* flag marking a hmac request. */ +#define req_flags_hmac bit(1) +/* flag set when hw hmac is being used. */ +#define req_flags_hmac_hw bit(2) +/* flag set when sw hmac is being used. */ +#define req_flags_hmac_sw bit(3) + * @key: the key (used only for hmac transformations). + * @key_len: the length of the key. + * @is_hmac_tfm: whether or not this is a hmac transformation. + u8 key[sha512_block_size]; + size_t key_len; + bool is_hmac_tfm; - * @buffer: buffer to store partial block of data. + * @buffer: buffer to store: partial block of data and sw hmac + * artifacts (ipad, opad, etc.). - u8 buffer[sha512_block_size]; + /* + * buffer is double the block size because we need space for sw hmac + * artifacts, i.e: + * - ipad (1 block) + a possible partial block of data. + * - opad (1 block) + digest of h(k ^ ipad || m) + */ + u8 buffer[2 * sha512_block_size]; +static int prepare_ipad(struct ahash_request *req) +{ + struct ocs_hcu_rctx *rctx = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct ocs_hcu_ctx *ctx = crypto_ahash_ctx(tfm); + int i; + + warn(rctx->buf_cnt, "%s: context buffer is not empty ", __func__); + warn(!(rctx->flags & req_flags_hmac_sw), + "%s: hmac_sw flag is not set ", __func__); + /* + * key length must be equal to block size. if key is shorter, + * we pad it with zero (note: key cannot be longer, since + * longer keys are hashed by kmb_ocs_hcu_setkey()). 
+ */ + if (ctx->key_len > rctx->blk_sz) { + warn("%s: invalid key length in tfm context ", __func__); + return -einval; + } + memzero_explicit(&ctx->key[ctx->key_len], + rctx->blk_sz - ctx->key_len); + ctx->key_len = rctx->blk_sz; + /* + * prepare ipad for hmac. only done for first block. + * hmac(k,m) = h(k ^ opad || h(k ^ ipad || m)) + * k ^ ipad will be first hashed block. + * k ^ opad will be calculated in the final request. + * only needed if not using hw hmac. + */ + for (i = 0; i < rctx->blk_sz; i++) + rctx->buffer[i] = ctx->key[i] ^ hmac_ipad_value; + rctx->buf_cnt = rctx->blk_sz; + + return 0; +} + + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct ocs_hcu_ctx *tctx = crypto_ahash_ctx(tfm); + int i; + /* + * if hardware hmac flag is set, perform hmac in hardware. + * + * note: this flag implies req_final && kmb_get_total_data(rctx) + */ + if (rctx->flags & req_flags_hmac_hw) { + /* map input data into the hcu dma linked list. */ + rc = kmb_ocs_dma_prepare(req); + if (rc) + goto error; + + rc = ocs_hcu_hmac(hcu_dev, rctx->algo, tctx->key, tctx->key_len, + rctx->dma_list, req->result, rctx->dig_sz); + + /* unmap data and free dma list regardless of return code. */ + kmb_ocs_hcu_dma_cleanup(req, rctx); + + /* process previous return code. */ + if (rc) + goto error; + + goto done; + } + + /* + * if we are finalizing a sw hmac request, we just computed the result + * of: h(k ^ ipad || m). + * + * we now need to complete the hmac calculation with the opad step, + * that is, we need to compute h(k ^ opad || digest), where digest is + * the digest we just obtained, i.e., h(k ^ ipad || m). + */ + if (rctx->flags & req_flags_hmac_sw) { + /* + * compute k ^ opad and store it in the request buffer (which + * is not used anymore at this point). + * note: key has been padded / hashed already (so keylen == + * blksz) . 
+ */ + warn_on(tctx->key_len != rctx->blk_sz); + for (i = 0; i < rctx->blk_sz; i++) + rctx->buffer[i] = tctx->key[i] ^ hmac_opad_value; + /* now append the digest to the rest of the buffer. */ + for (i = 0; (i < rctx->dig_sz); i++) + rctx->buffer[rctx->blk_sz + i] = req->result[i]; + + /* now hash the buffer to obtain the final hmac. */ + rc = ocs_hcu_digest(hcu_dev, rctx->algo, rctx->buffer, + rctx->blk_sz + rctx->dig_sz, req->result, + rctx->dig_sz); + if (rc) + goto error; + } + + /* if this a hmac request, set hmac flag. */ + if (ctx->is_hmac_tfm) + rctx->flags |= req_flags_hmac; + + int rc; + /* + * if we are doing hmac, then we must use sw-assisted hmac, since hw + * hmac does not support context switching (there it can only be used + * with finup() or digest()). + */ + if (rctx->flags & req_flags_hmac && + !(rctx->flags & req_flags_hmac_sw)) { + rctx->flags |= req_flags_hmac_sw; + rc = prepare_ipad(req); + if (rc) + return rc; + } + +/* common logic for kmb_ocs_hcu_final() and kmb_ocs_hcu_finup(). */ +static int kmb_ocs_hcu_fin_common(struct ahash_request *req) +{ + struct ocs_hcu_rctx *rctx = ahash_request_ctx(req); + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct ocs_hcu_ctx *ctx = crypto_ahash_ctx(tfm); + int rc; + + rctx->flags |= req_final; + + /* + * if this is a hmac request and, so far, we didn't have to switch to + * sw hmac, check if we can use hw hmac. + */ + if (rctx->flags & req_flags_hmac && + !(rctx->flags & req_flags_hmac_sw)) { + /* + * if we are here, it means we never processed any data so far, + * so we can use hw hmac, but only if there is some data to + * process (since ocs hw mac does not support zero-length + * messages) and the key length is supported by the hardware + * (ocs hcu hw only supports length <= 64); if hw hmac cannot + * be used, fall back to sw-assisted hmac. 
+ */ + if (kmb_get_total_data(rctx) && + ctx->key_len <= ocs_hcu_hw_key_len) { + rctx->flags |= req_flags_hmac_hw; + } else { + rctx->flags |= req_flags_hmac_sw; + rc = prepare_ipad(req); + if (rc) + return rc; + } + } + + return kmb_ocs_hcu_handle_queue(req); +} + - rctx->flags |= req_final; - - return kmb_ocs_hcu_handle_queue(req); + return kmb_ocs_hcu_fin_common(req); - rctx->flags |= req_final; - - return kmb_ocs_hcu_handle_queue(req); + return kmb_ocs_hcu_fin_common(req); +static int kmb_ocs_hcu_setkey(struct crypto_ahash *tfm, const u8 *key, + unsigned int keylen) +{ + unsigned int digestsize = crypto_ahash_digestsize(tfm); + struct ocs_hcu_ctx *ctx = crypto_ahash_ctx(tfm); + size_t blk_sz = crypto_ahash_blocksize(tfm); + struct crypto_ahash *ahash_tfm; + struct ahash_request *req; + struct crypto_wait wait; + struct scatterlist sg; + const char *alg_name; + int rc; + + /* + * key length must be equal to block size: + * - if key is shorter, we are done for now (the key will be padded + * later on); this is to maximize the use of hw hmac (which works + * only for keys <= 64 bytes). + * - if key is longer, we hash it. + */ + if (keylen <= blk_sz) { + memcpy(ctx->key, key, keylen); + ctx->key_len = keylen; + return 0; + } + + switch (digestsize) { + case sha256_digest_size: + alg_name = ctx->is_sm3_tfm ? 
"sm3-keembay-ocs" : + "sha256-keembay-ocs"; + break; + case sha384_digest_size: + alg_name = "sha384-keembay-ocs"; + break; + case sha512_digest_size: + alg_name = "sha512-keembay-ocs"; + break; + default: + return -einval; + } + + ahash_tfm = crypto_alloc_ahash(alg_name, 0, 0); + if (is_err(ahash_tfm)) + return ptr_err(ahash_tfm); + + req = ahash_request_alloc(ahash_tfm, gfp_kernel); + if (!req) { + rc = -enomem; + goto err_free_ahash; + } + + crypto_init_wait(&wait); + ahash_request_set_callback(req, crypto_tfm_req_may_backlog, + crypto_req_done, &wait); + crypto_ahash_clear_flags(ahash_tfm, ~0); + + sg_init_one(&sg, key, keylen); + ahash_request_set_crypt(req, &sg, ctx->key, keylen); + + rc = crypto_wait_req(crypto_ahash_digest(req), &wait); + if (rc == 0) + ctx->key_len = digestsize; + + ahash_request_free(req); +err_free_ahash: + crypto_free_ahash(ahash_tfm); + + return rc; +} + +static int kmb_ocs_hcu_hmac_sm3_cra_init(struct crypto_tfm *tfm) +{ + struct ocs_hcu_ctx *ctx = crypto_tfm_ctx(tfm); + + __cra_init(tfm, ctx); + + ctx->is_sm3_tfm = true; + ctx->is_hmac_tfm = true; + + return 0; +} + +static int kmb_ocs_hcu_hmac_cra_init(struct crypto_tfm *tfm) +{ + struct ocs_hcu_ctx *ctx = crypto_tfm_ctx(tfm); + + __cra_init(tfm, ctx); + + ctx->is_hmac_tfm = true; + + return 0; +} + +/* function called when 'tfm' is de-initialized. */ +static void kmb_ocs_hcu_hmac_cra_exit(struct crypto_tfm *tfm) +{ + struct ocs_hcu_ctx *ctx = crypto_tfm_ctx(tfm); + + /* clear the key. 
*/ + memzero_explicit(ctx->key, sizeof(ctx->key)); +} + +{ + .init = kmb_ocs_hcu_init, + .update = kmb_ocs_hcu_update, + .final = kmb_ocs_hcu_final, + .finup = kmb_ocs_hcu_finup, + .digest = kmb_ocs_hcu_digest, + .export = kmb_ocs_hcu_export, + .import = kmb_ocs_hcu_import, + .setkey = kmb_ocs_hcu_setkey, + .halg = { + .digestsize = sha256_digest_size, + .statesize = sizeof(struct ocs_hcu_rctx), + .base = { + .cra_name = "hmac(sha256)", + .cra_driver_name = "hmac-sha256-keembay-ocs", + .cra_priority = 255, + .cra_flags = crypto_alg_async, + .cra_blocksize = sha256_block_size, + .cra_ctxsize = sizeof(struct ocs_hcu_ctx), + .cra_alignmask = 0, + .cra_module = this_module, + .cra_init = kmb_ocs_hcu_hmac_cra_init, + .cra_exit = kmb_ocs_hcu_hmac_cra_exit, + } + } +}, +{ + .init = kmb_ocs_hcu_init, + .update = kmb_ocs_hcu_update, + .final = kmb_ocs_hcu_final, + .finup = kmb_ocs_hcu_finup, + .digest = kmb_ocs_hcu_digest, + .export = kmb_ocs_hcu_export, + .import = kmb_ocs_hcu_import, + .setkey = kmb_ocs_hcu_setkey, + .halg = { + .digestsize = sm3_digest_size, + .statesize = sizeof(struct ocs_hcu_rctx), + .base = { + .cra_name = "hmac(sm3)", + .cra_driver_name = "hmac-sm3-keembay-ocs", + .cra_priority = 255, + .cra_flags = crypto_alg_async, + .cra_blocksize = sm3_block_size, + .cra_ctxsize = sizeof(struct ocs_hcu_ctx), + .cra_alignmask = 0, + .cra_module = this_module, + .cra_init = kmb_ocs_hcu_hmac_sm3_cra_init, + .cra_exit = kmb_ocs_hcu_hmac_cra_exit, + } + } +}, +{ + .init = kmb_ocs_hcu_init, + .update = kmb_ocs_hcu_update, + .final = kmb_ocs_hcu_final, + .finup = kmb_ocs_hcu_finup, + .digest = kmb_ocs_hcu_digest, + .export = kmb_ocs_hcu_export, + .import = kmb_ocs_hcu_import, + .setkey = kmb_ocs_hcu_setkey, + .halg = { + .digestsize = sha384_digest_size, + .statesize = sizeof(struct ocs_hcu_rctx), + .base = { + .cra_name = "hmac(sha384)", + .cra_driver_name = "hmac-sha384-keembay-ocs", + .cra_priority = 255, + .cra_flags = crypto_alg_async, + .cra_blocksize = 
sha384_block_size, + .cra_ctxsize = sizeof(struct ocs_hcu_ctx), + .cra_alignmask = 0, + .cra_module = this_module, + .cra_init = kmb_ocs_hcu_hmac_cra_init, + .cra_exit = kmb_ocs_hcu_hmac_cra_exit, + } + } +}, +{ + .init = kmb_ocs_hcu_init, + .update = kmb_ocs_hcu_update, + .final = kmb_ocs_hcu_final, + .finup = kmb_ocs_hcu_finup, + .digest = kmb_ocs_hcu_digest, + .export = kmb_ocs_hcu_export, + .import = kmb_ocs_hcu_import, + .setkey = kmb_ocs_hcu_setkey, + .halg = { + .digestsize = sha512_digest_size, + .statesize = sizeof(struct ocs_hcu_rctx), + .base = { + .cra_name = "hmac(sha512)", + .cra_driver_name = "hmac-sha512-keembay-ocs", + .cra_priority = 255, + .cra_flags = crypto_alg_async, + .cra_blocksize = sha512_block_size, + .cra_ctxsize = sizeof(struct ocs_hcu_ctx), + .cra_alignmask = 0, + .cra_module = this_module, + .cra_init = kmb_ocs_hcu_hmac_cra_init, + .cra_exit = kmb_ocs_hcu_hmac_cra_exit, + } + } +}, diff --git a/drivers/crypto/keembay/ocs-hcu.c b/drivers/crypto/keembay/ocs-hcu.c --- a/drivers/crypto/keembay/ocs-hcu.c +++ b/drivers/crypto/keembay/ocs-hcu.c +/** + * ocs_hcu_clear_key() - clear key stored in ocs hmac key registers. + * @hcu_dev: the ocs hcu device whose key registers should be cleared. + */ +static void ocs_hcu_clear_key(struct ocs_hcu_dev *hcu_dev) +{ + int reg_off; + + /* clear ocs_hcu_key_[0..15] */ + for (reg_off = 0; reg_off < ocs_hcu_hw_key_len; reg_off += sizeof(u32)) + writel(0, hcu_dev->io_base + ocs_hcu_key_0 + reg_off); +} + +/** + * ocs_hcu_write_key() - write key to ocs hmac key registers. + * @hcu_dev: the ocs hcu device the key should be written to. + * @key: the key to be written. + * @len: the size of the key to write. it must be ocs_hcu_hw_key_len. + * + * return: 0 on success, negative error code otherwise. 
+ */ +static int ocs_hcu_write_key(struct ocs_hcu_dev *hcu_dev, const u8 *key, size_t len) +{ + u32 key_u32[ocs_hcu_hw_key_len_u32]; + int i; + + if (len > ocs_hcu_hw_key_len) + return -einval; + + /* copy key into temporary u32 array. */ + memcpy(key_u32, key, len); + + /* + * hardware requires all the bytes of the hw key vector to be + * written. so pad with zero until we reach ocs_hcu_hw_key_len. + */ + memzero_explicit((u8 *)key_u32 + len, ocs_hcu_hw_key_len - len); + + /* + * ocs hardware expects the msb of the key to be written at the highest + * address of the hcu key vector; in other word, the key must be + * written in reverse order. + * + * therefore, we first enable byte swapping for the hcu key vector; + * so that bytes of 32-bit word written to ocs_hcu_key_[0..15] will be + * swapped: + * 3 <---> 0, 2 <---> 1. + */ + writel(hcu_byte_order_swap, + hcu_dev->io_base + ocs_hcu_key_byte_order_cfg); + /* + * and then we write the 32-bit words composing the key starting from + * the end of the key. + */ + for (i = 0; i < ocs_hcu_hw_key_len_u32; i++) + writel(key_u32[ocs_hcu_hw_key_len_u32 - 1 - i], + hcu_dev->io_base + ocs_hcu_key_0 + (sizeof(u32) * i)); + + memzero_explicit(key_u32, ocs_hcu_hw_key_len); + + return 0; +} + +/** + * ocs_hcu_digest() - compute hash digest. + * @hcu_dev: the ocs hcu device to use. + * @algo: the hash algorithm to use. + * @data: the input data to process. + * @data_len: the length of @data. + * @dgst: the buffer where to save the computed digest. + * @dgst_len: the length of @dgst. + * + * return: 0 on success; negative error code otherwise. + */ +int ocs_hcu_digest(struct ocs_hcu_dev *hcu_dev, enum ocs_hcu_algo algo, + void *data, size_t data_len, u8 *dgst, size_t dgst_len) +{ + struct device *dev = hcu_dev->dev; + dma_addr_t dma_handle; + u32 reg; + int rc; + + /* configure the hardware for the current request. 
*/ + rc = ocs_hcu_hw_cfg(hcu_dev, algo, false); + if (rc) + return rc; + + dma_handle = dma_map_single(dev, data, data_len, dma_to_device); + if (dma_mapping_error(dev, dma_handle)) + return -eio; + + reg = hcu_dma_snoop_mask | hcu_dma_en; + + ocs_hcu_done_irq_en(hcu_dev); + + reinit_completion(&hcu_dev->irq_done); + + writel(dma_handle, hcu_dev->io_base + ocs_hcu_dma_src_addr); + writel(data_len, hcu_dev->io_base + ocs_hcu_dma_src_size); + writel(ocs_hcu_start, hcu_dev->io_base + ocs_hcu_operation); + writel(reg, hcu_dev->io_base + ocs_hcu_dma_dma_mode); + + writel(ocs_hcu_terminate, hcu_dev->io_base + ocs_hcu_operation); + + rc = ocs_hcu_wait_and_disable_irq(hcu_dev); + if (rc) + return rc; + + dma_unmap_single(dev, dma_handle, data_len, dma_to_device); + + return ocs_hcu_get_digest(hcu_dev, algo, dgst, dgst_len); +} + +/** + * ocs_hcu_hmac() - compute hmac. + * @hcu_dev: the ocs hcu device to use. + * @algo: the hash algorithm to use with hmac. + * @key: the key to use. + * @dma_list: the ocs dma list mapping the input data to process. + * @key_len: the length of @key. + * @dgst: the buffer where to save the computed hmac. + * @dgst_len: the length of @dgst. + * + * return: 0 on success; negative error code otherwise. + */ +int ocs_hcu_hmac(struct ocs_hcu_dev *hcu_dev, enum ocs_hcu_algo algo, + const u8 *key, size_t key_len, + const struct ocs_hcu_dma_list *dma_list, + u8 *dgst, size_t dgst_len) +{ + int rc; + + /* ensure 'key' is not null. */ + if (!key || key_len == 0) + return -einval; + + /* configure the hardware for the current request. */ + rc = ocs_hcu_hw_cfg(hcu_dev, algo, true); + if (rc) + return rc; + + rc = ocs_hcu_write_key(hcu_dev, key, key_len); + if (rc) + return rc; + + rc = ocs_hcu_ll_dma_start(hcu_dev, dma_list, true); + + /* clear hw key before processing return code. 
*/ + ocs_hcu_clear_key(hcu_dev); + + if (rc) + return rc; + + return ocs_hcu_get_digest(hcu_dev, algo, dgst, dgst_len); +} + diff --git a/drivers/crypto/keembay/ocs-hcu.h b/drivers/crypto/keembay/ocs-hcu.h --- a/drivers/crypto/keembay/ocs-hcu.h +++ b/drivers/crypto/keembay/ocs-hcu.h +int ocs_hcu_digest(struct ocs_hcu_dev *hcu_dev, enum ocs_hcu_algo algo, + void *data, size_t data_len, u8 *dgst, size_t dgst_len); + +int ocs_hcu_hmac(struct ocs_hcu_dev *hcu_dev, enum ocs_hcu_algo algo, + const u8 *key, size_t key_len, + const struct ocs_hcu_dma_list *dma_list, + u8 *dgst, size_t dgst_len); +
|
Cryptography hardware acceleration
|
ae832e329a8d17144e5ae625e1704901f0e0b024
|
daniele alessandrelli
|
drivers
|
crypto
|
keembay
|
crypto: keembay-ocs-hcu - add optional support for sha224
|
add optional support for sha224 and hmac(sha224).

|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add keem bay ocs hcu driver
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['kconfig', 'c']
| 2
| 75
| 0
|
--- diff --git a/drivers/crypto/keembay/kconfig b/drivers/crypto/keembay/kconfig --- a/drivers/crypto/keembay/kconfig +++ b/drivers/crypto/keembay/kconfig + +config crypto_dev_keembay_ocs_hcu_hmac_sha224 + bool "enable sha224 and hmac(sha224) support in intel keem bay ocs hcu" + depends on crypto_dev_keembay_ocs_hcu + help + enables support for sha224 and hmac(sha224) algorithms in the intel + keem bay ocs hcu driver. intel recommends not to use these + algorithms. + + provides ocs hcu hardware acceleration of sha224 and hmac(224). + + if unsure, say n. diff --git a/drivers/crypto/keembay/keembay-ocs-hcu-core.c b/drivers/crypto/keembay/keembay-ocs-hcu-core.c --- a/drivers/crypto/keembay/keembay-ocs-hcu-core.c +++ b/drivers/crypto/keembay/keembay-ocs-hcu-core.c +#ifdef config_crypto_dev_keembay_ocs_hcu_hmac_sha224 + case sha224_digest_size: + rctx->blk_sz = sha224_block_size; + rctx->algo = ocs_hcu_algo_sha224; + break; +#endif /* config_crypto_dev_keembay_ocs_hcu_hmac_sha224 */ +#ifdef config_crypto_dev_keembay_ocs_hcu_hmac_sha224 + case sha224_digest_size: + alg_name = "sha224-keembay-ocs"; + break; +#endif /* config_crypto_dev_keembay_ocs_hcu_hmac_sha224 */ +#ifdef config_crypto_dev_keembay_ocs_hcu_hmac_sha224 +{ + .init = kmb_ocs_hcu_init, + .update = kmb_ocs_hcu_update, + .final = kmb_ocs_hcu_final, + .finup = kmb_ocs_hcu_finup, + .digest = kmb_ocs_hcu_digest, + .export = kmb_ocs_hcu_export, + .import = kmb_ocs_hcu_import, + .halg = { + .digestsize = sha224_digest_size, + .statesize = sizeof(struct ocs_hcu_rctx), + .base = { + .cra_name = "sha224", + .cra_driver_name = "sha224-keembay-ocs", + .cra_priority = 255, + .cra_flags = crypto_alg_async, + .cra_blocksize = sha224_block_size, + .cra_ctxsize = sizeof(struct ocs_hcu_ctx), + .cra_alignmask = 0, + .cra_module = this_module, + .cra_init = kmb_ocs_hcu_sha_cra_init, + } + } +}, +{ + .init = kmb_ocs_hcu_init, + .update = kmb_ocs_hcu_update, + .final = kmb_ocs_hcu_final, + .finup = kmb_ocs_hcu_finup, + .digest = 
kmb_ocs_hcu_digest, + .export = kmb_ocs_hcu_export, + .import = kmb_ocs_hcu_import, + .setkey = kmb_ocs_hcu_setkey, + .halg = { + .digestsize = sha224_digest_size, + .statesize = sizeof(struct ocs_hcu_rctx), + .base = { + .cra_name = "hmac(sha224)", + .cra_driver_name = "hmac-sha224-keembay-ocs", + .cra_priority = 255, + .cra_flags = crypto_alg_async, + .cra_blocksize = sha224_block_size, + .cra_ctxsize = sizeof(struct ocs_hcu_ctx), + .cra_alignmask = 0, + .cra_module = this_module, + .cra_init = kmb_ocs_hcu_hmac_cra_init, + .cra_exit = kmb_ocs_hcu_hmac_cra_exit, + } + } +}, +#endif /* config_crypto_dev_keembay_ocs_hcu_hmac_sha224 */
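the sha224 addition above only needs a new case label, alg descriptor and kconfig guard because sha224 reuses sha256's 64-byte block size and compression machinery, differing only in its initial value and its truncated 28-byte digest. a quick check (python, for illustration only):

```python
import hashlib

# sha224 shares sha256's block size; only the digest size shrinks to 28
# bytes, which is why the driver's block handling needs no new code path.
h224 = hashlib.sha224()
h256 = hashlib.sha256()
assert h224.block_size == h256.block_size == 64
assert h224.digest_size == 28 and h256.digest_size == 32
```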
|
Cryptography hardware acceleration
|
b46f80368869cf46dbfe97ca8dfaf02e6be4510e
|
daniele alessandrelli
|
drivers
|
crypto
|
keembay
|
maintainers: add maintainers for keem bay ocs hcu driver
|
add maintainers for the intel keem bay offload crypto subsystem (ocs) hash control unit (hcu) crypto driver.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add keem bay ocs hcu driver
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['maintainers']
| 1
| 11
| 0
|
--- diff --git a/maintainers b/maintainers --- a/maintainers +++ b/maintainers +intel keem bay ocs hcu crypto driver +m: daniele alessandrelli <daniele.alessandrelli@intel.com> +m: declan murphy <declan.murphy@intel.com> +s: maintained +f: documentation/devicetree/bindings/crypto/intel,keembay-ocs-hcu.yaml +f: drivers/crypto/keembay/kconfig +f: drivers/crypto/keembay/makefile +f: drivers/crypto/keembay/keembay-ocs-hcu-core.c +f: drivers/crypto/keembay/ocs-hcu.c +f: drivers/crypto/keembay/ocs-hcu.h +
|
Cryptography hardware acceleration
|
5a5a27b3e1577dbd63b0ac114d784bc3695e245b
|
daniele alessandrelli declan murphy <declan.murphy@intel.com>
| |||
crypto: marvell - add marvell octeontx2 cpt pf driver
|
adds a skeleton for the marvell octeontx2 cpt physical function driver, which includes probe, pci-specific initialization and hardware register defines. rvu defines are present in the af driver (drivers/net/ethernet/marvell/octeontx2/af); header files from the af driver are included here to avoid duplication.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add support for marvell octeontx2 cpt engine
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['octeontx2']
|
['h', 'kconfig', 'c', 'makefile']
| 7
| 633
| 0
|
--- diff --git a/drivers/crypto/marvell/kconfig b/drivers/crypto/marvell/kconfig --- a/drivers/crypto/marvell/kconfig +++ b/drivers/crypto/marvell/kconfig + +config crypto_dev_octeontx2_cpt + tristate "marvell octeontx2 cpt driver" + depends on arm64 || compile_test + depends on pci_msi && 64bit + select octeontx2_mbox + select crypto_dev_marvell + help + this driver allows you to utilize the marvell cryptographic + accelerator unit(cpt) found in octeontx2 series of processors. diff --git a/drivers/crypto/marvell/makefile b/drivers/crypto/marvell/makefile --- a/drivers/crypto/marvell/makefile +++ b/drivers/crypto/marvell/makefile +obj-$(config_crypto_dev_octeontx2_cpt) += octeontx2/ diff --git a/drivers/crypto/marvell/octeontx2/makefile b/drivers/crypto/marvell/octeontx2/makefile --- /dev/null +++ b/drivers/crypto/marvell/octeontx2/makefile +# spdx-license-identifier: gpl-2.0-only +obj-$(config_crypto_dev_octeontx2_cpt) += octeontx2-cpt.o + +octeontx2-cpt-objs := otx2_cptpf_main.o + +ccflags-y += -i$(srctree)/drivers/net/ethernet/marvell/octeontx2/af diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h --- /dev/null +++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h +/* spdx-license-identifier: gpl-2.0-only + * copyright (c) 2020 marvell. 
+ */ + +#ifndef __otx2_cpt_common_h +#define __otx2_cpt_common_h + +#include <linux/pci.h> +#include <linux/types.h> +#include <linux/module.h> +#include <linux/delay.h> +#include <linux/crypto.h> +#include "otx2_cpt_hw_types.h" +#include "rvu.h" + +#define otx2_cpt_rvu_func_addr_s(blk, slot, offs) \ + (((blk) << 20) | ((slot) << 12) | (offs)) + +static inline void otx2_cpt_write64(void __iomem *reg_base, u64 blk, u64 slot, + u64 offs, u64 val) +{ + writeq_relaxed(val, reg_base + + otx2_cpt_rvu_func_addr_s(blk, slot, offs)); +} + +static inline u64 otx2_cpt_read64(void __iomem *reg_base, u64 blk, u64 slot, + u64 offs) +{ + return readq_relaxed(reg_base + + otx2_cpt_rvu_func_addr_s(blk, slot, offs)); +} +#endif /* __otx2_cpt_common_h */ diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_hw_types.h b/drivers/crypto/marvell/octeontx2/otx2_cpt_hw_types.h --- /dev/null +++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_hw_types.h +/* spdx-license-identifier: gpl-2.0-only + * copyright (c) 2020 marvell. 
+ */ + +#ifndef __otx2_cpt_hw_types_h +#define __otx2_cpt_hw_types_h + +#include <linux/types.h> + +/* device ids */ +#define otx2_cpt_pci_pf_device_id 0xa0fd +#define otx2_cpt_pci_vf_device_id 0xa0fe + +/* mailbox interrupts offset */ +#define otx2_cpt_pf_mbox_int 6 +#define otx2_cpt_pf_int_vec_e_mboxx(x, a) ((x) + (a)) + +/* maximum supported microcode groups */ +#define otx2_cpt_max_engine_groups 8 + +/* cpt instruction size in bytes */ +#define otx2_cpt_inst_size 64 +/* + * cpt vf msix vectors and their offsets + */ +#define otx2_cpt_vf_msix_vectors 1 +#define otx2_cpt_vf_intr_mbox_mask bit(0) + +/* cpt lf msix vectors */ +#define otx2_cpt_lf_msix_vectors 2 + +/* octeontx2 cpt pf registers */ +#define otx2_cpt_pf_constants (0x0) +#define otx2_cpt_pf_reset (0x100) +#define otx2_cpt_pf_diag (0x120) +#define otx2_cpt_pf_bist_status (0x160) +#define otx2_cpt_pf_ecc0_ctl (0x200) +#define otx2_cpt_pf_ecc0_flip (0x210) +#define otx2_cpt_pf_ecc0_int (0x220) +#define otx2_cpt_pf_ecc0_int_w1s (0x230) +#define otx2_cpt_pf_ecc0_ena_w1s (0x240) +#define otx2_cpt_pf_ecc0_ena_w1c (0x250) +#define otx2_cpt_pf_mbox_intx(b) (0x400 | (b) << 3) +#define otx2_cpt_pf_mbox_int_w1sx(b) (0x420 | (b) << 3) +#define otx2_cpt_pf_mbox_ena_w1cx(b) (0x440 | (b) << 3) +#define otx2_cpt_pf_mbox_ena_w1sx(b) (0x460 | (b) << 3) +#define otx2_cpt_pf_exec_int (0x500) +#define otx2_cpt_pf_exec_int_w1s (0x520) +#define otx2_cpt_pf_exec_ena_w1c (0x540) +#define otx2_cpt_pf_exec_ena_w1s (0x560) +#define otx2_cpt_pf_gx_en(b) (0x600 | (b) << 3) +#define otx2_cpt_pf_exec_info (0x700) +#define otx2_cpt_pf_exec_busy (0x800) +#define otx2_cpt_pf_exec_info0 (0x900) +#define otx2_cpt_pf_exec_info1 (0x910) +#define otx2_cpt_pf_inst_req_pc (0x10000) +#define otx2_cpt_pf_inst_latency_pc (0x10020) +#define otx2_cpt_pf_rd_req_pc (0x10040) +#define otx2_cpt_pf_rd_latency_pc (0x10060) +#define otx2_cpt_pf_rd_uc_pc (0x10080) +#define otx2_cpt_pf_active_cycles_pc (0x10100) +#define otx2_cpt_pf_exe_ctl (0x4000000) 
+#define otx2_cpt_pf_exe_status (0x4000008) +#define otx2_cpt_pf_exe_clk (0x4000010) +#define otx2_cpt_pf_exe_dbg_ctl (0x4000018) +#define otx2_cpt_pf_exe_dbg_data (0x4000020) +#define otx2_cpt_pf_exe_bist_status (0x4000028) +#define otx2_cpt_pf_exe_req_timer (0x4000030) +#define otx2_cpt_pf_exe_mem_ctl (0x4000038) +#define otx2_cpt_pf_exe_perf_ctl (0x4001000) +#define otx2_cpt_pf_exe_dbg_cntx(b) (0x4001100 | (b) << 3) +#define otx2_cpt_pf_exe_perf_event_cnt (0x4001180) +#define otx2_cpt_pf_exe_epci_inbx_cnt(b) (0x4001200 | (b) << 3) +#define otx2_cpt_pf_exe_epci_outbx_cnt(b) (0x4001240 | (b) << 3) +#define otx2_cpt_pf_engx_ucode_base(b) (0x4002000 | (b) << 3) +#define otx2_cpt_pf_qx_ctl(b) (0x8000000 | (b) << 20) +#define otx2_cpt_pf_qx_gmctl(b) (0x8000020 | (b) << 20) +#define otx2_cpt_pf_qx_ctl2(b) (0x8000100 | (b) << 20) +#define otx2_cpt_pf_vfx_mboxx(b, c) (0x8001000 | (b) << 20 | \ + (c) << 8) + +/* octeontx2 cpt lf registers */ +#define otx2_cpt_lf_ctl (0x10) +#define otx2_cpt_lf_done_wait (0x30) +#define otx2_cpt_lf_inprog (0x40) +#define otx2_cpt_lf_done (0x50) +#define otx2_cpt_lf_done_ack (0x60) +#define otx2_cpt_lf_done_int_ena_w1s (0x90) +#define otx2_cpt_lf_done_int_ena_w1c (0xa0) +#define otx2_cpt_lf_misc_int (0xb0) +#define otx2_cpt_lf_misc_int_w1s (0xc0) +#define otx2_cpt_lf_misc_int_ena_w1s (0xd0) +#define otx2_cpt_lf_misc_int_ena_w1c (0xe0) +#define otx2_cpt_lf_q_base (0xf0) +#define otx2_cpt_lf_q_size (0x100) +#define otx2_cpt_lf_q_inst_ptr (0x110) +#define otx2_cpt_lf_q_grp_ptr (0x120) +#define otx2_cpt_lf_nqx(a) (0x400 | (a) << 3) +#define otx2_cpt_rvu_func_blkaddr_shift 20 +/* lmt lf registers */ +#define otx2_cpt_lmt_lfbase bit_ull(otx2_cpt_rvu_func_blkaddr_shift) +#define otx2_cpt_lmt_lf_lmtlinex(a) (otx2_cpt_lmt_lfbase | 0x000 | \ + (a) << 12) +/* rvu vf registers */ +#define otx2_rvu_vf_int (0x20) +#define otx2_rvu_vf_int_w1s (0x28) +#define otx2_rvu_vf_int_ena_w1s (0x30) +#define otx2_rvu_vf_int_ena_w1c (0x38) + +/* + * enumeration 
otx2_cpt_ucode_error_code_e + * + * enumerates ucode errors + */ +enum otx2_cpt_ucode_comp_code_e { + otx2_cpt_ucc_success = 0x00, + otx2_cpt_ucc_invalid_opcode = 0x01, + + /* scatter gather */ + otx2_cpt_ucc_sg_write_length = 0x02, + otx2_cpt_ucc_sg_list = 0x03, + otx2_cpt_ucc_sg_not_supported = 0x04, + +}; + +/* + * enumeration otx2_cpt_comp_e + * + * octeontx2 cpt completion enumeration + * enumerates the values of cpt_res_s[compcode]. + */ +enum otx2_cpt_comp_e { + otx2_cpt_comp_e_notdone = 0x00, + otx2_cpt_comp_e_good = 0x01, + otx2_cpt_comp_e_fault = 0x02, + otx2_cpt_comp_e_hwerr = 0x04, + otx2_cpt_comp_e_insterr = 0x05, + otx2_cpt_comp_e_last_entry = 0x06 +}; + +/* + * enumeration otx2_cpt_vf_int_vec_e + * + * octeontx2 cpt vf msi-x vector enumeration + * enumerates the msi-x interrupt vectors. + */ +enum otx2_cpt_vf_int_vec_e { + otx2_cpt_vf_int_vec_e_mbox = 0x00 +}; + +/* + * enumeration otx2_cpt_lf_int_vec_e + * + * octeontx2 cpt lf msi-x vector enumeration + * enumerates the msi-x interrupt vectors. + */ +enum otx2_cpt_lf_int_vec_e { + otx2_cpt_lf_int_vec_e_misc = 0x00, + otx2_cpt_lf_int_vec_e_done = 0x01 +}; + +/* + * structure otx2_cpt_inst_s + * + * cpt instruction structure + * this structure specifies the instruction layout. instructions are + * stored in memory as little-endian unless cpt()_pf_q()_ctl[inst_be] is set. + * cpt_inst_s_s + * word 0 + * doneint:1 done interrupt. + * 0 = no interrupts related to this instruction. + * 1 = when the instruction completes, cpt()_vq()_done[done] will be + * incremented,and based on the rules described there an interrupt may + * occur. + * word 1 + * res_addr [127: 64] result iova. + * if nonzero, specifies where to write cpt_res_s. + * if zero, no result structure will be written. + * address must be 16-byte aligned. + * bits <63:49> are ignored by hardware; software should use a + * sign-extended bit <48> for forward compatibility. 
+ * word 2 + * grp:10 [171:162] if [wq_ptr] is nonzero, the sso guest-group to use when + * cpt submits work sso. + * for the sso to not discard the add-work request, fpa_pf_map() must map + * [grp] and cpt()_pf_q()_gmctl[gmid] as valid. + * tt:2 [161:160] if [wq_ptr] is nonzero, the sso tag type to use when cpt + * submits work to sso + * tag:32 [159:128] if [wq_ptr] is nonzero, the sso tag to use when cpt + * submits work to sso. + * word 3 + * wq_ptr [255:192] if [wq_ptr] is nonzero, it is a pointer to a + * work-queue entry that cpt submits work to sso after all context, + * output data, and result write operations are visible to other + * cnxxxx units and the cores. bits <2:0> must be zero. + * bits <63:49> are ignored by hardware; software should + * use a sign-extended bit <48> for forward compatibility. + * internal: + * bits <63:49>, <2:0> are ignored by hardware, treated as always 0x0. + * word 4 + * ei0; [319:256] engine instruction word 0. passed to the ae/se. + * word 5 + * ei1; [383:320] engine instruction word 1. passed to the ae/se. + * word 6 + * ei2; [447:384] engine instruction word 1. passed to the ae/se. + * word 7 + * ei3; [511:448] engine instruction word 1. passed to the ae/se. + * + */ +union otx2_cpt_inst_s { + u64 u[8]; + + struct { + /* word 0 */ + u64 nixtxl:3; + u64 doneint:1; + u64 nixtx_addr:60; + /* word 1 */ + u64 res_addr; + /* word 2 */ + u64 tag:32; + u64 tt:2; + u64 grp:10; + u64 reserved_172_175:4; + u64 rvu_pf_func:16; + /* word 3 */ + u64 qord:1; + u64 reserved_194_193:2; + u64 wq_ptr:61; + /* word 4 */ + u64 ei0; + /* word 5 */ + u64 ei1; + /* word 6 */ + u64 ei2; + /* word 7 */ + u64 ei3; + } s; +}; + +/* + * structure otx2_cpt_res_s + * + * cpt result structure + * the cpt coprocessor writes the result structure after it completes a + * cpt_inst_s instruction. the result structure is exactly 16 bytes, and + * each instruction completion produces exactly one result structure. 
+ * + * this structure is stored in memory as little-endian unless + * cpt()_pf_q()_ctl[inst_be] is set. + * cpt_res_s_s + * word 0 + * doneint:1 [16:16] done interrupt. this bit is copied from the + * corresponding instruction's cpt_inst_s[doneint]. + * compcode:8 [7:0] indicates completion/error status of the cpt coprocessor + * for the associated instruction, as enumerated by cpt_comp_e. + * core software may write the memory location containing [compcode] to + * 0x0 before ringing the doorbell, and then poll for completion by + * checking for a nonzero value. + * once the core observes a nonzero [compcode] value in this case,the cpt + * coprocessor will have also completed l2/dram write operations. + * word 1 + * reserved + * + */ +union otx2_cpt_res_s { + u64 u[2]; + + struct { + u64 compcode:8; + u64 uc_compcode:8; + u64 doneint:1; + u64 reserved_17_63:47; + u64 reserved_64_127; + } s; +}; + +/* + * register (rvu_pf_bar0) cpt#_af_constants1 + * + * cpt af constants register + * this register contains implementation-related parameters of cpt. + */ +union otx2_cptx_af_constants1 { + u64 u; + struct otx2_cptx_af_constants1_s { + u64 se:16; + u64 ie:16; + u64 ae:16; + u64 reserved_48_63:16; + } s; +}; + +/* + * rvu_pfvf_bar2 - cpt_lf_misc_int + * + * this register contain the per-queue miscellaneous interrupts. + * + */ +union otx2_cptx_lf_misc_int { + u64 u; + struct otx2_cptx_lf_misc_int_s { + u64 reserved_0:1; + u64 nqerr:1; + u64 irde:1; + u64 nwrp:1; + u64 reserved_4:1; + u64 hwerr:1; + u64 fault:1; + u64 reserved_7_63:57; + } s; +}; + +/* + * rvu_pfvf_bar2 - cpt_lf_misc_int_ena_w1s + * + * this register sets interrupt enable bits. + * + */ +union otx2_cptx_lf_misc_int_ena_w1s { + u64 u; + struct otx2_cptx_lf_misc_int_ena_w1s_s { + u64 reserved_0:1; + u64 nqerr:1; + u64 irde:1; + u64 nwrp:1; + u64 reserved_4:1; + u64 hwerr:1; + u64 fault:1; + u64 reserved_7_63:57; + } s; +}; + +/* + * rvu_pfvf_bar2 - cpt_lf_ctl + * + * this register configures the queue. 
+ * + * when the queue is not execution-quiescent (see cpt_lf_inprog[eena,inflight]), + * software must only write this register with [ena]=0. + */ +union otx2_cptx_lf_ctl { + u64 u; + struct otx2_cptx_lf_ctl_s { + u64 ena:1; + u64 fc_ena:1; + u64 fc_up_crossing:1; + u64 reserved_3:1; + u64 fc_hyst_bits:4; + u64 reserved_8_63:56; + } s; +}; + +/* + * rvu_pfvf_bar2 - cpt_lf_done_wait + * + * this register specifies the per-queue interrupt coalescing settings. + */ +union otx2_cptx_lf_done_wait { + u64 u; + struct otx2_cptx_lf_done_wait_s { + u64 num_wait:20; + u64 reserved_20_31:12; + u64 time_wait:16; + u64 reserved_48_63:16; + } s; +}; + +/* + * rvu_pfvf_bar2 - cpt_lf_done + * + * this register contains the per-queue instruction done count. + */ +union otx2_cptx_lf_done { + u64 u; + struct otx2_cptx_lf_done_s { + u64 done:20; + u64 reserved_20_63:44; + } s; +}; + +/* + * rvu_pfvf_bar2 - cpt_lf_inprog + * + * these registers contain the per-queue in-flight instruction counts. + * + */ +union otx2_cptx_lf_inprog { + u64 u; + struct otx2_cptx_lf_inprog_s { + u64 inflight:9; + u64 reserved_9_15:7; + u64 eena:1; + u64 grp_drp:1; + u64 reserved_18_30:13; + u64 grb_partial:1; + u64 grb_cnt:8; + u64 gwb_cnt:8; + u64 reserved_48_63:16; + } s; +}; + +/* + * rvu_pfvf_bar2 - cpt_lf_q_base + * + * cpt initializes these csr fields to these values on any cpt_lf_q_base write: + * _ cpt_lf_q_inst_ptr[xq_xor]=0. + * _ cpt_lf_q_inst_ptr[nq_ptr]=2. + * _ cpt_lf_q_inst_ptr[dq_ptr]=2. + * _ cpt_lf_q_grp_ptr[xq_xor]=0. + * _ cpt_lf_q_grp_ptr[nq_ptr]=1. + * _ cpt_lf_q_grp_ptr[dq_ptr]=1. + */ +union otx2_cptx_lf_q_base { + u64 u; + struct otx2_cptx_lf_q_base_s { + u64 fault:1; + u64 reserved_1_6:6; + u64 addr:46; + u64 reserved_53_63:11; + } s; +}; + +/* + * rvu_pfvf_bar2 - cpt_lf_q_size + * + * cpt initializes these csr fields to these values on any cpt_lf_q_size write: + * _ cpt_lf_q_inst_ptr[xq_xor]=0. + * _ cpt_lf_q_inst_ptr[nq_ptr]=2. + * _ cpt_lf_q_inst_ptr[dq_ptr]=2. 
+ * _ cpt_lf_q_grp_ptr[xq_xor]=0. + * _ cpt_lf_q_grp_ptr[nq_ptr]=1. + * _ cpt_lf_q_grp_ptr[dq_ptr]=1. + */ +union otx2_cptx_lf_q_size { + u64 u; + struct otx2_cptx_lf_q_size_s { + u64 size_div40:15; + u64 reserved_15_63:49; + } s; +}; + +/* + * rvu_pf_bar0 - cpt_af_lf_ctl + * + * this register configures queues. this register should be written only + * when the queue is execution-quiescent (see cpt_lf_inprog[inflight]). + */ +union otx2_cptx_af_lf_ctrl { + u64 u; + struct otx2_cptx_af_lf_ctrl_s { + u64 pri:1; + u64 reserved_1_8:8; + u64 pf_func_inst:1; + u64 cont_err:1; + u64 reserved_11_15:5; + u64 nixtx_en:1; + u64 reserved_17_47:31; + u64 grp:8; + u64 reserved_56_63:8; + } s; +}; + +#endif /* __otx2_cpt_hw_types_h */ diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf.h b/drivers/crypto/marvell/octeontx2/otx2_cptpf.h --- /dev/null +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf.h +/* spdx-license-identifier: gpl-2.0-only + * copyright (c) 2020 marvell. + */ + +#ifndef __otx2_cptpf_h +#define __otx2_cptpf_h + +struct otx2_cptpf_dev { + void __iomem *reg_base; /* cpt pf registers start address */ + struct pci_dev *pdev; /* pci device handle */ +}; + +#endif /* __otx2_cptpf_h */ diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c --- /dev/null +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c +// spdx-license-identifier: gpl-2.0-only +/* copyright (c) 2020 marvell. 
*/ + +#include <linux/firmware.h> +#include "otx2_cpt_hw_types.h" +#include "otx2_cpt_common.h" +#include "otx2_cptpf.h" +#include "rvu_reg.h" + +#define otx2_cpt_drv_name "octeontx2-cpt" +#define otx2_cpt_drv_string "marvell octeontx2 cpt physical function driver" + +static int cpt_is_pf_usable(struct otx2_cptpf_dev *cptpf) +{ + u64 rev; + + rev = otx2_cpt_read64(cptpf->reg_base, blkaddr_rvum, 0, + rvu_pf_block_addrx_disc(blkaddr_rvum)); + rev = (rev >> 12) & 0xff; + /* + * check if af has setup revision for rvum block, otherwise + * driver probe should be deferred until af driver comes up + */ + if (!rev) { + dev_warn(&cptpf->pdev->dev, + "af is not initialized, deferring probe "); + return -eprobe_defer; + } + return 0; +} + +static int otx2_cptpf_probe(struct pci_dev *pdev, + const struct pci_device_id *ent) +{ + struct device *dev = &pdev->dev; + struct otx2_cptpf_dev *cptpf; + int err; + + cptpf = devm_kzalloc(dev, sizeof(*cptpf), gfp_kernel); + if (!cptpf) + return -enomem; + + err = pcim_enable_device(pdev); + if (err) { + dev_err(dev, "failed to enable pci device "); + goto clear_drvdata; + } + + err = dma_set_mask_and_coherent(dev, dma_bit_mask(48)); + if (err) { + dev_err(dev, "unable to get usable dma configuration "); + goto clear_drvdata; + } + /* map pf's configuration registers */ + err = pcim_iomap_regions_request_all(pdev, 1 << pci_pf_reg_bar_num, + otx2_cpt_drv_name); + if (err) { + dev_err(dev, "couldn't get pci resources 0x%x ", err); + goto clear_drvdata; + } + pci_set_master(pdev); + pci_set_drvdata(pdev, cptpf); + cptpf->pdev = pdev; + + cptpf->reg_base = pcim_iomap_table(pdev)[pci_pf_reg_bar_num]; + + /* check if af driver is up, otherwise defer probe */ + err = cpt_is_pf_usable(cptpf); + if (err) + goto clear_drvdata; + + return 0; + +clear_drvdata: + pci_set_drvdata(pdev, null); + return err; +} + +static void otx2_cptpf_remove(struct pci_dev *pdev) +{ + struct otx2_cptpf_dev *cptpf = pci_get_drvdata(pdev); + + if (!cptpf) + return; + + 
pci_set_drvdata(pdev, null); +} + +/* supported devices */ +static const struct pci_device_id otx2_cpt_id_table[] = { + { pci_device(pci_vendor_id_cavium, otx2_cpt_pci_pf_device_id) }, + { 0, } /* end of table */ +}; + +static struct pci_driver otx2_cpt_pci_driver = { + .name = otx2_cpt_drv_name, + .id_table = otx2_cpt_id_table, + .probe = otx2_cptpf_probe, + .remove = otx2_cptpf_remove, +}; + +module_pci_driver(otx2_cpt_pci_driver); + +module_author("marvell"); +module_description(otx2_cpt_drv_string); +module_license("gpl v2"); +module_device_table(pci, otx2_cpt_id_table);
|
Cryptography hardware acceleration
|
5e8ce8334734c5f23fe54774e989b395bc6da635
|
srujana challa
|
drivers
|
crypto
|
marvell, octeontx2
|
crypto: octeontx2 - add mailbox communication with af
|
in the resource virtualization unit (rvu), each pf and the af (admin function) share a 64kb reserved memory region for communication. this patch initializes the pf <=> af mailbox irqs and registers handlers for processing these communication messages.
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add support for marvell octeontx2 cpt engine
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['octeontx2']
|
['h', 'c', 'makefile']
| 6
| 236
| 2
|
--- diff --git a/drivers/crypto/marvell/octeontx2/makefile b/drivers/crypto/marvell/octeontx2/makefile --- a/drivers/crypto/marvell/octeontx2/makefile +++ b/drivers/crypto/marvell/octeontx2/makefile -octeontx2-cpt-objs := otx2_cptpf_main.o +octeontx2-cpt-objs := otx2_cptpf_main.o otx2_cptpf_mbox.o \ + otx2_cpt_mbox_common.o diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h --- a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h +++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h +#include "mbox.h" + +int otx2_cpt_send_ready_msg(struct otx2_mbox *mbox, struct pci_dev *pdev); +int otx2_cpt_send_mbox_msg(struct otx2_mbox *mbox, struct pci_dev *pdev); diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_mbox_common.c b/drivers/crypto/marvell/octeontx2/otx2_cpt_mbox_common.c --- /dev/null +++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_mbox_common.c +// spdx-license-identifier: gpl-2.0-only +/* copyright (c) 2020 marvell. */ + +#include "otx2_cpt_common.h" + +int otx2_cpt_send_mbox_msg(struct otx2_mbox *mbox, struct pci_dev *pdev) +{ + int ret; + + otx2_mbox_msg_send(mbox, 0); + ret = otx2_mbox_wait_for_rsp(mbox, 0); + if (ret == -eio) { + dev_err(&pdev->dev, "rvu mbox timeout. "); + return ret; + } else if (ret) { + dev_err(&pdev->dev, "rvu mbox error: %d. ", ret); + return -efault; + } + return ret; +} + +int otx2_cpt_send_ready_msg(struct otx2_mbox *mbox, struct pci_dev *pdev) +{ + struct mbox_msghdr *req; + + req = otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*req), + sizeof(struct ready_msg_rsp)); + if (req == null) { + dev_err(&pdev->dev, "rvu mbox failed to get message. 
"); + return -efault; + } + req->id = mbox_msg_ready; + req->sig = otx2_mbox_req_sig; + req->pcifunc = 0; + + return otx2_cpt_send_mbox_msg(mbox, pdev); +} diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf.h b/drivers/crypto/marvell/octeontx2/otx2_cptpf.h --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf.h +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf.h +#include "otx2_cpt_common.h" + + void __iomem *afpf_mbox_base; /* pf-af mbox start address */ + /* af <=> pf mbox */ + struct otx2_mbox afpf_mbox; + struct work_struct afpf_mbox_work; + struct workqueue_struct *afpf_mbox_wq; + + u8 pf_id; /* rvu pf number */ +irqreturn_t otx2_cptpf_afpf_mbox_intr(int irq, void *arg); +void otx2_cptpf_afpf_mbox_handler(struct work_struct *work); + diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c +static void cptpf_disable_afpf_mbox_intr(struct otx2_cptpf_dev *cptpf) +{ + /* disable af-pf interrupt */ + otx2_cpt_write64(cptpf->reg_base, blkaddr_rvum, 0, rvu_pf_int_ena_w1c, + 0x1ull); + /* clear interrupt if any */ + otx2_cpt_write64(cptpf->reg_base, blkaddr_rvum, 0, rvu_pf_int, 0x1ull); +} + +static int cptpf_register_afpf_mbox_intr(struct otx2_cptpf_dev *cptpf) +{ + struct pci_dev *pdev = cptpf->pdev; + struct device *dev = &pdev->dev; + int ret, irq; + + irq = pci_irq_vector(pdev, rvu_pf_int_vec_afpf_mbox); + /* register af-pf mailbox interrupt handler */ + ret = devm_request_irq(dev, irq, otx2_cptpf_afpf_mbox_intr, 0, + "cptafpf mbox", cptpf); + if (ret) { + dev_err(dev, + "irq registration failed for pfaf mbox irq "); + return ret; + } + /* clear interrupt if any, to avoid spurious interrupts */ + otx2_cpt_write64(cptpf->reg_base, blkaddr_rvum, 0, rvu_pf_int, 0x1ull); + /* enable af-pf interrupt */ + otx2_cpt_write64(cptpf->reg_base, blkaddr_rvum, 0, rvu_pf_int_ena_w1s, + 0x1ull); + + ret = 
otx2_cpt_send_ready_msg(&cptpf->afpf_mbox, cptpf->pdev); + if (ret) { + dev_warn(dev, + "af not responding to mailbox, deferring probe "); + cptpf_disable_afpf_mbox_intr(cptpf); + return -eprobe_defer; + } + return 0; +} + +static int cptpf_afpf_mbox_init(struct otx2_cptpf_dev *cptpf) +{ + int err; + + cptpf->afpf_mbox_wq = alloc_workqueue("cpt_afpf_mailbox", + wq_unbound | wq_highpri | + wq_mem_reclaim, 1); + if (!cptpf->afpf_mbox_wq) + return -enomem; + + err = otx2_mbox_init(&cptpf->afpf_mbox, cptpf->afpf_mbox_base, + cptpf->pdev, cptpf->reg_base, mbox_dir_pfaf, 1); + if (err) + goto error; + + init_work(&cptpf->afpf_mbox_work, otx2_cptpf_afpf_mbox_handler); + return 0; + +error: + destroy_workqueue(cptpf->afpf_mbox_wq); + return err; +} + +static void cptpf_afpf_mbox_destroy(struct otx2_cptpf_dev *cptpf) +{ + destroy_workqueue(cptpf->afpf_mbox_wq); + otx2_mbox_destroy(&cptpf->afpf_mbox); +} + + resource_size_t offset, size; + offset = pci_resource_start(pdev, pci_mbox_bar_num); + size = pci_resource_len(pdev, pci_mbox_bar_num); + /* map af-pf mailbox memory */ + cptpf->afpf_mbox_base = devm_ioremap_wc(dev, offset, size); + if (!cptpf->afpf_mbox_base) { + dev_err(&pdev->dev, "unable to map bar4 "); + err = -enodev; + goto clear_drvdata; + } + err = pci_alloc_irq_vectors(pdev, rvu_pf_int_vec_cnt, + rvu_pf_int_vec_cnt, pci_irq_msix); + if (err < 0) { + dev_err(dev, "request for %d msix vectors failed ", + rvu_pf_int_vec_cnt); + goto clear_drvdata; + } + /* initialize af-pf mailbox */ + err = cptpf_afpf_mbox_init(cptpf); + if (err) + goto clear_drvdata; + /* register mailbox interrupt */ + err = cptpf_register_afpf_mbox_intr(cptpf); + if (err) + goto destroy_afpf_mbox; + +destroy_afpf_mbox: + cptpf_afpf_mbox_destroy(cptpf); - + /* disable af-pf mailbox interrupt */ + cptpf_disable_afpf_mbox_intr(cptpf); + /* destroy af-pf mbox */ + cptpf_afpf_mbox_destroy(cptpf); diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c 
b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c --- /dev/null +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c +// spdx-license-identifier: gpl-2.0-only +/* copyright (c) 2020 marvell. */ + +#include "otx2_cpt_common.h" +#include "otx2_cptpf.h" +#include "rvu_reg.h" + +irqreturn_t otx2_cptpf_afpf_mbox_intr(int __always_unused irq, void *arg) +{ + struct otx2_cptpf_dev *cptpf = arg; + u64 intr; + + /* read the interrupt bits */ + intr = otx2_cpt_read64(cptpf->reg_base, blkaddr_rvum, 0, rvu_pf_int); + + if (intr & 0x1ull) { + /* schedule work queue function to process the mbox request */ + queue_work(cptpf->afpf_mbox_wq, &cptpf->afpf_mbox_work); + /* clear and ack the interrupt */ + otx2_cpt_write64(cptpf->reg_base, blkaddr_rvum, 0, rvu_pf_int, + 0x1ull); + } + return irq_handled; +} + +static void process_afpf_mbox_msg(struct otx2_cptpf_dev *cptpf, + struct mbox_msghdr *msg) +{ + struct device *dev = &cptpf->pdev->dev; + + if (msg->id >= mbox_msg_max) { + dev_err(dev, "mbox msg with unknown id %d ", msg->id); + return; + } + if (msg->sig != otx2_mbox_rsp_sig) { + dev_err(dev, "mbox msg with wrong signature %x, id %d ", + msg->sig, msg->id); + return; + } + + switch (msg->id) { + case mbox_msg_ready: + cptpf->pf_id = (msg->pcifunc >> rvu_pfvf_pf_shift) & + rvu_pfvf_pf_mask; + break; + default: + dev_err(dev, + "unsupported msg %d received. 
", msg->id); + break; + } +} + +/* handle mailbox messages received from af */ +void otx2_cptpf_afpf_mbox_handler(struct work_struct *work) +{ + struct otx2_cptpf_dev *cptpf; + struct otx2_mbox *afpf_mbox; + struct otx2_mbox_dev *mdev; + struct mbox_hdr *rsp_hdr; + struct mbox_msghdr *msg; + int offset, i; + + cptpf = container_of(work, struct otx2_cptpf_dev, afpf_mbox_work); + afpf_mbox = &cptpf->afpf_mbox; + mdev = &afpf_mbox->dev[0]; + /* sync mbox data into memory */ + smp_wmb(); + + rsp_hdr = (struct mbox_hdr *)(mdev->mbase + afpf_mbox->rx_start); + offset = align(sizeof(*rsp_hdr), mbox_msg_align); + + for (i = 0; i < rsp_hdr->num_msgs; i++) { + msg = (struct mbox_msghdr *)(mdev->mbase + afpf_mbox->rx_start + + offset); + process_afpf_mbox_msg(cptpf, msg); + offset = msg->next_msgoff; + mdev->msgs_acked++; + } + otx2_mbox_reset(afpf_mbox, 0); +}
|
Cryptography hardware acceleration
|
83ffcf78627f98919ebae3dc6715982cc83176ed
|
srujana challa
|
drivers
|
crypto
|
marvell, octeontx2
|
crypto: octeontx2 - enable sr-iov and mailbox communication with vf
|
adds 'sriov_configure' to enable/disable virtual functions (vfs). it also initializes the vf <=> pf mailbox irqs and registers handlers for processing these mailbox messages.
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add support for marvell octeontx2 cpt engine
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['octeontx2']
|
['h', 'c']
| 4
| 583
| 2
|
--- diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h --- a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h +++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h +#define otx2_cpt_max_vfs_num 128 diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf.h b/drivers/crypto/marvell/octeontx2/otx2_cptpf.h --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf.h +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf.h +struct otx2_cptpf_dev; +struct otx2_cptvf_info { + struct otx2_cptpf_dev *cptpf; /* pf pointer this vf belongs to */ + struct work_struct vfpf_mbox_work; + struct pci_dev *vf_dev; + int vf_id; + int intr_idx; +}; + +struct cptpf_flr_work { + struct work_struct work; + struct otx2_cptpf_dev *pf; +}; + + void __iomem *vfpf_mbox_base; /* vf-pf mbox start address */ + struct otx2_cptvf_info vf[otx2_cpt_max_vfs_num]; + /* vf <=> pf mbox */ + struct otx2_mbox vfpf_mbox; + struct workqueue_struct *vfpf_mbox_wq; + + struct workqueue_struct *flr_wq; + struct cptpf_flr_work *flr_work; + + u8 max_vfs; /* maximum number of vfs supported by cpt */ + u8 enabled_vfs; /* number of enabled vfs */ +irqreturn_t otx2_cptpf_vfpf_mbox_intr(int irq, void *arg); +void otx2_cptpf_vfpf_mbox_handler(struct work_struct *work); diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c +static void cptpf_enable_vfpf_mbox_intr(struct otx2_cptpf_dev *cptpf, + int num_vfs) +{ + int ena_bits; + + /* clear any pending interrupts */ + otx2_cpt_write64(cptpf->reg_base, blkaddr_rvum, 0, + rvu_pf_vfpf_mbox_intx(0), ~0x0ull); + otx2_cpt_write64(cptpf->reg_base, blkaddr_rvum, 0, + rvu_pf_vfpf_mbox_intx(1), ~0x0ull); + + /* enable vf interrupts for vfs from 0 to 63 */ + ena_bits = ((num_vfs - 1) % 64); + otx2_cpt_write64(cptpf->reg_base, blkaddr_rvum, 0, + 
rvu_pf_vfpf_mbox_int_ena_w1sx(0), + genmask_ull(ena_bits, 0)); + + if (num_vfs > 64) { + /* enable vf interrupts for vfs from 64 to 127 */ + ena_bits = num_vfs - 64 - 1; + otx2_cpt_write64(cptpf->reg_base, blkaddr_rvum, 0, + rvu_pf_vfpf_mbox_int_ena_w1sx(1), + genmask_ull(ena_bits, 0)); + } +} + +static void cptpf_disable_vfpf_mbox_intr(struct otx2_cptpf_dev *cptpf, + int num_vfs) +{ + int vector; + + /* disable vf-pf interrupts */ + otx2_cpt_write64(cptpf->reg_base, blkaddr_rvum, 0, + rvu_pf_vfpf_mbox_int_ena_w1cx(0), ~0ull); + otx2_cpt_write64(cptpf->reg_base, blkaddr_rvum, 0, + rvu_pf_vfpf_mbox_int_ena_w1cx(1), ~0ull); + /* clear any pending interrupts */ + otx2_cpt_write64(cptpf->reg_base, blkaddr_rvum, 0, + rvu_pf_vfpf_mbox_intx(0), ~0ull); + + vector = pci_irq_vector(cptpf->pdev, rvu_pf_int_vec_vfpf_mbox0); + free_irq(vector, cptpf); + + if (num_vfs > 64) { + otx2_cpt_write64(cptpf->reg_base, blkaddr_rvum, 0, + rvu_pf_vfpf_mbox_intx(1), ~0ull); + vector = pci_irq_vector(cptpf->pdev, rvu_pf_int_vec_vfpf_mbox1); + free_irq(vector, cptpf); + } +} + +static void cptpf_enable_vf_flr_intrs(struct otx2_cptpf_dev *cptpf) +{ + /* clear interrupt if any */ + otx2_cpt_write64(cptpf->reg_base, blkaddr_rvum, 0, rvu_pf_vfflr_intx(0), + ~0x0ull); + otx2_cpt_write64(cptpf->reg_base, blkaddr_rvum, 0, rvu_pf_vfflr_intx(1), + ~0x0ull); + + /* enable vf flr interrupts */ + otx2_cpt_write64(cptpf->reg_base, blkaddr_rvum, 0, + rvu_pf_vfflr_int_ena_w1sx(0), ~0x0ull); + otx2_cpt_write64(cptpf->reg_base, blkaddr_rvum, 0, + rvu_pf_vfflr_int_ena_w1sx(1), ~0x0ull); +} + +static void cptpf_disable_vf_flr_intrs(struct otx2_cptpf_dev *cptpf, + int num_vfs) +{ + int vector; + + /* disable vf flr interrupts */ + otx2_cpt_write64(cptpf->reg_base, blkaddr_rvum, 0, + rvu_pf_vfflr_int_ena_w1cx(0), ~0x0ull); + otx2_cpt_write64(cptpf->reg_base, blkaddr_rvum, 0, + rvu_pf_vfflr_int_ena_w1cx(1), ~0x0ull); + + /* clear interrupt if any */ + otx2_cpt_write64(cptpf->reg_base, blkaddr_rvum, 0, 
rvu_pf_vfflr_intx(0), + ~0x0ull); + otx2_cpt_write64(cptpf->reg_base, blkaddr_rvum, 0, rvu_pf_vfflr_intx(1), + ~0x0ull); + + vector = pci_irq_vector(cptpf->pdev, rvu_pf_int_vec_vfflr0); + free_irq(vector, cptpf); + + if (num_vfs > 64) { + vector = pci_irq_vector(cptpf->pdev, rvu_pf_int_vec_vfflr1); + free_irq(vector, cptpf); + } +} + +static void cptpf_flr_wq_handler(struct work_struct *work) +{ + struct cptpf_flr_work *flr_work; + struct otx2_cptpf_dev *pf; + struct mbox_msghdr *req; + struct otx2_mbox *mbox; + int vf, reg = 0; + + flr_work = container_of(work, struct cptpf_flr_work, work); + pf = flr_work->pf; + mbox = &pf->afpf_mbox; + + vf = flr_work - pf->flr_work; + + req = otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*req), + sizeof(struct msg_rsp)); + if (!req) + return; + + req->sig = otx2_mbox_req_sig; + req->id = mbox_msg_vf_flr; + req->pcifunc &= rvu_pfvf_func_mask; + req->pcifunc |= (vf + 1) & rvu_pfvf_func_mask; + + otx2_cpt_send_mbox_msg(mbox, pf->pdev); + + if (vf >= 64) { + reg = 1; + vf = vf - 64; + } + /* clear transaction pending register */ + otx2_cpt_write64(pf->reg_base, blkaddr_rvum, 0, + rvu_pf_vftrpendx(reg), bit_ull(vf)); + otx2_cpt_write64(pf->reg_base, blkaddr_rvum, 0, + rvu_pf_vfflr_int_ena_w1sx(reg), bit_ull(vf)); +} + +static irqreturn_t cptpf_vf_flr_intr(int __always_unused irq, void *arg) +{ + int reg, dev, vf, start_vf, num_reg = 1; + struct otx2_cptpf_dev *cptpf = arg; + u64 intr; + + if (cptpf->max_vfs > 64) + num_reg = 2; + + for (reg = 0; reg < num_reg; reg++) { + intr = otx2_cpt_read64(cptpf->reg_base, blkaddr_rvum, 0, + rvu_pf_vfflr_intx(reg)); + if (!intr) + continue; + start_vf = 64 * reg; + for (vf = 0; vf < 64; vf++) { + if (!(intr & bit_ull(vf))) + continue; + dev = vf + start_vf; + queue_work(cptpf->flr_wq, &cptpf->flr_work[dev].work); + /* clear interrupt */ + otx2_cpt_write64(cptpf->reg_base, blkaddr_rvum, 0, + rvu_pf_vfflr_intx(reg), bit_ull(vf)); + /* disable the interrupt */ + otx2_cpt_write64(cptpf->reg_base, 
blkaddr_rvum, 0, + rvu_pf_vfflr_int_ena_w1cx(reg), + bit_ull(vf)); + } + } + return irq_handled; +} + +static void cptpf_unregister_vfpf_intr(struct otx2_cptpf_dev *cptpf, + int num_vfs) +{ + cptpf_disable_vfpf_mbox_intr(cptpf, num_vfs); + cptpf_disable_vf_flr_intrs(cptpf, num_vfs); +} + +static int cptpf_register_vfpf_intr(struct otx2_cptpf_dev *cptpf, int num_vfs) +{ + struct pci_dev *pdev = cptpf->pdev; + struct device *dev = &pdev->dev; + int ret, vector; + + vector = pci_irq_vector(pdev, rvu_pf_int_vec_vfpf_mbox0); + /* register vf-pf mailbox interrupt handler */ + ret = request_irq(vector, otx2_cptpf_vfpf_mbox_intr, 0, "cptvfpf mbox0", + cptpf); + if (ret) { + dev_err(dev, + "irq registration failed for pfvf mbox0 irq "); + return ret; + } + vector = pci_irq_vector(pdev, rvu_pf_int_vec_vfflr0); + /* register vf flr interrupt handler */ + ret = request_irq(vector, cptpf_vf_flr_intr, 0, "cptpf flr0", cptpf); + if (ret) { + dev_err(dev, + "irq registration failed for vfflr0 irq "); + goto free_mbox0_irq; + } + if (num_vfs > 64) { + vector = pci_irq_vector(pdev, rvu_pf_int_vec_vfpf_mbox1); + ret = request_irq(vector, otx2_cptpf_vfpf_mbox_intr, 0, + "cptvfpf mbox1", cptpf); + if (ret) { + dev_err(dev, + "irq registration failed for pfvf mbox1 irq "); + goto free_flr0_irq; + } + vector = pci_irq_vector(pdev, rvu_pf_int_vec_vfflr1); + /* register vf flr interrupt handler */ + ret = request_irq(vector, cptpf_vf_flr_intr, 0, "cptpf flr1", + cptpf); + if (ret) { + dev_err(dev, + "irq registration failed for vfflr1 irq "); + goto free_mbox1_irq; + } + } + cptpf_enable_vfpf_mbox_intr(cptpf, num_vfs); + cptpf_enable_vf_flr_intrs(cptpf); + + return 0; + +free_mbox1_irq: + vector = pci_irq_vector(pdev, rvu_pf_int_vec_vfpf_mbox1); + free_irq(vector, cptpf); +free_flr0_irq: + vector = pci_irq_vector(pdev, rvu_pf_int_vec_vfflr0); + free_irq(vector, cptpf); +free_mbox0_irq: + vector = pci_irq_vector(pdev, rvu_pf_int_vec_vfpf_mbox0); + free_irq(vector, cptpf); + return ret; +} + 
+static void cptpf_flr_wq_destroy(struct otx2_cptpf_dev *pf) +{ + if (!pf->flr_wq) + return; + destroy_workqueue(pf->flr_wq); + pf->flr_wq = null; + kfree(pf->flr_work); +} + +static int cptpf_flr_wq_init(struct otx2_cptpf_dev *cptpf, int num_vfs) +{ + int vf; + + cptpf->flr_wq = alloc_ordered_workqueue("cptpf_flr_wq", 0); + if (!cptpf->flr_wq) + return -enomem; + + cptpf->flr_work = kcalloc(num_vfs, sizeof(struct cptpf_flr_work), + gfp_kernel); + if (!cptpf->flr_work) + goto destroy_wq; + + for (vf = 0; vf < num_vfs; vf++) { + cptpf->flr_work[vf].pf = cptpf; + init_work(&cptpf->flr_work[vf].work, cptpf_flr_wq_handler); + } + return 0; + +destroy_wq: + destroy_workqueue(cptpf->flr_wq); + return -enomem; +} + +static int cptpf_vfpf_mbox_init(struct otx2_cptpf_dev *cptpf, int num_vfs) +{ + struct device *dev = &cptpf->pdev->dev; + u64 vfpf_mbox_base; + int err, i; + + cptpf->vfpf_mbox_wq = alloc_workqueue("cpt_vfpf_mailbox", + wq_unbound | wq_highpri | + wq_mem_reclaim, 1); + if (!cptpf->vfpf_mbox_wq) + return -enomem; + + /* map vf-pf mailbox memory */ + vfpf_mbox_base = readq(cptpf->reg_base + rvu_pf_vf_bar4_addr); + if (!vfpf_mbox_base) { + dev_err(dev, "vf-pf mailbox address not configured "); + err = -enomem; + goto free_wqe; + } + cptpf->vfpf_mbox_base = devm_ioremap_wc(dev, vfpf_mbox_base, + mbox_size * cptpf->max_vfs); + if (!cptpf->vfpf_mbox_base) { + dev_err(dev, "mapping of vf-pf mailbox address failed "); + err = -enomem; + goto free_wqe; + } + err = otx2_mbox_init(&cptpf->vfpf_mbox, cptpf->vfpf_mbox_base, + cptpf->pdev, cptpf->reg_base, mbox_dir_pfvf, + num_vfs); + if (err) + goto free_wqe; + + for (i = 0; i < num_vfs; i++) { + cptpf->vf[i].vf_id = i; + cptpf->vf[i].cptpf = cptpf; + cptpf->vf[i].intr_idx = i % 64; + init_work(&cptpf->vf[i].vfpf_mbox_work, + otx2_cptpf_vfpf_mbox_handler); + } + return 0; + +free_wqe: + destroy_workqueue(cptpf->vfpf_mbox_wq); + return err; +} + +static void cptpf_vfpf_mbox_destroy(struct otx2_cptpf_dev *cptpf) +{ + 
destroy_workqueue(cptpf->vfpf_mbox_wq); + otx2_mbox_destroy(&cptpf->vfpf_mbox); +} + +static int cptpf_sriov_disable(struct pci_dev *pdev) +{ + struct otx2_cptpf_dev *cptpf = pci_get_drvdata(pdev); + int num_vfs = pci_num_vf(pdev); + + if (!num_vfs) + return 0; + + pci_disable_sriov(pdev); + cptpf_unregister_vfpf_intr(cptpf, num_vfs); + cptpf_flr_wq_destroy(cptpf); + cptpf_vfpf_mbox_destroy(cptpf); + module_put(this_module); + cptpf->enabled_vfs = 0; + + return 0; +} + +static int cptpf_sriov_enable(struct pci_dev *pdev, int num_vfs) +{ + struct otx2_cptpf_dev *cptpf = pci_get_drvdata(pdev); + int ret; + + /* initialize vf<=>pf mailbox */ + ret = cptpf_vfpf_mbox_init(cptpf, num_vfs); + if (ret) + return ret; + + ret = cptpf_flr_wq_init(cptpf, num_vfs); + if (ret) + goto destroy_mbox; + /* register vf<=>pf mailbox interrupt */ + ret = cptpf_register_vfpf_intr(cptpf, num_vfs); + if (ret) + goto destroy_flr; + + cptpf->enabled_vfs = num_vfs; + ret = pci_enable_sriov(pdev, num_vfs); + if (ret) + goto disable_intr; + + dev_notice(&cptpf->pdev->dev, "vfs enabled: %d ", num_vfs); + + try_module_get(this_module); + return num_vfs; + +disable_intr: + cptpf_unregister_vfpf_intr(cptpf, num_vfs); + cptpf->enabled_vfs = 0; +destroy_flr: + cptpf_flr_wq_destroy(cptpf); +destroy_mbox: + cptpf_vfpf_mbox_destroy(cptpf); + return ret; +} + +static int otx2_cptpf_sriov_configure(struct pci_dev *pdev, int num_vfs) +{ + if (num_vfs > 0) { + return cptpf_sriov_enable(pdev, num_vfs); + } else { + return cptpf_sriov_disable(pdev); + } +} + + cptpf->max_vfs = pci_sriov_get_totalvfs(pdev); + + + cptpf_sriov_disable(pdev); + .sriov_configure = otx2_cptpf_sriov_configure diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c +static int forward_to_af(struct otx2_cptpf_dev *cptpf, + struct otx2_cptvf_info *vf, + struct 
mbox_msghdr *req, int size) +{ + struct mbox_msghdr *msg; + int ret; + + msg = otx2_mbox_alloc_msg(&cptpf->afpf_mbox, 0, size); + if (msg == null) + return -enomem; + + memcpy((uint8_t *)msg + sizeof(struct mbox_msghdr), + (uint8_t *)req + sizeof(struct mbox_msghdr), size); + msg->id = req->id; + msg->pcifunc = req->pcifunc; + msg->sig = req->sig; + msg->ver = req->ver; + + otx2_mbox_msg_send(&cptpf->afpf_mbox, 0); + ret = otx2_mbox_wait_for_rsp(&cptpf->afpf_mbox, 0); + if (ret == -eio) { + dev_err(&cptpf->pdev->dev, "rvu mbox timeout. "); + return ret; + } else if (ret) { + dev_err(&cptpf->pdev->dev, "rvu mbox error: %d. ", ret); + return -efault; + } + return 0; +} + +static int cptpf_handle_vf_req(struct otx2_cptpf_dev *cptpf, + struct otx2_cptvf_info *vf, + struct mbox_msghdr *req, int size) +{ + int err = 0; + + /* check if msg is valid, if not reply with an invalid msg */ + if (req->sig != otx2_mbox_req_sig) + goto inval_msg; + + return forward_to_af(cptpf, vf, req, size); + +inval_msg: + otx2_reply_invalid_msg(&cptpf->vfpf_mbox, vf->vf_id, 0, req->id); + otx2_mbox_msg_send(&cptpf->vfpf_mbox, vf->vf_id); + return err; +} + +irqreturn_t otx2_cptpf_vfpf_mbox_intr(int __always_unused irq, void *arg) +{ + struct otx2_cptpf_dev *cptpf = arg; + struct otx2_cptvf_info *vf; + int i, vf_idx; + u64 intr; + + /* + * check which vf has raised an interrupt and schedule + * corresponding work queue to process the messages + */ + for (i = 0; i < 2; i++) { + /* read the interrupt bits */ + intr = otx2_cpt_read64(cptpf->reg_base, blkaddr_rvum, 0, + rvu_pf_vfpf_mbox_intx(i)); + + for (vf_idx = i * 64; vf_idx < cptpf->enabled_vfs; vf_idx++) { + vf = &cptpf->vf[vf_idx]; + if (intr & (1ull << vf->intr_idx)) { + queue_work(cptpf->vfpf_mbox_wq, + &vf->vfpf_mbox_work); + /* clear the interrupt */ + otx2_cpt_write64(cptpf->reg_base, blkaddr_rvum, + 0, rvu_pf_vfpf_mbox_intx(i), + bit_ull(vf->intr_idx)); + } + } + } + return irq_handled; +} + +void otx2_cptpf_vfpf_mbox_handler(struct 
work_struct *work) +{ + struct otx2_cptpf_dev *cptpf; + struct otx2_cptvf_info *vf; + struct otx2_mbox_dev *mdev; + struct mbox_hdr *req_hdr; + struct mbox_msghdr *msg; + struct otx2_mbox *mbox; + int offset, i, err; + + vf = container_of(work, struct otx2_cptvf_info, vfpf_mbox_work); + cptpf = vf->cptpf; + mbox = &cptpf->vfpf_mbox; + /* sync with mbox memory region */ + smp_rmb(); + mdev = &mbox->dev[vf->vf_id]; + /* process received mbox messages */ + req_hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start); + offset = mbox->rx_start + align(sizeof(*req_hdr), mbox_msg_align); + + for (i = 0; i < req_hdr->num_msgs; i++) { + msg = (struct mbox_msghdr *)(mdev->mbase + offset); + + /* set which vf sent this message based on mbox irq */ + msg->pcifunc = ((u16)cptpf->pf_id << rvu_pfvf_pf_shift) | + ((vf->vf_id + 1) & rvu_pfvf_func_mask); + + err = cptpf_handle_vf_req(cptpf, vf, msg, + msg->next_msgoff - offset); + /* + * behave as the af, drop the msg if there is + * no memory, timeout handling also goes here + */ + if (err == -enomem || err == -eio) + break; + offset = msg->next_msgoff; + } + /* send mbox responses to vf */ + if (mdev->num_msgs) + otx2_mbox_msg_send(mbox, vf->vf_id); +} + +static void forward_to_vf(struct otx2_cptpf_dev *cptpf, struct mbox_msghdr *msg, + int vf_id, int size) +{ + struct otx2_mbox *vfpf_mbox; + struct mbox_msghdr *fwd; + + if (msg->id >= mbox_msg_max) { + dev_err(&cptpf->pdev->dev, + "mbox msg with unknown id %d ", msg->id); + return; + } + if (msg->sig != otx2_mbox_rsp_sig) { + dev_err(&cptpf->pdev->dev, + "mbox msg with wrong signature %x, id %d ", + msg->sig, msg->id); + return; + } + vfpf_mbox = &cptpf->vfpf_mbox; + vf_id--; + if (vf_id >= cptpf->enabled_vfs) { + dev_err(&cptpf->pdev->dev, + "mbox msg to unknown vf: %d >= %d ", + vf_id, cptpf->enabled_vfs); + return; + } + if (msg->id == mbox_msg_vf_flr) + return; + + fwd = otx2_mbox_alloc_msg(vfpf_mbox, vf_id, size); + if (!fwd) { + dev_err(&cptpf->pdev->dev, + "forwarding to 
vf%d failed. ", vf_id); + return; + } + memcpy((uint8_t *)fwd + sizeof(struct mbox_msghdr), + (uint8_t *)msg + sizeof(struct mbox_msghdr), size); + fwd->id = msg->id; + fwd->pcifunc = msg->pcifunc; + fwd->sig = msg->sig; + fwd->ver = msg->ver; + fwd->rc = msg->rc; +} + - int offset, i; + int offset, vf_id, i; - process_afpf_mbox_msg(cptpf, msg); + vf_id = (msg->pcifunc >> rvu_pfvf_func_shift) & + rvu_pfvf_func_mask; + if (vf_id > 0) + forward_to_vf(cptpf, msg, vf_id, + msg->next_msgoff - offset); + else + process_afpf_mbox_msg(cptpf, msg); +
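The otx2_cptpf_vfpf_mbox_intr() handler in the diff above scans two 64-bit interrupt registers (covering up to 128 VFs) and queues work for each VF whose bit is set. A minimal user-space sketch of that bit-scanning pattern follows; the register reads and queue_work() call are replaced by an input array and an output list, and the per-register VF cap is an assumption made explicit here (the kernel code relies on `intr_idx` for the same mapping):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the VF interrupt scan in otx2_cptpf_vfpf_mbox_intr():
 * two 64-bit registers cover up to 128 VFs; VF n maps to bit n % 64
 * of register n / 64.  "Queueing work" is modeled by recording the
 * VF index into handled_vfs and counting. */
static int scan_vf_intr(const uint64_t intr[2], int enabled_vfs,
                        int *handled_vfs, int max)
{
    int count = 0;

    for (int i = 0; i < 2; i++) {
        for (int vf = i * 64; vf < enabled_vfs && vf < (i + 1) * 64; vf++) {
            int bit = vf % 64;            /* intr_idx within register i */
            if (intr[i] & (1ULL << bit)) {
                if (count < max)
                    handled_vfs[count] = vf; /* stands in for queue_work() */
                count++;
            }
        }
    }
    return count;
}
```

In the driver the same loop also clears each serviced bit by writing it back to rvu_pf_vfpf_mbox_intx(i); that side effect is omitted here.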
|
Cryptography hardware acceleration
|
fe16eceab0463c160a333b7df4edd707f3a24d5c
|
Srujana Challa
|
drivers
|
crypto
|
marvell, octeontx2
|
crypto: octeontx2 - load microcode and create engine groups
|
CPT includes microcoded GigaCypher symmetric engines (SEs), IPsec symmetric engines (IEs), and asymmetric engines (AEs). Each engine receives CPT instructions from the engine groups it has subscribed to. This patch loads microcode, configures three engine groups (one for SEs, one for IEs and one for AEs), and configures all engines.
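The group-selection rule this commit implements (visible in otx2_cpt_get_eng_grp() in the diff below) can be sketched stand-alone: return the first enabled group carrying the requested engine type, except that SE requests skip groups that also carry IE engines, since those serve the combined SE+IPsec microcode. The struct layout here is a simplification invented for illustration, not the driver's real types:

```c
#include <assert.h>
#include <stdbool.h>

#define INVALID_GRP 0xff

enum eng_type { AE_TYPES = 1, SE_TYPES = 2, IE_TYPES = 3 };

/* Simplified stand-in for the driver's engine-group bookkeeping. */
struct grp {
    bool enabled;
    unsigned type_mask; /* one bit per enum eng_type value */
};

/* Sketch of the lookup rule in otx2_cpt_get_eng_grp(). */
static int get_eng_grp(const struct grp *g, int n, enum eng_type type)
{
    for (int i = 0; i < n; i++) {
        if (!g[i].enabled)
            continue;
        if (!(g[i].type_mask & (1u << type)))
            continue;
        /* SE requests want a pure-SE group, not the SE+IPsec one */
        if (type == SE_TYPES && (g[i].type_mask & (1u << IE_TYPES)))
            continue;
        return i;
    }
    return INVALID_GRP;
}
```

With the three groups the commit creates (SE, IE, AE), each request type resolves to its own group number; the SE+IE exclusion only matters when a combined group exists.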
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for a thermal power management framework to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add support for marvell octeontx2 cpt engine
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['octeontx2']
|
['h', 'c', 'makefile']
| 8
| 1,655
| 2
|
--- diff --git a/drivers/crypto/marvell/octeontx2/makefile b/drivers/crypto/marvell/octeontx2/makefile --- a/drivers/crypto/marvell/octeontx2/makefile +++ b/drivers/crypto/marvell/octeontx2/makefile - otx2_cpt_mbox_common.o + otx2_cpt_mbox_common.o otx2_cptpf_ucode.o diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h --- a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h +++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h +#define otx2_cpt_invalid_crypto_eng_grp 0xff +#define otx2_cpt_name_length 64 + +#define bad_otx2_cpt_eng_type otx2_cpt_max_eng_types + +enum otx2_cpt_eng_type { + otx2_cpt_ae_types = 1, + otx2_cpt_se_types = 2, + otx2_cpt_ie_types = 3, + otx2_cpt_max_eng_types, +}; + +/* take mbox id from end of cpt mbox range in af (range 0xa00 - 0xbff) */ +#define mbox_msg_get_eng_grp_num 0xbff + +/* + * message request and response to get engine group number + * which has attached a given type of engines (se, ae, ie) + * this messages are only used between cpt pf <=> cpt vf + */ +struct otx2_cpt_egrp_num_msg { + struct mbox_msghdr hdr; + u8 eng_type; +}; + +struct otx2_cpt_egrp_num_rsp { + struct mbox_msghdr hdr; + u8 eng_type; + u8 eng_grp_num; +}; + + +int otx2_cpt_send_af_reg_requests(struct otx2_mbox *mbox, + struct pci_dev *pdev); +int otx2_cpt_add_read_af_reg(struct otx2_mbox *mbox, + struct pci_dev *pdev, u64 reg, u64 *val); +int otx2_cpt_add_write_af_reg(struct otx2_mbox *mbox, struct pci_dev *pdev, + u64 reg, u64 val); +int otx2_cpt_read_af_reg(struct otx2_mbox *mbox, struct pci_dev *pdev, + u64 reg, u64 *val); +int otx2_cpt_write_af_reg(struct otx2_mbox *mbox, struct pci_dev *pdev, + u64 reg, u64 val); diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_mbox_common.c b/drivers/crypto/marvell/octeontx2/otx2_cpt_mbox_common.c --- a/drivers/crypto/marvell/octeontx2/otx2_cpt_mbox_common.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_mbox_common.c + +int 
otx2_cpt_send_af_reg_requests(struct otx2_mbox *mbox, struct pci_dev *pdev) +{ + return otx2_cpt_send_mbox_msg(mbox, pdev); +} + +int otx2_cpt_add_read_af_reg(struct otx2_mbox *mbox, struct pci_dev *pdev, + u64 reg, u64 *val) +{ + struct cpt_rd_wr_reg_msg *reg_msg; + + reg_msg = (struct cpt_rd_wr_reg_msg *) + otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*reg_msg), + sizeof(*reg_msg)); + if (reg_msg == null) { + dev_err(&pdev->dev, "rvu mbox failed to get message. "); + return -efault; + } + + reg_msg->hdr.id = mbox_msg_cpt_rd_wr_register; + reg_msg->hdr.sig = otx2_mbox_req_sig; + reg_msg->hdr.pcifunc = 0; + + reg_msg->is_write = 0; + reg_msg->reg_offset = reg; + reg_msg->ret_val = val; + + return 0; +} + +int otx2_cpt_add_write_af_reg(struct otx2_mbox *mbox, struct pci_dev *pdev, + u64 reg, u64 val) +{ + struct cpt_rd_wr_reg_msg *reg_msg; + + reg_msg = (struct cpt_rd_wr_reg_msg *) + otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*reg_msg), + sizeof(*reg_msg)); + if (reg_msg == null) { + dev_err(&pdev->dev, "rvu mbox failed to get message. 
"); + return -efault; + } + + reg_msg->hdr.id = mbox_msg_cpt_rd_wr_register; + reg_msg->hdr.sig = otx2_mbox_req_sig; + reg_msg->hdr.pcifunc = 0; + + reg_msg->is_write = 1; + reg_msg->reg_offset = reg; + reg_msg->val = val; + + return 0; +} + +int otx2_cpt_read_af_reg(struct otx2_mbox *mbox, struct pci_dev *pdev, + u64 reg, u64 *val) +{ + int ret; + + ret = otx2_cpt_add_read_af_reg(mbox, pdev, reg, val); + if (ret) + return ret; + + return otx2_cpt_send_mbox_msg(mbox, pdev); +} + +int otx2_cpt_write_af_reg(struct otx2_mbox *mbox, struct pci_dev *pdev, + u64 reg, u64 val) +{ + int ret; + + ret = otx2_cpt_add_write_af_reg(mbox, pdev, reg, val); + if (ret) + return ret; + + return otx2_cpt_send_mbox_msg(mbox, pdev); +} diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf.h b/drivers/crypto/marvell/octeontx2/otx2_cptpf.h --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf.h +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf.h +#include "otx2_cptpf_ucode.h" + struct otx2_cpt_eng_grps eng_grps;/* engine groups information */ + diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c +#include "otx2_cptpf_ucode.h" +static int cptpf_device_reset(struct otx2_cptpf_dev *cptpf) +{ + int timeout = 10, ret; + u64 reg = 0; + + ret = otx2_cpt_write_af_reg(&cptpf->afpf_mbox, cptpf->pdev, + cpt_af_blk_rst, 0x1); + if (ret) + return ret; + + do { + ret = otx2_cpt_read_af_reg(&cptpf->afpf_mbox, cptpf->pdev, + cpt_af_blk_rst, ®); + if (ret) + return ret; + + if (!((reg >> 63) & 0x1)) + break; + + usleep_range(10000, 20000); + if (timeout-- < 0) + return -ebusy; + } while (1); + + return ret; +} + +static int cptpf_device_init(struct otx2_cptpf_dev *cptpf) +{ + union otx2_cptx_af_constants1 af_cnsts1 = {0}; + int ret = 0; + + /* reset the cpt pf device */ + ret = cptpf_device_reset(cptpf); + if (ret) + return ret; 
+ + /* get number of se, ie and ae engines */ + ret = otx2_cpt_read_af_reg(&cptpf->afpf_mbox, cptpf->pdev, + cpt_af_constants1, &af_cnsts1.u); + if (ret) + return ret; + + cptpf->eng_grps.avail.max_se_cnt = af_cnsts1.s.se; + cptpf->eng_grps.avail.max_ie_cnt = af_cnsts1.s.ie; + cptpf->eng_grps.avail.max_ae_cnt = af_cnsts1.s.ae; + + /* disable all cores */ + ret = otx2_cpt_disable_all_cores(cptpf); + + return ret; +} + + ret = otx2_cpt_create_eng_grps(cptpf->pdev, &cptpf->eng_grps); + if (ret) + goto disable_intr; + + /* initialize cpt pf device */ + err = cptpf_device_init(cptpf); + if (err) + goto unregister_intr; + + /* initialize engine groups */ + err = otx2_cpt_init_eng_grps(pdev, &cptpf->eng_grps); + if (err) + goto unregister_intr; + +unregister_intr: + cptpf_disable_afpf_mbox_intr(cptpf); + /* cleanup engine groups */ + otx2_cpt_cleanup_eng_grps(pdev, &cptpf->eng_grps); diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c +static int handle_msg_get_eng_grp_num(struct otx2_cptpf_dev *cptpf, + struct otx2_cptvf_info *vf, + struct mbox_msghdr *req) +{ + struct otx2_cpt_egrp_num_msg *grp_req; + struct otx2_cpt_egrp_num_rsp *rsp; + + grp_req = (struct otx2_cpt_egrp_num_msg *)req; + rsp = (struct otx2_cpt_egrp_num_rsp *) + otx2_mbox_alloc_msg(&cptpf->vfpf_mbox, vf->vf_id, sizeof(*rsp)); + if (!rsp) + return -enomem; + + rsp->hdr.id = mbox_msg_get_eng_grp_num; + rsp->hdr.sig = otx2_mbox_rsp_sig; + rsp->hdr.pcifunc = req->pcifunc; + rsp->eng_type = grp_req->eng_type; + rsp->eng_grp_num = otx2_cpt_get_eng_grp(&cptpf->eng_grps, + grp_req->eng_type); + + return 0; +} + - return forward_to_af(cptpf, vf, req, size); + switch (req->id) { + case mbox_msg_get_eng_grp_num: + err = handle_msg_get_eng_grp_num(cptpf, vf, req); + break; + default: + err = forward_to_af(cptpf, vf, req, size); + break; + } 
+ return err; + struct cpt_rd_wr_reg_msg *rsp_rd_wr; + case mbox_msg_cpt_rd_wr_register: + rsp_rd_wr = (struct cpt_rd_wr_reg_msg *)msg; + if (msg->rc) { + dev_err(dev, "reg %llx rd/wr(%d) failed %d ", + rsp_rd_wr->reg_offset, rsp_rd_wr->is_write, + msg->rc); + return; + } + if (!rsp_rd_wr->is_write) + *rsp_rd_wr->ret_val = rsp_rd_wr->val; + break; + diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c --- /dev/null +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c +// spdx-license-identifier: gpl-2.0-only +/* copyright (c) 2020 marvell. */ + +#include <linux/ctype.h> +#include <linux/firmware.h> +#include "otx2_cptpf_ucode.h" +#include "otx2_cpt_common.h" +#include "otx2_cptpf.h" +#include "rvu_reg.h" + +#define csr_delay 30 + +#define loadfvc_rlen 8 +#define loadfvc_major_op 0x01 +#define loadfvc_minor_op 0x08 + +struct fw_info_t { + struct list_head ucodes; +}; + +static struct otx2_cpt_bitmap get_cores_bmap(struct device *dev, + struct otx2_cpt_eng_grp_info *eng_grp) +{ + struct otx2_cpt_bitmap bmap = { {0} }; + bool found = false; + int i; + + if (eng_grp->g->engs_num > otx2_cpt_max_engines) { + dev_err(dev, "unsupported number of engines %d on octeontx2 ", + eng_grp->g->engs_num); + return bmap; + } + + for (i = 0; i < otx2_cpt_max_etypes_per_grp; i++) { + if (eng_grp->engs[i].type) { + bitmap_or(bmap.bits, bmap.bits, + eng_grp->engs[i].bmap, + eng_grp->g->engs_num); + bmap.size = eng_grp->g->engs_num; + found = true; + } + } + + if (!found) + dev_err(dev, "no engines reserved for engine group %d ", + eng_grp->idx); + return bmap; +} + +static int is_eng_type(int val, int eng_type) +{ + return val & (1 << eng_type); +} + +static int is_2nd_ucode_used(struct otx2_cpt_eng_grp_info *eng_grp) +{ + if (eng_grp->ucode[1].type) + return true; + else + return false; +} + +static void set_ucode_filename(struct otx2_cpt_ucode *ucode, + const char *filename) +{ + strlcpy(ucode->filename, filename, 
otx2_cpt_name_length); +} + +static char *get_eng_type_str(int eng_type) +{ + char *str = "unknown"; + + switch (eng_type) { + case otx2_cpt_se_types: + str = "se"; + break; + + case otx2_cpt_ie_types: + str = "ie"; + break; + + case otx2_cpt_ae_types: + str = "ae"; + break; + } + return str; +} + +static char *get_ucode_type_str(int ucode_type) +{ + char *str = "unknown"; + + switch (ucode_type) { + case (1 << otx2_cpt_se_types): + str = "se"; + break; + + case (1 << otx2_cpt_ie_types): + str = "ie"; + break; + + case (1 << otx2_cpt_ae_types): + str = "ae"; + break; + + case (1 << otx2_cpt_se_types | 1 << otx2_cpt_ie_types): + str = "se+ipsec"; + break; + } + return str; +} + +static int get_ucode_type(struct device *dev, + struct otx2_cpt_ucode_hdr *ucode_hdr, + int *ucode_type) +{ + struct otx2_cptpf_dev *cptpf = dev_get_drvdata(dev); + char ver_str_prefix[otx2_cpt_ucode_ver_str_sz]; + char tmp_ver_str[otx2_cpt_ucode_ver_str_sz]; + struct pci_dev *pdev = cptpf->pdev; + int i, val = 0; + u8 nn; + + strlcpy(tmp_ver_str, ucode_hdr->ver_str, otx2_cpt_ucode_ver_str_sz); + for (i = 0; i < strlen(tmp_ver_str); i++) + tmp_ver_str[i] = tolower(tmp_ver_str[i]); + + sprintf(ver_str_prefix, "ocpt-%02d", pdev->revision); + if (!strnstr(tmp_ver_str, ver_str_prefix, otx2_cpt_ucode_ver_str_sz)) + return -einval; + + nn = ucode_hdr->ver_num.nn; + if (strnstr(tmp_ver_str, "se-", otx2_cpt_ucode_ver_str_sz) && + (nn == otx2_cpt_se_uc_type1 || nn == otx2_cpt_se_uc_type2 || + nn == otx2_cpt_se_uc_type3)) + val |= 1 << otx2_cpt_se_types; + if (strnstr(tmp_ver_str, "ie-", otx2_cpt_ucode_ver_str_sz) && + (nn == otx2_cpt_ie_uc_type1 || nn == otx2_cpt_ie_uc_type2 || + nn == otx2_cpt_ie_uc_type3)) + val |= 1 << otx2_cpt_ie_types; + if (strnstr(tmp_ver_str, "ae", otx2_cpt_ucode_ver_str_sz) && + nn == otx2_cpt_ae_uc_type) + val |= 1 << otx2_cpt_ae_types; + + *ucode_type = val; + + if (!val) + return -einval; + + return 0; +} + +static int __write_ucode_base(struct otx2_cptpf_dev *cptpf, int 
eng, + dma_addr_t dma_addr) +{ + return otx2_cpt_write_af_reg(&cptpf->afpf_mbox, cptpf->pdev, + cpt_af_exex_ucode_base(eng), + (u64)dma_addr); +} + +static int cpt_set_ucode_base(struct otx2_cpt_eng_grp_info *eng_grp, void *obj) +{ + struct otx2_cptpf_dev *cptpf = obj; + struct otx2_cpt_engs_rsvd *engs; + dma_addr_t dma_addr; + int i, bit, ret; + + /* set pf number for microcode fetches */ + ret = otx2_cpt_write_af_reg(&cptpf->afpf_mbox, cptpf->pdev, + cpt_af_pf_func, + cptpf->pf_id << rvu_pfvf_pf_shift); + if (ret) + return ret; + + for (i = 0; i < otx2_cpt_max_etypes_per_grp; i++) { + engs = &eng_grp->engs[i]; + if (!engs->type) + continue; + + dma_addr = engs->ucode->dma; + + /* + * set ucode_base only for the cores which are not used, + * other cores should have already valid ucode_base set + */ + for_each_set_bit(bit, engs->bmap, eng_grp->g->engs_num) + if (!eng_grp->g->eng_ref_cnt[bit]) { + ret = __write_ucode_base(cptpf, bit, dma_addr); + if (ret) + return ret; + } + } + return 0; +} + +static int cpt_detach_and_disable_cores(struct otx2_cpt_eng_grp_info *eng_grp, + void *obj) +{ + struct otx2_cptpf_dev *cptpf = obj; + struct otx2_cpt_bitmap bmap; + int i, timeout = 10; + int busy, ret; + u64 reg = 0; + + bmap = get_cores_bmap(&cptpf->pdev->dev, eng_grp); + if (!bmap.size) + return -einval; + + /* detach the cores from group */ + for_each_set_bit(i, bmap.bits, bmap.size) { + ret = otx2_cpt_read_af_reg(&cptpf->afpf_mbox, cptpf->pdev, + cpt_af_exex_ctl2(i), ®); + if (ret) + return ret; + + if (reg & (1ull << eng_grp->idx)) { + eng_grp->g->eng_ref_cnt[i]--; + reg &= ~(1ull << eng_grp->idx); + + ret = otx2_cpt_write_af_reg(&cptpf->afpf_mbox, + cptpf->pdev, + cpt_af_exex_ctl2(i), reg); + if (ret) + return ret; + } + } + + /* wait for cores to become idle */ + do { + busy = 0; + usleep_range(10000, 20000); + if (timeout-- < 0) + return -ebusy; + + for_each_set_bit(i, bmap.bits, bmap.size) { + ret = otx2_cpt_read_af_reg(&cptpf->afpf_mbox, + cptpf->pdev, + 
cpt_af_exex_sts(i), ®); + if (ret) + return ret; + + if (reg & 0x1) { + busy = 1; + break; + } + } + } while (busy); + + /* disable the cores only if they are not used anymore */ + for_each_set_bit(i, bmap.bits, bmap.size) { + if (!eng_grp->g->eng_ref_cnt[i]) { + ret = otx2_cpt_write_af_reg(&cptpf->afpf_mbox, + cptpf->pdev, + cpt_af_exex_ctl(i), 0x0); + if (ret) + return ret; + } + } + + return 0; +} + +static int cpt_attach_and_enable_cores(struct otx2_cpt_eng_grp_info *eng_grp, + void *obj) +{ + struct otx2_cptpf_dev *cptpf = obj; + struct otx2_cpt_bitmap bmap; + u64 reg = 0; + int i, ret; + + bmap = get_cores_bmap(&cptpf->pdev->dev, eng_grp); + if (!bmap.size) + return -einval; + + /* attach the cores to the group */ + for_each_set_bit(i, bmap.bits, bmap.size) { + ret = otx2_cpt_read_af_reg(&cptpf->afpf_mbox, cptpf->pdev, + cpt_af_exex_ctl2(i), ®); + if (ret) + return ret; + + if (!(reg & (1ull << eng_grp->idx))) { + eng_grp->g->eng_ref_cnt[i]++; + reg |= 1ull << eng_grp->idx; + + ret = otx2_cpt_write_af_reg(&cptpf->afpf_mbox, + cptpf->pdev, + cpt_af_exex_ctl2(i), reg); + if (ret) + return ret; + } + } + + /* enable the cores */ + for_each_set_bit(i, bmap.bits, bmap.size) { + ret = otx2_cpt_add_write_af_reg(&cptpf->afpf_mbox, + cptpf->pdev, + cpt_af_exex_ctl(i), 0x1); + if (ret) + return ret; + } + ret = otx2_cpt_send_af_reg_requests(&cptpf->afpf_mbox, cptpf->pdev); + + return ret; +} + +static int load_fw(struct device *dev, struct fw_info_t *fw_info, + char *filename) +{ + struct otx2_cpt_ucode_hdr *ucode_hdr; + struct otx2_cpt_uc_info_t *uc_info; + int ucode_type, ucode_size; + int ret; + + uc_info = kzalloc(sizeof(*uc_info), gfp_kernel); + if (!uc_info) + return -enomem; + + ret = request_firmware(&uc_info->fw, filename, dev); + if (ret) + goto free_uc_info; + + ucode_hdr = (struct otx2_cpt_ucode_hdr *)uc_info->fw->data; + ret = get_ucode_type(dev, ucode_hdr, &ucode_type); + if (ret) + goto release_fw; + + ucode_size = ntohl(ucode_hdr->code_length) * 2; + if 
(!ucode_size) { + dev_err(dev, "ucode %s invalid size ", filename); + ret = -einval; + goto release_fw; + } + + set_ucode_filename(&uc_info->ucode, filename); + memcpy(uc_info->ucode.ver_str, ucode_hdr->ver_str, + otx2_cpt_ucode_ver_str_sz); + uc_info->ucode.ver_num = ucode_hdr->ver_num; + uc_info->ucode.type = ucode_type; + uc_info->ucode.size = ucode_size; + list_add_tail(&uc_info->list, &fw_info->ucodes); + + return 0; + +release_fw: + release_firmware(uc_info->fw); +free_uc_info: + kfree(uc_info); + return ret; +} + +static void cpt_ucode_release_fw(struct fw_info_t *fw_info) +{ + struct otx2_cpt_uc_info_t *curr, *temp; + + if (!fw_info) + return; + + list_for_each_entry_safe(curr, temp, &fw_info->ucodes, list) { + list_del(&curr->list); + release_firmware(curr->fw); + kfree(curr); + } +} + +static struct otx2_cpt_uc_info_t *get_ucode(struct fw_info_t *fw_info, + int ucode_type) +{ + struct otx2_cpt_uc_info_t *curr; + + list_for_each_entry(curr, &fw_info->ucodes, list) { + if (!is_eng_type(curr->ucode.type, ucode_type)) + continue; + + return curr; + } + return null; +} + +static void print_uc_info(struct fw_info_t *fw_info) +{ + struct otx2_cpt_uc_info_t *curr; + + list_for_each_entry(curr, &fw_info->ucodes, list) { + pr_debug("ucode filename %s ", curr->ucode.filename); + pr_debug("ucode version string %s ", curr->ucode.ver_str); + pr_debug("ucode version %d.%d.%d.%d ", + curr->ucode.ver_num.nn, curr->ucode.ver_num.xx, + curr->ucode.ver_num.yy, curr->ucode.ver_num.zz); + pr_debug("ucode type (%d) %s ", curr->ucode.type, + get_ucode_type_str(curr->ucode.type)); + pr_debug("ucode size %d ", curr->ucode.size); + pr_debug("ucode ptr %p ", curr->fw->data); + } +} + +static int cpt_ucode_load_fw(struct pci_dev *pdev, struct fw_info_t *fw_info) +{ + char filename[otx2_cpt_name_length]; + char eng_type[8] = {0}; + int ret, e, i; + + init_list_head(&fw_info->ucodes); + + for (e = 1; e < otx2_cpt_max_eng_types; e++) { + strcpy(eng_type, get_eng_type_str(e)); + for (i = 
0; i < strlen(eng_type); i++) + eng_type[i] = tolower(eng_type[i]); + + snprintf(filename, sizeof(filename), "mrvl/cpt%02d/%s.out", + pdev->revision, eng_type); + /* request firmware for each engine type */ + ret = load_fw(&pdev->dev, fw_info, filename); + if (ret) + goto release_fw; + } + print_uc_info(fw_info); + return 0; + +release_fw: + cpt_ucode_release_fw(fw_info); + return ret; +} + +static struct otx2_cpt_engs_rsvd *find_engines_by_type( + struct otx2_cpt_eng_grp_info *eng_grp, + int eng_type) +{ + int i; + + for (i = 0; i < otx2_cpt_max_etypes_per_grp; i++) { + if (!eng_grp->engs[i].type) + continue; + + if (eng_grp->engs[i].type == eng_type) + return &eng_grp->engs[i]; + } + return null; +} + +static int eng_grp_has_eng_type(struct otx2_cpt_eng_grp_info *eng_grp, + int eng_type) +{ + struct otx2_cpt_engs_rsvd *engs; + + engs = find_engines_by_type(eng_grp, eng_type); + + return (engs != null ? 1 : 0); +} + +static int update_engines_avail_count(struct device *dev, + struct otx2_cpt_engs_available *avail, + struct otx2_cpt_engs_rsvd *engs, int val) +{ + switch (engs->type) { + case otx2_cpt_se_types: + avail->se_cnt += val; + break; + + case otx2_cpt_ie_types: + avail->ie_cnt += val; + break; + + case otx2_cpt_ae_types: + avail->ae_cnt += val; + break; + + default: + dev_err(dev, "invalid engine type %d ", engs->type); + return -einval; + } + return 0; +} + +static int update_engines_offset(struct device *dev, + struct otx2_cpt_engs_available *avail, + struct otx2_cpt_engs_rsvd *engs) +{ + switch (engs->type) { + case otx2_cpt_se_types: + engs->offset = 0; + break; + + case otx2_cpt_ie_types: + engs->offset = avail->max_se_cnt; + break; + + case otx2_cpt_ae_types: + engs->offset = avail->max_se_cnt + avail->max_ie_cnt; + break; + + default: + dev_err(dev, "invalid engine type %d ", engs->type); + return -einval; + } + return 0; +} + +static int release_engines(struct device *dev, + struct otx2_cpt_eng_grp_info *grp) +{ + int i, ret = 0; + + for (i = 0; i 
< otx2_cpt_max_etypes_per_grp; i++) { + if (!grp->engs[i].type) + continue; + + if (grp->engs[i].count > 0) { + ret = update_engines_avail_count(dev, &grp->g->avail, + &grp->engs[i], + grp->engs[i].count); + if (ret) + return ret; + } + + grp->engs[i].type = 0; + grp->engs[i].count = 0; + grp->engs[i].offset = 0; + grp->engs[i].ucode = null; + bitmap_zero(grp->engs[i].bmap, grp->g->engs_num); + } + return 0; +} + +static int do_reserve_engines(struct device *dev, + struct otx2_cpt_eng_grp_info *grp, + struct otx2_cpt_engines *req_engs) +{ + struct otx2_cpt_engs_rsvd *engs = null; + int i, ret; + + for (i = 0; i < otx2_cpt_max_etypes_per_grp; i++) { + if (!grp->engs[i].type) { + engs = &grp->engs[i]; + break; + } + } + + if (!engs) + return -enomem; + + engs->type = req_engs->type; + engs->count = req_engs->count; + + ret = update_engines_offset(dev, &grp->g->avail, engs); + if (ret) + return ret; + + if (engs->count > 0) { + ret = update_engines_avail_count(dev, &grp->g->avail, engs, + -engs->count); + if (ret) + return ret; + } + + return 0; +} + +static int check_engines_availability(struct device *dev, + struct otx2_cpt_eng_grp_info *grp, + struct otx2_cpt_engines *req_eng) +{ + int avail_cnt = 0; + + switch (req_eng->type) { + case otx2_cpt_se_types: + avail_cnt = grp->g->avail.se_cnt; + break; + + case otx2_cpt_ie_types: + avail_cnt = grp->g->avail.ie_cnt; + break; + + case otx2_cpt_ae_types: + avail_cnt = grp->g->avail.ae_cnt; + break; + + default: + dev_err(dev, "invalid engine type %d ", req_eng->type); + return -einval; + } + + if (avail_cnt < req_eng->count) { + dev_err(dev, + "error available %s engines %d < than requested %d ", + get_eng_type_str(req_eng->type), + avail_cnt, req_eng->count); + return -ebusy; + } + return 0; +} + +static int reserve_engines(struct device *dev, + struct otx2_cpt_eng_grp_info *grp, + struct otx2_cpt_engines *req_engs, int ucodes_cnt) +{ + int i, ret = 0; + + /* validate if a number of requested engines are available */ + 
for (i = 0; i < ucodes_cnt; i++) { + ret = check_engines_availability(dev, grp, &req_engs[i]); + if (ret) + return ret; + } + + /* reserve requested engines for this engine group */ + for (i = 0; i < ucodes_cnt; i++) { + ret = do_reserve_engines(dev, grp, &req_engs[i]); + if (ret) + return ret; + } + return 0; +} + +static void ucode_unload(struct device *dev, struct otx2_cpt_ucode *ucode) +{ + if (ucode->va) { + dma_free_coherent(dev, ucode->size, ucode->va, ucode->dma); + ucode->va = null; + ucode->dma = 0; + ucode->size = 0; + } + + memset(&ucode->ver_str, 0, otx2_cpt_ucode_ver_str_sz); + memset(&ucode->ver_num, 0, sizeof(struct otx2_cpt_ucode_ver_num)); + set_ucode_filename(ucode, ""); + ucode->type = 0; +} + +static int copy_ucode_to_dma_mem(struct device *dev, + struct otx2_cpt_ucode *ucode, + const u8 *ucode_data) +{ + u32 i; + + /* allocate dmaable space */ + ucode->va = dma_alloc_coherent(dev, ucode->size, &ucode->dma, + gfp_kernel); + if (!ucode->va) + return -enomem; + + memcpy(ucode->va, ucode_data + sizeof(struct otx2_cpt_ucode_hdr), + ucode->size); + + /* byte swap 64-bit */ + for (i = 0; i < (ucode->size / 8); i++) + cpu_to_be64s(&((u64 *)ucode->va)[i]); + /* ucode needs 16-bit swap */ + for (i = 0; i < (ucode->size / 2); i++) + cpu_to_be16s(&((u16 *)ucode->va)[i]); + return 0; +} + +static int enable_eng_grp(struct otx2_cpt_eng_grp_info *eng_grp, + void *obj) +{ + int ret; + + /* point microcode to each core of the group */ + ret = cpt_set_ucode_base(eng_grp, obj); + if (ret) + return ret; + + /* attach the cores to the group and enable them */ + ret = cpt_attach_and_enable_cores(eng_grp, obj); + + return ret; +} + +static int disable_eng_grp(struct device *dev, + struct otx2_cpt_eng_grp_info *eng_grp, + void *obj) +{ + int i, ret; + + /* disable all engines used by this group */ + ret = cpt_detach_and_disable_cores(eng_grp, obj); + if (ret) + return ret; + + /* unload ucode used by this engine group */ + ucode_unload(dev, &eng_grp->ucode[0]); + 
ucode_unload(dev, &eng_grp->ucode[1]); + + for (i = 0; i < otx2_cpt_max_etypes_per_grp; i++) { + if (!eng_grp->engs[i].type) + continue; + + eng_grp->engs[i].ucode = &eng_grp->ucode[0]; + } + + /* clear ucode_base register for each engine used by this group */ + ret = cpt_set_ucode_base(eng_grp, obj); + + return ret; +} + +static void setup_eng_grp_mirroring(struct otx2_cpt_eng_grp_info *dst_grp, + struct otx2_cpt_eng_grp_info *src_grp) +{ + /* setup fields for engine group which is mirrored */ + src_grp->mirror.is_ena = false; + src_grp->mirror.idx = 0; + src_grp->mirror.ref_count++; + + /* setup fields for mirroring engine group */ + dst_grp->mirror.is_ena = true; + dst_grp->mirror.idx = src_grp->idx; + dst_grp->mirror.ref_count = 0; +} + +static void remove_eng_grp_mirroring(struct otx2_cpt_eng_grp_info *dst_grp) +{ + struct otx2_cpt_eng_grp_info *src_grp; + + if (!dst_grp->mirror.is_ena) + return; + + src_grp = &dst_grp->g->grp[dst_grp->mirror.idx]; + + src_grp->mirror.ref_count--; + dst_grp->mirror.is_ena = false; + dst_grp->mirror.idx = 0; + dst_grp->mirror.ref_count = 0; +} + +static void update_requested_engs(struct otx2_cpt_eng_grp_info *mirror_eng_grp, + struct otx2_cpt_engines *engs, int engs_cnt) +{ + struct otx2_cpt_engs_rsvd *mirrored_engs; + int i; + + for (i = 0; i < engs_cnt; i++) { + mirrored_engs = find_engines_by_type(mirror_eng_grp, + engs[i].type); + if (!mirrored_engs) + continue; + + /* + * if mirrored group has this type of engines attached then + * there are 3 scenarios possible: + * 1) mirrored_engs.count == engs[i].count then all engines + * from mirrored engine group will be shared with this engine + * group + * 2) mirrored_engs.count > engs[i].count then only a subset of + * engines from mirrored engine group will be shared with this + * engine group + * 3) mirrored_engs.count < engs[i].count then all engines + * from mirrored engine group will be shared with this group + * and additional engines will be reserved for exclusively use + 
* by this engine group + */ + engs[i].count -= mirrored_engs->count; + } +} + +static struct otx2_cpt_eng_grp_info *find_mirrored_eng_grp( + struct otx2_cpt_eng_grp_info *grp) +{ + struct otx2_cpt_eng_grps *eng_grps = grp->g; + int i; + + for (i = 0; i < otx2_cpt_max_engine_groups; i++) { + if (!eng_grps->grp[i].is_enabled) + continue; + if (eng_grps->grp[i].ucode[0].type && + eng_grps->grp[i].ucode[1].type) + continue; + if (grp->idx == i) + continue; + if (!strncasecmp(eng_grps->grp[i].ucode[0].ver_str, + grp->ucode[0].ver_str, + otx2_cpt_ucode_ver_str_sz)) + return &eng_grps->grp[i]; + } + + return null; +} + +static struct otx2_cpt_eng_grp_info *find_unused_eng_grp( + struct otx2_cpt_eng_grps *eng_grps) +{ + int i; + + for (i = 0; i < otx2_cpt_max_engine_groups; i++) { + if (!eng_grps->grp[i].is_enabled) + return &eng_grps->grp[i]; + } + return null; +} + +static int eng_grp_update_masks(struct device *dev, + struct otx2_cpt_eng_grp_info *eng_grp) +{ + struct otx2_cpt_engs_rsvd *engs, *mirrored_engs; + struct otx2_cpt_bitmap tmp_bmap = { {0} }; + int i, j, cnt, max_cnt; + int bit; + + for (i = 0; i < otx2_cpt_max_etypes_per_grp; i++) { + engs = &eng_grp->engs[i]; + if (!engs->type) + continue; + if (engs->count <= 0) + continue; + + switch (engs->type) { + case otx2_cpt_se_types: + max_cnt = eng_grp->g->avail.max_se_cnt; + break; + + case otx2_cpt_ie_types: + max_cnt = eng_grp->g->avail.max_ie_cnt; + break; + + case otx2_cpt_ae_types: + max_cnt = eng_grp->g->avail.max_ae_cnt; + break; + + default: + dev_err(dev, "invalid engine type %d ", engs->type); + return -einval; + } + + cnt = engs->count; + warn_on(engs->offset + max_cnt > otx2_cpt_max_engines); + bitmap_zero(tmp_bmap.bits, eng_grp->g->engs_num); + for (j = engs->offset; j < engs->offset + max_cnt; j++) { + if (!eng_grp->g->eng_ref_cnt[j]) { + bitmap_set(tmp_bmap.bits, j, 1); + cnt--; + if (!cnt) + break; + } + } + + if (cnt) + return -enospc; + + bitmap_copy(engs->bmap, tmp_bmap.bits, 
eng_grp->g->engs_num); + } + + if (!eng_grp->mirror.is_ena) + return 0; + + for (i = 0; i < otx2_cpt_max_etypes_per_grp; i++) { + engs = &eng_grp->engs[i]; + if (!engs->type) + continue; + + mirrored_engs = find_engines_by_type( + &eng_grp->g->grp[eng_grp->mirror.idx], + engs->type); + warn_on(!mirrored_engs && engs->count <= 0); + if (!mirrored_engs) + continue; + + bitmap_copy(tmp_bmap.bits, mirrored_engs->bmap, + eng_grp->g->engs_num); + if (engs->count < 0) { + bit = find_first_bit(mirrored_engs->bmap, + eng_grp->g->engs_num); + bitmap_clear(tmp_bmap.bits, bit, -engs->count); + } + bitmap_or(engs->bmap, engs->bmap, tmp_bmap.bits, + eng_grp->g->engs_num); + } + return 0; +} + +static int delete_engine_group(struct device *dev, + struct otx2_cpt_eng_grp_info *eng_grp) +{ + int ret; + + if (!eng_grp->is_enabled) + return 0; + + if (eng_grp->mirror.ref_count) + return -einval; + + /* removing engine group mirroring if enabled */ + remove_eng_grp_mirroring(eng_grp); + + /* disable engine group */ + ret = disable_eng_grp(dev, eng_grp, eng_grp->g->obj); + if (ret) + return ret; + + /* release all engines held by this engine group */ + ret = release_engines(dev, eng_grp); + if (ret) + return ret; + + eng_grp->is_enabled = false; + + return 0; +} + +static void update_ucode_ptrs(struct otx2_cpt_eng_grp_info *eng_grp) +{ + struct otx2_cpt_ucode *ucode; + + if (eng_grp->mirror.is_ena) + ucode = &eng_grp->g->grp[eng_grp->mirror.idx].ucode[0]; + else + ucode = &eng_grp->ucode[0]; + warn_on(!eng_grp->engs[0].type); + eng_grp->engs[0].ucode = ucode; + + if (eng_grp->engs[1].type) { + if (is_2nd_ucode_used(eng_grp)) + eng_grp->engs[1].ucode = &eng_grp->ucode[1]; + else + eng_grp->engs[1].ucode = ucode; + } +} + +static int create_engine_group(struct device *dev, + struct otx2_cpt_eng_grps *eng_grps, + struct otx2_cpt_engines *engs, int ucodes_cnt, + void *ucode_data[], int is_print) +{ + struct otx2_cpt_eng_grp_info *mirrored_eng_grp; + struct otx2_cpt_eng_grp_info *eng_grp; + 
struct otx2_cpt_uc_info_t *uc_info; + int i, ret = 0; + + /* find engine group which is not used */ + eng_grp = find_unused_eng_grp(eng_grps); + if (!eng_grp) { + dev_err(dev, "error all engine groups are being used "); + return -enospc; + } + /* load ucode */ + for (i = 0; i < ucodes_cnt; i++) { + uc_info = (struct otx2_cpt_uc_info_t *) ucode_data[i]; + eng_grp->ucode[i] = uc_info->ucode; + ret = copy_ucode_to_dma_mem(dev, &eng_grp->ucode[i], + uc_info->fw->data); + if (ret) + goto unload_ucode; + } + + /* check if this group mirrors another existing engine group */ + mirrored_eng_grp = find_mirrored_eng_grp(eng_grp); + if (mirrored_eng_grp) { + /* setup mirroring */ + setup_eng_grp_mirroring(eng_grp, mirrored_eng_grp); + + /* + * update count of requested engines because some + * of them might be shared with mirrored group + */ + update_requested_engs(mirrored_eng_grp, engs, ucodes_cnt); + } + ret = reserve_engines(dev, eng_grp, engs, ucodes_cnt); + if (ret) + goto unload_ucode; + + /* update ucode pointers used by engines */ + update_ucode_ptrs(eng_grp); + + /* update engine masks used by this group */ + ret = eng_grp_update_masks(dev, eng_grp); + if (ret) + goto release_engs; + + /* enable engine group */ + ret = enable_eng_grp(eng_grp, eng_grps->obj); + if (ret) + goto release_engs; + + /* + * if this engine group mirrors another engine group + * then we need to unload ucode as we will use ucode + * from mirrored engine group + */ + if (eng_grp->mirror.is_ena) + ucode_unload(dev, &eng_grp->ucode[0]); + + eng_grp->is_enabled = true; + + if (!is_print) + return 0; + + if (mirrored_eng_grp) + dev_info(dev, + "engine_group%d: reuse microcode %s from group %d ", + eng_grp->idx, mirrored_eng_grp->ucode[0].ver_str, + mirrored_eng_grp->idx); + else + dev_info(dev, "engine_group%d: microcode loaded %s ", + eng_grp->idx, eng_grp->ucode[0].ver_str); + if (is_2nd_ucode_used(eng_grp)) + dev_info(dev, "engine_group%d: microcode loaded %s ", + eng_grp->idx, 
eng_grp->ucode[1].ver_str); + + return 0; + +release_engs: + release_engines(dev, eng_grp); +unload_ucode: + ucode_unload(dev, &eng_grp->ucode[0]); + ucode_unload(dev, &eng_grp->ucode[1]); + return ret; +} + +static void delete_engine_grps(struct pci_dev *pdev, + struct otx2_cpt_eng_grps *eng_grps) +{ + int i; + + /* first delete all mirroring engine groups */ + for (i = 0; i < otx2_cpt_max_engine_groups; i++) + if (eng_grps->grp[i].mirror.is_ena) + delete_engine_group(&pdev->dev, &eng_grps->grp[i]); + + /* delete remaining engine groups */ + for (i = 0; i < otx2_cpt_max_engine_groups; i++) + delete_engine_group(&pdev->dev, &eng_grps->grp[i]); +} + +int otx2_cpt_get_eng_grp(struct otx2_cpt_eng_grps *eng_grps, int eng_type) +{ + + int eng_grp_num = otx2_cpt_invalid_crypto_eng_grp; + struct otx2_cpt_eng_grp_info *grp; + int i; + + for (i = 0; i < otx2_cpt_max_engine_groups; i++) { + grp = &eng_grps->grp[i]; + if (!grp->is_enabled) + continue; + + if (eng_type == otx2_cpt_se_types) { + if (eng_grp_has_eng_type(grp, eng_type) && + !eng_grp_has_eng_type(grp, otx2_cpt_ie_types)) { + eng_grp_num = i; + break; + } + } else { + if (eng_grp_has_eng_type(grp, eng_type)) { + eng_grp_num = i; + break; + } + } + } + return eng_grp_num; +} + +int otx2_cpt_create_eng_grps(struct pci_dev *pdev, + struct otx2_cpt_eng_grps *eng_grps) +{ + struct otx2_cpt_uc_info_t *uc_info[otx2_cpt_max_etypes_per_grp] = { }; + struct otx2_cpt_engines engs[otx2_cpt_max_etypes_per_grp] = { {0} }; + struct fw_info_t fw_info; + int ret; + + /* + * we don't create engine groups if it was already + * made (when user enabled vfs for the first time) + */ + if (eng_grps->is_grps_created) + return 0; + + ret = cpt_ucode_load_fw(pdev, &fw_info); + if (ret) + return ret; + + /* + * create engine group with se engines for kernel + * crypto functionality (symmetric crypto) + */ + uc_info[0] = get_ucode(&fw_info, otx2_cpt_se_types); + if (uc_info[0] == null) { + dev_err(&pdev->dev, "unable to find firmware for se 
"); + ret = -einval; + goto release_fw; + } + engs[0].type = otx2_cpt_se_types; + engs[0].count = eng_grps->avail.max_se_cnt; + + ret = create_engine_group(&pdev->dev, eng_grps, engs, 1, + (void **) uc_info, 1); + if (ret) + goto release_fw; + + /* + * create engine group with se+ie engines for ipsec. + * all se engines will be shared with engine group 0. + */ + uc_info[0] = get_ucode(&fw_info, otx2_cpt_se_types); + uc_info[1] = get_ucode(&fw_info, otx2_cpt_ie_types); + + if (uc_info[1] == null) { + dev_err(&pdev->dev, "unable to find firmware for ie"); + ret = -einval; + goto delete_eng_grp; + } + engs[0].type = otx2_cpt_se_types; + engs[0].count = eng_grps->avail.max_se_cnt; + engs[1].type = otx2_cpt_ie_types; + engs[1].count = eng_grps->avail.max_ie_cnt; + + ret = create_engine_group(&pdev->dev, eng_grps, engs, 2, + (void **) uc_info, 1); + if (ret) + goto delete_eng_grp; + + /* + * create engine group with ae engines for asymmetric + * crypto functionality. + */ + uc_info[0] = get_ucode(&fw_info, otx2_cpt_ae_types); + if (uc_info[0] == null) { + dev_err(&pdev->dev, "unable to find firmware for ae"); + ret = -einval; + goto delete_eng_grp; + } + engs[0].type = otx2_cpt_ae_types; + engs[0].count = eng_grps->avail.max_ae_cnt; + + ret = create_engine_group(&pdev->dev, eng_grps, engs, 1, + (void **) uc_info, 1); + if (ret) + goto delete_eng_grp; + + eng_grps->is_grps_created = true; + + cpt_ucode_release_fw(&fw_info); + return 0; + +delete_eng_grp: + delete_engine_grps(pdev, eng_grps); +release_fw: + cpt_ucode_release_fw(&fw_info); + return ret; +} + +int otx2_cpt_disable_all_cores(struct otx2_cptpf_dev *cptpf) +{ + int i, ret, busy, total_cores; + int timeout = 10; + u64 reg = 0; + + total_cores = cptpf->eng_grps.avail.max_se_cnt + + cptpf->eng_grps.avail.max_ie_cnt + + cptpf->eng_grps.avail.max_ae_cnt; + + /* disengage the cores from groups */ + for (i = 0; i < total_cores; i++) { + ret = otx2_cpt_add_write_af_reg(&cptpf->afpf_mbox, cptpf->pdev, + 
cpt_af_exex_ctl2(i), 0x0); + if (ret) + return ret; + + cptpf->eng_grps.eng_ref_cnt[i] = 0; + } + ret = otx2_cpt_send_af_reg_requests(&cptpf->afpf_mbox, cptpf->pdev); + if (ret) + return ret; + + /* wait for cores to become idle */ + do { + busy = 0; + usleep_range(10000, 20000); + if (timeout-- < 0) + return -ebusy; + + for (i = 0; i < total_cores; i++) { + ret = otx2_cpt_read_af_reg(&cptpf->afpf_mbox, + cptpf->pdev, + cpt_af_exex_sts(i), ®); + if (ret) + return ret; + + if (reg & 0x1) { + busy = 1; + break; + } + } + } while (busy); + + /* disable the cores */ + for (i = 0; i < total_cores; i++) { + ret = otx2_cpt_add_write_af_reg(&cptpf->afpf_mbox, cptpf->pdev, + cpt_af_exex_ctl(i), 0x0); + if (ret) + return ret; + } + return otx2_cpt_send_af_reg_requests(&cptpf->afpf_mbox, cptpf->pdev); +} + +void otx2_cpt_cleanup_eng_grps(struct pci_dev *pdev, + struct otx2_cpt_eng_grps *eng_grps) +{ + struct otx2_cpt_eng_grp_info *grp; + int i, j; + + delete_engine_grps(pdev, eng_grps); + /* release memory */ + for (i = 0; i < otx2_cpt_max_engine_groups; i++) { + grp = &eng_grps->grp[i]; + for (j = 0; j < otx2_cpt_max_etypes_per_grp; j++) { + kfree(grp->engs[j].bmap); + grp->engs[j].bmap = null; + } + } +} + +int otx2_cpt_init_eng_grps(struct pci_dev *pdev, + struct otx2_cpt_eng_grps *eng_grps) +{ + struct otx2_cpt_eng_grp_info *grp; + int i, j, ret; + + eng_grps->obj = pci_get_drvdata(pdev); + eng_grps->avail.se_cnt = eng_grps->avail.max_se_cnt; + eng_grps->avail.ie_cnt = eng_grps->avail.max_ie_cnt; + eng_grps->avail.ae_cnt = eng_grps->avail.max_ae_cnt; + + eng_grps->engs_num = eng_grps->avail.max_se_cnt + + eng_grps->avail.max_ie_cnt + + eng_grps->avail.max_ae_cnt; + if (eng_grps->engs_num > otx2_cpt_max_engines) { + dev_err(&pdev->dev, + "number of engines %d > than max supported %d ", + eng_grps->engs_num, otx2_cpt_max_engines); + ret = -einval; + goto cleanup_eng_grps; + } + + for (i = 0; i < otx2_cpt_max_engine_groups; i++) { + grp = &eng_grps->grp[i]; + grp->g = 
eng_grps; + grp->idx = i; + + for (j = 0; j < otx2_cpt_max_etypes_per_grp; j++) { + grp->engs[j].bmap = + kcalloc(bits_to_longs(eng_grps->engs_num), + sizeof(long), gfp_kernel); + if (!grp->engs[j].bmap) { + ret = -enomem; + goto cleanup_eng_grps; + } + } + } + return 0; + +cleanup_eng_grps: + otx2_cpt_cleanup_eng_grps(pdev, eng_grps); + return ret; +} diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.h b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.h --- /dev/null +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.h +/* spdx-license-identifier: gpl-2.0-only + * copyright (c) 2020 marvell. + */ + +#ifndef __otx2_cptpf_ucode_h +#define __otx2_cptpf_ucode_h + +#include <linux/pci.h> +#include <linux/types.h> +#include <linux/module.h> +#include "otx2_cpt_hw_types.h" +#include "otx2_cpt_common.h" + +/* + * on octeontx2 platform ipsec ucode can use both ie and se engines therefore + * ie and se engines can be attached to the same engine group. + */ +#define otx2_cpt_max_etypes_per_grp 2 + +/* cpt ucode signature size */ +#define otx2_cpt_ucode_sign_len 256 + +/* microcode version string length */ +#define otx2_cpt_ucode_ver_str_sz 44 + +/* maximum number of supported engines/cores on octeontx2 platform */ +#define otx2_cpt_max_engines 128 + +#define otx2_cpt_engs_bitmask_len bits_to_longs(otx2_cpt_max_engines) + +/* microcode types */ +enum otx2_cpt_ucode_type { + otx2_cpt_ae_uc_type = 1, /* ae-main */ + otx2_cpt_se_uc_type1 = 20,/* se-main - combination of 21 and 22 */ + otx2_cpt_se_uc_type2 = 21,/* fast path ipsec + aircrypto */ + otx2_cpt_se_uc_type3 = 22,/* + * hash + hmac + flexicrypto + rng + + * full feature ipsec + aircrypto + kasumi + */ + otx2_cpt_ie_uc_type1 = 30, /* ie-main - combination of 31 and 32 */ + otx2_cpt_ie_uc_type2 = 31, /* fast path ipsec */ + otx2_cpt_ie_uc_type3 = 32, /* + * hash + hmac + flexicrypto + rng + + * full future ipsec + */ +}; + +struct otx2_cpt_bitmap { + unsigned long bits[otx2_cpt_engs_bitmask_len]; + int 
size; +}; + +struct otx2_cpt_engines { + int type; + int count; +}; + +/* microcode version number */ +struct otx2_cpt_ucode_ver_num { + u8 nn; + u8 xx; + u8 yy; + u8 zz; +}; + +struct otx2_cpt_ucode_hdr { + struct otx2_cpt_ucode_ver_num ver_num; + u8 ver_str[otx2_cpt_ucode_ver_str_sz]; + __be32 code_length; + u32 padding[3]; +}; + +struct otx2_cpt_ucode { + u8 ver_str[otx2_cpt_ucode_ver_str_sz];/* + * ucode version in readable + * format + */ + struct otx2_cpt_ucode_ver_num ver_num;/* ucode version number */ + char filename[otx2_cpt_name_length];/* ucode filename */ + dma_addr_t dma; /* phys address of ucode image */ + void *va; /* virt address of ucode image */ + u32 size; /* ucode image size */ + int type; /* ucode image type se, ie, ae or se+ie */ +}; + +struct otx2_cpt_uc_info_t { + struct list_head list; + struct otx2_cpt_ucode ucode;/* microcode information */ + const struct firmware *fw; +}; + +/* maximum and current number of engines available for all engine groups */ +struct otx2_cpt_engs_available { + int max_se_cnt; + int max_ie_cnt; + int max_ae_cnt; + int se_cnt; + int ie_cnt; + int ae_cnt; +}; + +/* engines reserved to an engine group */ +struct otx2_cpt_engs_rsvd { + int type; /* engine type */ + int count; /* number of engines attached */ + int offset; /* constant offset of engine type in the bitmap */ + unsigned long *bmap; /* attached engines bitmap */ + struct otx2_cpt_ucode *ucode; /* ucode used by these engines */ +}; + +struct otx2_cpt_mirror_info { + int is_ena; /* + * is mirroring enabled, it is set only for engine + * group which mirrors another engine group + */ + int idx; /* + * index of engine group which is mirrored by this + * group, set only for engine group which mirrors + * another group + */ + int ref_count; /* + * number of times this engine group is mirrored by + * other groups, this is set only for engine group + * which is mirrored by other group(s) + */ +}; + +struct otx2_cpt_eng_grp_info { + struct otx2_cpt_eng_grps *g; /* 
pointer to engine_groups structure */ + /* engines attached */ + struct otx2_cpt_engs_rsvd engs[otx2_cpt_max_etypes_per_grp]; + /* ucodes information */ + struct otx2_cpt_ucode ucode[otx2_cpt_max_etypes_per_grp]; + /* engine group mirroring information */ + struct otx2_cpt_mirror_info mirror; + int idx; /* engine group index */ + bool is_enabled; /* + * is engine group enabled, engine group is enabled + * when it has engines attached and ucode loaded + */ +}; + +struct otx2_cpt_eng_grps { + struct otx2_cpt_eng_grp_info grp[otx2_cpt_max_engine_groups]; + struct otx2_cpt_engs_available avail; + void *obj; /* device specific data */ + int engs_num; /* total number of engines supported */ + u8 eng_ref_cnt[otx2_cpt_max_engines];/* engines reference count */ + bool is_grps_created; /* is the engine groups are already created */ +}; +struct otx2_cptpf_dev; +int otx2_cpt_init_eng_grps(struct pci_dev *pdev, + struct otx2_cpt_eng_grps *eng_grps); +void otx2_cpt_cleanup_eng_grps(struct pci_dev *pdev, + struct otx2_cpt_eng_grps *eng_grps); +int otx2_cpt_create_eng_grps(struct pci_dev *pdev, + struct otx2_cpt_eng_grps *eng_grps); +int otx2_cpt_disable_all_cores(struct otx2_cptpf_dev *cptpf); +int otx2_cpt_get_eng_grp(struct otx2_cpt_eng_grps *eng_grps, int eng_type); + +#endif /* __otx2_cptpf_ucode_h */
|
Cryptography hardware acceleration
|
43ac0b824f1cb7c63c5fe98ea2b80ec480412601
|
srujana challa
|
drivers
|
crypto
|
marvell, octeontx2
|
crypto: octeontx2 - add lf framework
|
cpt rvu local functions (lfs) need to be attached to the pf/vf to submit instructions to cpt. this patch adds the interface to initialize and attach the lfs. it also adds an interface to register the lfs' interrupts.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add support for marvell octeontx2 cpt engine
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['octeontx2']
|
['h', 'c', 'makefile']
| 7
| 783
| 1
|
--- diff --git a/drivers/crypto/marvell/octeontx2/makefile b/drivers/crypto/marvell/octeontx2/makefile --- a/drivers/crypto/marvell/octeontx2/makefile +++ b/drivers/crypto/marvell/octeontx2/makefile - otx2_cpt_mbox_common.o otx2_cptpf_ucode.o + otx2_cpt_mbox_common.o otx2_cptpf_ucode.o otx2_cptlf.o diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h --- a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h +++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h +struct otx2_cptlfs_info; +int otx2_cpt_attach_rscrs_msg(struct otx2_cptlfs_info *lfs); +int otx2_cpt_detach_rsrcs_msg(struct otx2_cptlfs_info *lfs); + diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_mbox_common.c b/drivers/crypto/marvell/octeontx2/otx2_cpt_mbox_common.c --- a/drivers/crypto/marvell/octeontx2/otx2_cpt_mbox_common.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_mbox_common.c +#include "otx2_cptlf.h" + +int otx2_cpt_attach_rscrs_msg(struct otx2_cptlfs_info *lfs) +{ + struct otx2_mbox *mbox = lfs->mbox; + struct rsrc_attach *req; + int ret; + + req = (struct rsrc_attach *) + otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*req), + sizeof(struct msg_rsp)); + if (req == null) { + dev_err(&lfs->pdev->dev, "rvu mbox failed to get message. "); + return -efault; + } + + req->hdr.id = mbox_msg_attach_resources; + req->hdr.sig = otx2_mbox_req_sig; + req->hdr.pcifunc = 0; + req->cptlfs = lfs->lfs_num; + ret = otx2_cpt_send_mbox_msg(mbox, lfs->pdev); + if (ret) + return ret; + + if (!lfs->are_lfs_attached) + ret = -einval; + + return ret; +} + +int otx2_cpt_detach_rsrcs_msg(struct otx2_cptlfs_info *lfs) +{ + struct otx2_mbox *mbox = lfs->mbox; + struct rsrc_detach *req; + int ret; + + req = (struct rsrc_detach *) + otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*req), + sizeof(struct msg_rsp)); + if (req == null) { + dev_err(&lfs->pdev->dev, "rvu mbox failed to get message. 
"); + return -efault; + } + + req->hdr.id = mbox_msg_detach_resources; + req->hdr.sig = otx2_mbox_req_sig; + req->hdr.pcifunc = 0; + ret = otx2_cpt_send_mbox_msg(mbox, lfs->pdev); + if (ret) + return ret; + + if (lfs->are_lfs_attached) + ret = -einval; + + return ret; +} diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptlf.c b/drivers/crypto/marvell/octeontx2/otx2_cptlf.c --- /dev/null +++ b/drivers/crypto/marvell/octeontx2/otx2_cptlf.c +// spdx-license-identifier: gpl-2.0-only +/* copyright (c) 2020 marvell. */ + +#include "otx2_cpt_common.h" +#include "otx2_cptlf.h" +#include "rvu_reg.h" + +#define cpt_timer_hold 0x03f +#define cpt_count_hold 32 + +static void cptlf_do_set_done_time_wait(struct otx2_cptlf_info *lf, + int time_wait) +{ + union otx2_cptx_lf_done_wait done_wait; + + done_wait.u = otx2_cpt_read64(lf->lfs->reg_base, blkaddr_cpt0, lf->slot, + otx2_cpt_lf_done_wait); + done_wait.s.time_wait = time_wait; + otx2_cpt_write64(lf->lfs->reg_base, blkaddr_cpt0, lf->slot, + otx2_cpt_lf_done_wait, done_wait.u); +} + +static void cptlf_do_set_done_num_wait(struct otx2_cptlf_info *lf, int num_wait) +{ + union otx2_cptx_lf_done_wait done_wait; + + done_wait.u = otx2_cpt_read64(lf->lfs->reg_base, blkaddr_cpt0, lf->slot, + otx2_cpt_lf_done_wait); + done_wait.s.num_wait = num_wait; + otx2_cpt_write64(lf->lfs->reg_base, blkaddr_cpt0, lf->slot, + otx2_cpt_lf_done_wait, done_wait.u); +} + +static void cptlf_set_done_time_wait(struct otx2_cptlfs_info *lfs, + int time_wait) +{ + int slot; + + for (slot = 0; slot < lfs->lfs_num; slot++) + cptlf_do_set_done_time_wait(&lfs->lf[slot], time_wait); +} + +static void cptlf_set_done_num_wait(struct otx2_cptlfs_info *lfs, int num_wait) +{ + int slot; + + for (slot = 0; slot < lfs->lfs_num; slot++) + cptlf_do_set_done_num_wait(&lfs->lf[slot], num_wait); +} + +static int cptlf_set_pri(struct otx2_cptlf_info *lf, int pri) +{ + struct otx2_cptlfs_info *lfs = lf->lfs; + union otx2_cptx_af_lf_ctrl lf_ctrl; + int ret; + + ret = 
otx2_cpt_read_af_reg(lfs->mbox, lfs->pdev, + cpt_af_lfx_ctl(lf->slot), + &lf_ctrl.u); + if (ret) + return ret; + + lf_ctrl.s.pri = pri ? 1 : 0; + + ret = otx2_cpt_write_af_reg(lfs->mbox, lfs->pdev, + cpt_af_lfx_ctl(lf->slot), + lf_ctrl.u); + return ret; +} + +static int cptlf_set_eng_grps_mask(struct otx2_cptlf_info *lf, + int eng_grps_mask) +{ + struct otx2_cptlfs_info *lfs = lf->lfs; + union otx2_cptx_af_lf_ctrl lf_ctrl; + int ret; + + ret = otx2_cpt_read_af_reg(lfs->mbox, lfs->pdev, + cpt_af_lfx_ctl(lf->slot), + &lf_ctrl.u); + if (ret) + return ret; + + lf_ctrl.s.grp = eng_grps_mask; + + ret = otx2_cpt_write_af_reg(lfs->mbox, lfs->pdev, + cpt_af_lfx_ctl(lf->slot), + lf_ctrl.u); + return ret; +} + +static int cptlf_set_grp_and_pri(struct otx2_cptlfs_info *lfs, + int eng_grp_mask, int pri) +{ + int slot, ret = 0; + + for (slot = 0; slot < lfs->lfs_num; slot++) { + ret = cptlf_set_pri(&lfs->lf[slot], pri); + if (ret) + return ret; + + ret = cptlf_set_eng_grps_mask(&lfs->lf[slot], eng_grp_mask); + if (ret) + return ret; + } + return ret; +} + +static void cptlf_hw_init(struct otx2_cptlfs_info *lfs) +{ + /* disable instruction queues */ + otx2_cptlf_disable_iqueues(lfs); + + /* set instruction queues base addresses */ + otx2_cptlf_set_iqueues_base_addr(lfs); + + /* set instruction queues sizes */ + otx2_cptlf_set_iqueues_size(lfs); + + /* set done interrupts time wait */ + cptlf_set_done_time_wait(lfs, cpt_timer_hold); + + /* set done interrupts num wait */ + cptlf_set_done_num_wait(lfs, cpt_count_hold); + + /* enable instruction queues */ + otx2_cptlf_enable_iqueues(lfs); +} + +static void cptlf_hw_cleanup(struct otx2_cptlfs_info *lfs) +{ + /* disable instruction queues */ + otx2_cptlf_disable_iqueues(lfs); +} + +static void cptlf_set_misc_intrs(struct otx2_cptlfs_info *lfs, u8 enable) +{ + union otx2_cptx_lf_misc_int_ena_w1s irq_misc = { .u = 0x0 }; + u64 reg = enable ? 
otx2_cpt_lf_misc_int_ena_w1s : + otx2_cpt_lf_misc_int_ena_w1c; + int slot; + + irq_misc.s.fault = 0x1; + irq_misc.s.hwerr = 0x1; + irq_misc.s.irde = 0x1; + irq_misc.s.nqerr = 0x1; + irq_misc.s.nwrp = 0x1; + + for (slot = 0; slot < lfs->lfs_num; slot++) + otx2_cpt_write64(lfs->reg_base, blkaddr_cpt0, slot, reg, + irq_misc.u); +} + +static void cptlf_enable_intrs(struct otx2_cptlfs_info *lfs) +{ + int slot; + + /* enable done interrupts */ + for (slot = 0; slot < lfs->lfs_num; slot++) + otx2_cpt_write64(lfs->reg_base, blkaddr_cpt0, slot, + otx2_cpt_lf_done_int_ena_w1s, 0x1); + /* enable misc interrupts */ + cptlf_set_misc_intrs(lfs, true); +} + +static void cptlf_disable_intrs(struct otx2_cptlfs_info *lfs) +{ + int slot; + + for (slot = 0; slot < lfs->lfs_num; slot++) + otx2_cpt_write64(lfs->reg_base, blkaddr_cpt0, slot, + otx2_cpt_lf_done_int_ena_w1c, 0x1); + cptlf_set_misc_intrs(lfs, false); +} + +static inline int cptlf_read_done_cnt(struct otx2_cptlf_info *lf) +{ + union otx2_cptx_lf_done irq_cnt; + + irq_cnt.u = otx2_cpt_read64(lf->lfs->reg_base, blkaddr_cpt0, lf->slot, + otx2_cpt_lf_done); + return irq_cnt.s.done; +} + +static irqreturn_t cptlf_misc_intr_handler(int __always_unused irq, void *arg) +{ + union otx2_cptx_lf_misc_int irq_misc, irq_misc_ack; + struct otx2_cptlf_info *lf = arg; + struct device *dev; + + dev = &lf->lfs->pdev->dev; + irq_misc.u = otx2_cpt_read64(lf->lfs->reg_base, blkaddr_cpt0, lf->slot, + otx2_cpt_lf_misc_int); + irq_misc_ack.u = 0x0; + + if (irq_misc.s.fault) { + dev_err(dev, "memory error detected while executing cpt_inst_s, lf %d. ", + lf->slot); + irq_misc_ack.s.fault = 0x1; + + } else if (irq_misc.s.hwerr) { + dev_err(dev, "hw error from an engine executing cpt_inst_s, lf %d.", + lf->slot); + irq_misc_ack.s.hwerr = 0x1; + + } else if (irq_misc.s.nwrp) { + dev_err(dev, "smmu fault while writing cpt_res_s to cpt_inst_s[res_addr], lf %d. 
", + lf->slot); + irq_misc_ack.s.nwrp = 0x1; + + } else if (irq_misc.s.irde) { + dev_err(dev, "memory error when accessing instruction memory queue cpt_lf_q_base[addr]. "); + irq_misc_ack.s.irde = 0x1; + + } else if (irq_misc.s.nqerr) { + dev_err(dev, "error enqueuing an instruction received at cpt_lf_nq. "); + irq_misc_ack.s.nqerr = 0x1; + + } else { + dev_err(dev, "unhandled interrupt in cpt lf %d ", lf->slot); + return irq_none; + } + + /* acknowledge interrupts */ + otx2_cpt_write64(lf->lfs->reg_base, blkaddr_cpt0, lf->slot, + otx2_cpt_lf_misc_int, irq_misc_ack.u); + + return irq_handled; +} + +static irqreturn_t cptlf_done_intr_handler(int irq, void *arg) +{ + union otx2_cptx_lf_done_wait done_wait; + struct otx2_cptlf_info *lf = arg; + int irq_cnt; + + /* read the number of completed requests */ + irq_cnt = cptlf_read_done_cnt(lf); + if (irq_cnt) { + done_wait.u = otx2_cpt_read64(lf->lfs->reg_base, blkaddr_cpt0, + lf->slot, otx2_cpt_lf_done_wait); + /* acknowledge the number of completed requests */ + otx2_cpt_write64(lf->lfs->reg_base, blkaddr_cpt0, lf->slot, + otx2_cpt_lf_done_ack, irq_cnt); + + otx2_cpt_write64(lf->lfs->reg_base, blkaddr_cpt0, lf->slot, + otx2_cpt_lf_done_wait, done_wait.u); + if (unlikely(!lf->wqe)) { + dev_err(&lf->lfs->pdev->dev, "no work for lf %d ", + lf->slot); + return irq_none; + } + + /* schedule processing of completed requests */ + tasklet_hi_schedule(&lf->wqe->work); + } + return irq_handled; +} + +void otx2_cptlf_unregister_interrupts(struct otx2_cptlfs_info *lfs) +{ + int i, offs, vector; + + for (i = 0; i < lfs->lfs_num; i++) { + for (offs = 0; offs < otx2_cpt_lf_msix_vectors; offs++) { + if (!lfs->lf[i].is_irq_reg[offs]) + continue; + + vector = pci_irq_vector(lfs->pdev, + lfs->lf[i].msix_offset + offs); + free_irq(vector, &lfs->lf[i]); + lfs->lf[i].is_irq_reg[offs] = false; + } + } + cptlf_disable_intrs(lfs); +} + +static int cptlf_do_register_interrrupts(struct otx2_cptlfs_info *lfs, + int lf_num, int irq_offset, + 
irq_handler_t handler) +{ + int ret, vector; + + vector = pci_irq_vector(lfs->pdev, lfs->lf[lf_num].msix_offset + + irq_offset); + ret = request_irq(vector, handler, 0, + lfs->lf[lf_num].irq_name[irq_offset], + &lfs->lf[lf_num]); + if (ret) + return ret; + + lfs->lf[lf_num].is_irq_reg[irq_offset] = true; + + return ret; +} + +int otx2_cptlf_register_interrupts(struct otx2_cptlfs_info *lfs) +{ + int irq_offs, ret, i; + + for (i = 0; i < lfs->lfs_num; i++) { + irq_offs = otx2_cpt_lf_int_vec_e_misc; + snprintf(lfs->lf[i].irq_name[irq_offs], 32, "cptlf misc%d", i); + ret = cptlf_do_register_interrrupts(lfs, i, irq_offs, + cptlf_misc_intr_handler); + if (ret) + goto free_irq; + + irq_offs = otx2_cpt_lf_int_vec_e_done; + snprintf(lfs->lf[i].irq_name[irq_offs], 32, "otx2_cptlf done%d", + i); + ret = cptlf_do_register_interrrupts(lfs, i, irq_offs, + cptlf_done_intr_handler); + if (ret) + goto free_irq; + } + cptlf_enable_intrs(lfs); + return 0; + +free_irq: + otx2_cptlf_unregister_interrupts(lfs); + return ret; +} + +void otx2_cptlf_free_irqs_affinity(struct otx2_cptlfs_info *lfs) +{ + int slot, offs; + + for (slot = 0; slot < lfs->lfs_num; slot++) { + for (offs = 0; offs < otx2_cpt_lf_msix_vectors; offs++) + irq_set_affinity_hint(pci_irq_vector(lfs->pdev, + lfs->lf[slot].msix_offset + + offs), null); + if (lfs->lf[slot].affinity_mask) + free_cpumask_var(lfs->lf[slot].affinity_mask); + } +} + +int otx2_cptlf_set_irqs_affinity(struct otx2_cptlfs_info *lfs) +{ + struct otx2_cptlf_info *lf = lfs->lf; + int slot, offs, ret; + + for (slot = 0; slot < lfs->lfs_num; slot++) { + if (!zalloc_cpumask_var(&lf[slot].affinity_mask, gfp_kernel)) { + dev_err(&lfs->pdev->dev, + "cpumask allocation failed for lf %d", slot); + ret = -enomem; + goto free_affinity_mask; + } + + cpumask_set_cpu(cpumask_local_spread(slot, + dev_to_node(&lfs->pdev->dev)), + lf[slot].affinity_mask); + + for (offs = 0; offs < otx2_cpt_lf_msix_vectors; offs++) { + ret = 
irq_set_affinity_hint(pci_irq_vector(lfs->pdev, + lf[slot].msix_offset + offs), + lf[slot].affinity_mask); + if (ret) + goto free_affinity_mask; + } + } + return 0; + +free_affinity_mask: + otx2_cptlf_free_irqs_affinity(lfs); + return ret; +} + +int otx2_cptlf_init(struct otx2_cptlfs_info *lfs, u8 eng_grp_mask, int pri, + int lfs_num) +{ + int slot, ret; + + if (!lfs->pdev || !lfs->reg_base) + return -einval; + + lfs->lfs_num = lfs_num; + for (slot = 0; slot < lfs->lfs_num; slot++) { + lfs->lf[slot].lfs = lfs; + lfs->lf[slot].slot = slot; + lfs->lf[slot].lmtline = lfs->reg_base + + otx2_cpt_rvu_func_addr_s(blkaddr_lmt, slot, + otx2_cpt_lmt_lf_lmtlinex(0)); + lfs->lf[slot].ioreg = lfs->reg_base + + otx2_cpt_rvu_func_addr_s(blkaddr_cpt0, slot, + otx2_cpt_lf_nqx(0)); + } + /* send request to attach lfs */ + ret = otx2_cpt_attach_rscrs_msg(lfs); + if (ret) + goto clear_lfs_num; + + ret = otx2_cpt_alloc_instruction_queues(lfs); + if (ret) { + dev_err(&lfs->pdev->dev, + "allocating instruction queues failed "); + goto detach_rsrcs; + } + cptlf_hw_init(lfs); + /* + * allow each lf to execute requests destined to any of 8 engine + * groups and set queue priority of each lf to high + */ + ret = cptlf_set_grp_and_pri(lfs, eng_grp_mask, pri); + if (ret) + goto free_iq; + + return 0; + +free_iq: + otx2_cpt_free_instruction_queues(lfs); + cptlf_hw_cleanup(lfs); +detach_rsrcs: + otx2_cpt_detach_rsrcs_msg(lfs); +clear_lfs_num: + lfs->lfs_num = 0; + return ret; +} + +void otx2_cptlf_shutdown(struct otx2_cptlfs_info *lfs) +{ + lfs->lfs_num = 0; + /* cleanup lfs hardware side */ + cptlf_hw_cleanup(lfs); + /* send request to detach lfs */ + otx2_cpt_detach_rsrcs_msg(lfs); +} diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptlf.h b/drivers/crypto/marvell/octeontx2/otx2_cptlf.h --- /dev/null +++ b/drivers/crypto/marvell/octeontx2/otx2_cptlf.h +/* spdx-license-identifier: gpl-2.0-only + * copyright (c) 2020 marvell. 
+ */ +#ifndef __otx2_cptlf_h +#define __otx2_cptlf_h + +#include <mbox.h> +#include <rvu.h> +#include "otx2_cpt_common.h" + +/* + * cpt instruction and pending queues user requested length in cpt_inst_s msgs + */ +#define otx2_cpt_user_requested_qlen_msgs 8200 + +/* + * cpt instruction queue size passed to hw is in units of 40*cpt_inst_s + * messages. + */ +#define otx2_cpt_size_div40 (otx2_cpt_user_requested_qlen_msgs/40) + +/* + * cpt instruction and pending queues length in cpt_inst_s messages + */ +#define otx2_cpt_inst_qlen_msgs ((otx2_cpt_size_div40 - 1) * 40) + +/* cpt instruction queue length in bytes */ +#define otx2_cpt_inst_qlen_bytes (otx2_cpt_size_div40 * 40 * \ + otx2_cpt_inst_size) + +/* cpt instruction group queue length in bytes */ +#define otx2_cpt_inst_grp_qlen_bytes (otx2_cpt_size_div40 * 16) + +/* cpt fc length in bytes */ +#define otx2_cpt_q_fc_len 128 + +/* cpt instruction queue alignment */ +#define otx2_cpt_inst_q_alignment 128 + +/* mask which selects all engine groups */ +#define otx2_cpt_all_eng_grps_mask 0xff + +/* maximum lfs supported in octeontx2 for cpt */ +#define otx2_cpt_max_lfs_num 64 + +/* queue priority */ +#define otx2_cpt_queue_hi_prio 0x1 +#define otx2_cpt_queue_low_prio 0x0 + +enum otx2_cptlf_state { + otx2_cptlf_in_reset, + otx2_cptlf_started, +}; + +struct otx2_cpt_inst_queue { + u8 *vaddr; + u8 *real_vaddr; + dma_addr_t dma_addr; + dma_addr_t real_dma_addr; + u32 size; +}; + +struct otx2_cptlfs_info; +struct otx2_cptlf_wqe { + struct tasklet_struct work; + struct otx2_cptlfs_info *lfs; + u8 lf_num; +}; + +struct otx2_cptlf_info { + struct otx2_cptlfs_info *lfs; /* ptr to cptlfs_info struct */ + void __iomem *lmtline; /* address of lmtline */ + void __iomem *ioreg; /* lmtline send register */ + int msix_offset; /* msi-x interrupts offset */ + cpumask_var_t affinity_mask; /* irqs affinity mask */ + u8 irq_name[otx2_cpt_lf_msix_vectors][32];/* interrupts name */ + u8 is_irq_reg[otx2_cpt_lf_msix_vectors]; /* is interrupt 
registered */ + u8 slot; /* slot number of this lf */ + + struct otx2_cpt_inst_queue iqueue;/* instruction queue */ + struct otx2_cptlf_wqe *wqe; /* tasklet work info */ +}; + +struct otx2_cptlfs_info { + /* registers start address of vf/pf lfs are attached to */ + void __iomem *reg_base; + struct pci_dev *pdev; /* device lfs are attached to */ + struct otx2_cptlf_info lf[otx2_cpt_max_lfs_num]; + struct otx2_mbox *mbox; + u8 are_lfs_attached; /* whether cpt lfs are attached */ + u8 lfs_num; /* number of cpt lfs */ + atomic_t state; /* lf's state. started/reset */ +}; + +static inline void otx2_cpt_free_instruction_queues( + struct otx2_cptlfs_info *lfs) +{ + struct otx2_cpt_inst_queue *iq; + int i; + + for (i = 0; i < lfs->lfs_num; i++) { + iq = &lfs->lf[i].iqueue; + if (iq->real_vaddr) + dma_free_coherent(&lfs->pdev->dev, + iq->size, + iq->real_vaddr, + iq->real_dma_addr); + iq->real_vaddr = null; + iq->vaddr = null; + } +} + +static inline int otx2_cpt_alloc_instruction_queues( + struct otx2_cptlfs_info *lfs) +{ + struct otx2_cpt_inst_queue *iq; + int ret = 0, i; + + if (!lfs->lfs_num) + return -einval; + + for (i = 0; i < lfs->lfs_num; i++) { + iq = &lfs->lf[i].iqueue; + iq->size = otx2_cpt_inst_qlen_bytes + + otx2_cpt_q_fc_len + + otx2_cpt_inst_grp_qlen_bytes + + otx2_cpt_inst_q_alignment; + iq->real_vaddr = dma_alloc_coherent(&lfs->pdev->dev, iq->size, + &iq->real_dma_addr, gfp_kernel); + if (!iq->real_vaddr) { + ret = -enomem; + goto error; + } + iq->vaddr = iq->real_vaddr + otx2_cpt_inst_grp_qlen_bytes; + iq->dma_addr = iq->real_dma_addr + otx2_cpt_inst_grp_qlen_bytes; + + /* align pointers */ + iq->vaddr = ptr_align(iq->vaddr, otx2_cpt_inst_q_alignment); + iq->dma_addr = ptr_align(iq->dma_addr, + otx2_cpt_inst_q_alignment); + } + return 0; + +error: + otx2_cpt_free_instruction_queues(lfs); + return ret; +} + +static inline void otx2_cptlf_set_iqueues_base_addr( + struct otx2_cptlfs_info *lfs) +{ + union otx2_cptx_lf_q_base lf_q_base; + int slot; + + for 
(slot = 0; slot < lfs->lfs_num; slot++) { + lf_q_base.u = lfs->lf[slot].iqueue.dma_addr; + otx2_cpt_write64(lfs->reg_base, blkaddr_cpt0, slot, + otx2_cpt_lf_q_base, lf_q_base.u); + } +} + +static inline void otx2_cptlf_do_set_iqueue_size(struct otx2_cptlf_info *lf) +{ + union otx2_cptx_lf_q_size lf_q_size = { .u = 0x0 }; + + lf_q_size.s.size_div40 = otx2_cpt_size_div40; + otx2_cpt_write64(lf->lfs->reg_base, blkaddr_cpt0, lf->slot, + otx2_cpt_lf_q_size, lf_q_size.u); +} + +static inline void otx2_cptlf_set_iqueues_size(struct otx2_cptlfs_info *lfs) +{ + int slot; + + for (slot = 0; slot < lfs->lfs_num; slot++) + otx2_cptlf_do_set_iqueue_size(&lfs->lf[slot]); +} + +static inline void otx2_cptlf_do_disable_iqueue(struct otx2_cptlf_info *lf) +{ + union otx2_cptx_lf_ctl lf_ctl = { .u = 0x0 }; + union otx2_cptx_lf_inprog lf_inprog; + int timeout = 20; + + /* disable instructions enqueuing */ + otx2_cpt_write64(lf->lfs->reg_base, blkaddr_cpt0, lf->slot, + otx2_cpt_lf_ctl, lf_ctl.u); + + /* wait for instruction queue to become empty */ + do { + lf_inprog.u = otx2_cpt_read64(lf->lfs->reg_base, blkaddr_cpt0, + lf->slot, otx2_cpt_lf_inprog); + if (!lf_inprog.s.inflight) + break; + + usleep_range(10000, 20000); + if (timeout-- < 0) { + dev_err(&lf->lfs->pdev->dev, + "error lf %d is still busy. 
", lf->slot); + break; + } + + } while (1); + + /* + * disable executions in the lf's queue, + * the queue should be empty at this point + */ + lf_inprog.s.eena = 0x0; + otx2_cpt_write64(lf->lfs->reg_base, blkaddr_cpt0, lf->slot, + otx2_cpt_lf_inprog, lf_inprog.u); +} + +static inline void otx2_cptlf_disable_iqueues(struct otx2_cptlfs_info *lfs) +{ + int slot; + + for (slot = 0; slot < lfs->lfs_num; slot++) + otx2_cptlf_do_disable_iqueue(&lfs->lf[slot]); +} + +static inline void otx2_cptlf_set_iqueue_enq(struct otx2_cptlf_info *lf, + bool enable) +{ + union otx2_cptx_lf_ctl lf_ctl; + + lf_ctl.u = otx2_cpt_read64(lf->lfs->reg_base, blkaddr_cpt0, lf->slot, + otx2_cpt_lf_ctl); + + /* set iqueue's enqueuing */ + lf_ctl.s.ena = enable ? 0x1 : 0x0; + otx2_cpt_write64(lf->lfs->reg_base, blkaddr_cpt0, lf->slot, + otx2_cpt_lf_ctl, lf_ctl.u); +} + +static inline void otx2_cptlf_enable_iqueue_enq(struct otx2_cptlf_info *lf) +{ + otx2_cptlf_set_iqueue_enq(lf, true); +} + +static inline void otx2_cptlf_set_iqueue_exec(struct otx2_cptlf_info *lf, + bool enable) +{ + union otx2_cptx_lf_inprog lf_inprog; + + lf_inprog.u = otx2_cpt_read64(lf->lfs->reg_base, blkaddr_cpt0, lf->slot, + otx2_cpt_lf_inprog); + + /* set iqueue's execution */ + lf_inprog.s.eena = enable ? 
0x1 : 0x0; + otx2_cpt_write64(lf->lfs->reg_base, blkaddr_cpt0, lf->slot, + otx2_cpt_lf_inprog, lf_inprog.u); +} + +static inline void otx2_cptlf_enable_iqueue_exec(struct otx2_cptlf_info *lf) +{ + otx2_cptlf_set_iqueue_exec(lf, true); +} + +static inline void otx2_cptlf_disable_iqueue_exec(struct otx2_cptlf_info *lf) +{ + otx2_cptlf_set_iqueue_exec(lf, false); +} + +static inline void otx2_cptlf_enable_iqueues(struct otx2_cptlfs_info *lfs) +{ + int slot; + + for (slot = 0; slot < lfs->lfs_num; slot++) { + otx2_cptlf_enable_iqueue_exec(&lfs->lf[slot]); + otx2_cptlf_enable_iqueue_enq(&lfs->lf[slot]); + } +} + +int otx2_cptlf_init(struct otx2_cptlfs_info *lfs, u8 eng_grp_msk, int pri, + int lfs_num); +void otx2_cptlf_shutdown(struct otx2_cptlfs_info *lfs); +int otx2_cptlf_register_interrupts(struct otx2_cptlfs_info *lfs); +void otx2_cptlf_unregister_interrupts(struct otx2_cptlfs_info *lfs); +void otx2_cptlf_free_irqs_affinity(struct otx2_cptlfs_info *lfs); +int otx2_cptlf_set_irqs_affinity(struct otx2_cptlfs_info *lfs); + +#endif /* __otx2_cptlf_h */ diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf.h b/drivers/crypto/marvell/octeontx2/otx2_cptpf.h --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf.h +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf.h +#include "otx2_cptlf.h" + struct otx2_cptlfs_info lfs; /* cpt lfs attached to this pf */ diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c + case mbox_msg_attach_resources: + if (!msg->rc) + cptpf->lfs.are_lfs_attached = 1; + break; + case mbox_msg_detach_resources: + if (!msg->rc) + cptpf->lfs.are_lfs_attached = 0; + break;
|
Cryptography hardware acceleration
|
64506017030dd44f0fc91c5110840ac7996213dd
|
srujana challa
|
drivers
|
crypto
|
marvell, octeontx2
|
crypto: octeontx2 - add support to get engine capabilities
|
adds support to get engine capabilities and adds a new mailbox to share the capabilities with the vf driver.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; initial btrfs support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; napi polling can now be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add support for marvell octeontx2 cpt engine
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['octeontx2']
|
['h', 'c']
| 8
| 350
| 0
|
--- diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h --- a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h +++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h +#define otx2_cpt_dma_minalign 128 +#define mbox_msg_get_caps 0xbfd +/* cpt hw capabilities */ +union otx2_cpt_eng_caps { + u64 u; + struct { + u64 reserved_0_4:5; + u64 mul:1; + u64 sha1_sha2:1; + u64 chacha20:1; + u64 zuc_snow3g:1; + u64 sha3:1; + u64 aes:1; + u64 kasumi:1; + u64 des:1; + u64 crc:1; + u64 reserved_14_63:50; + }; +}; + +/* + * message request and response to get hw capabilities for each + * engine type (se, ie, ae). + * this messages are only used between cpt pf <=> cpt vf + */ +struct otx2_cpt_caps_msg { + struct mbox_msghdr hdr; +}; + +struct otx2_cpt_caps_rsp { + struct mbox_msghdr hdr; + u16 cpt_pf_drv_version; + u8 cpt_revision; + union otx2_cpt_eng_caps eng_caps[otx2_cpt_max_eng_types]; +}; + diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_reqmgr.h b/drivers/crypto/marvell/octeontx2/otx2_cpt_reqmgr.h --- /dev/null +++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_reqmgr.h +/* spdx-license-identifier: gpl-2.0-only + * copyright (c) 2020 marvell. 
+ */ + +#ifndef __otx2_cpt_reqmgr_h +#define __otx2_cpt_reqmgr_h + +#include "otx2_cpt_common.h" + +/* completion code size and initial value */ +#define otx2_cpt_completion_code_size 8 +#define otx2_cpt_completion_code_init otx2_cpt_comp_e_notdone + +union otx2_cpt_opcode { + u16 flags; + struct { + u8 major; + u8 minor; + } s; +}; + +/* + * cpt_inst_s software command definitions + * words ei (0-3) + */ +union otx2_cpt_iq_cmd_word0 { + u64 u; + struct { + __be16 opcode; + __be16 param1; + __be16 param2; + __be16 dlen; + } s; +}; + +union otx2_cpt_iq_cmd_word3 { + u64 u; + struct { + u64 cptr:61; + u64 grp:3; + } s; +}; + +struct otx2_cpt_iq_command { + union otx2_cpt_iq_cmd_word0 cmd; + u64 dptr; + u64 rptr; + union otx2_cpt_iq_cmd_word3 cptr; +}; + +#endif /* __otx2_cpt_reqmgr_h */ diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptlf.h b/drivers/crypto/marvell/octeontx2/otx2_cptlf.h --- a/drivers/crypto/marvell/octeontx2/otx2_cptlf.h +++ b/drivers/crypto/marvell/octeontx2/otx2_cptlf.h +#include <linux/soc/marvell/octeontx2/asm.h> +#include "otx2_cpt_reqmgr.h" +static inline void otx2_cpt_fill_inst(union otx2_cpt_inst_s *cptinst, + struct otx2_cpt_iq_command *iq_cmd, + u64 comp_baddr) +{ + cptinst->u[0] = 0x0; + cptinst->s.doneint = true; + cptinst->s.res_addr = comp_baddr; + cptinst->u[2] = 0x0; + cptinst->u[3] = 0x0; + cptinst->s.ei0 = iq_cmd->cmd.u; + cptinst->s.ei1 = iq_cmd->dptr; + cptinst->s.ei2 = iq_cmd->rptr; + cptinst->s.ei3 = iq_cmd->cptr.u; +} + +/* + * on octeontx2 platform the parameter insts_num is used as a count of + * instructions to be enqueued. 
the valid values for insts_num are: + * 1 - 1 cpt instruction will be enqueued during lmtst operation + * 2 - 2 cpt instructions will be enqueued during lmtst operation + */ +static inline void otx2_cpt_send_cmd(union otx2_cpt_inst_s *cptinst, + u32 insts_num, struct otx2_cptlf_info *lf) +{ + void __iomem *lmtline = lf->lmtline; + long ret; + + /* + * make sure memory areas pointed in cpt_inst_s + * are flushed before the instruction is sent to cpt + */ + dma_wmb(); + + do { + /* copy cpt command to lmtline */ + memcpy_toio(lmtline, cptinst, insts_num * otx2_cpt_inst_size); + + /* + * ldeor initiates atomic transfer to i/o device + * the following will cause the lmtst to fail (the ldeor + * returns zero): + * - no stores have been performed to the lmtline since it was + * last invalidated. + * - the bytes which have been stored to lmtline since it was + * last invalidated form a pattern that is non-contiguous, does + * not start at byte 0, or does not end on a 8-byte boundary. + * (i.e.comprises a formation of other than 116 8-byte + * words.) + * + * these rules are designed such that an operating system + * context switch or hypervisor guest switch need have no + * knowledge of the lmtst operations; the switch code does not + * need to store to lmtcancel. also note as lmtline data cannot + * be read, there is no information leakage between processes. 
+ */ + ret = otx2_lmt_flush(lf->ioreg); + + } while (!ret); +} + diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf.h b/drivers/crypto/marvell/octeontx2/otx2_cptpf.h --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf.h +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf.h + /* hw capabilities for each engine type */ + union otx2_cpt_eng_caps eng_caps[otx2_cpt_max_eng_types]; + bool is_eng_caps_discovered; diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c + /* get cpt hw capabilities using load_fvc operation. */ + ret = otx2_cpt_discover_eng_capabilities(cptpf); + if (ret) + goto disable_intr; + diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c +/* + * cpt pf driver version, it will be incremented by 1 for every feature + * addition in cpt mailbox messages. 
+ */ +#define otx2_cpt_pf_drv_version 0x1 + +static int handle_msg_get_caps(struct otx2_cptpf_dev *cptpf, + struct otx2_cptvf_info *vf, + struct mbox_msghdr *req) +{ + struct otx2_cpt_caps_rsp *rsp; + + rsp = (struct otx2_cpt_caps_rsp *) + otx2_mbox_alloc_msg(&cptpf->vfpf_mbox, vf->vf_id, + sizeof(*rsp)); + if (!rsp) + return -enomem; + + rsp->hdr.id = mbox_msg_get_caps; + rsp->hdr.sig = otx2_mbox_rsp_sig; + rsp->hdr.pcifunc = req->pcifunc; + rsp->cpt_pf_drv_version = otx2_cpt_pf_drv_version; + rsp->cpt_revision = cptpf->pdev->revision; + memcpy(&rsp->eng_caps, &cptpf->eng_caps, sizeof(rsp->eng_caps)); + + return 0; +} + + case mbox_msg_get_caps: + err = handle_msg_get_caps(cptpf, vf, req); + break; diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c +#include "otx2_cptlf.h" +#include "otx2_cpt_reqmgr.h" + +static int create_eng_caps_discovery_grps(struct pci_dev *pdev, + struct otx2_cpt_eng_grps *eng_grps) +{ + struct otx2_cpt_uc_info_t *uc_info[otx2_cpt_max_etypes_per_grp] = { }; + struct otx2_cpt_engines engs[otx2_cpt_max_etypes_per_grp] = { {0} }; + struct fw_info_t fw_info; + int ret; + + ret = cpt_ucode_load_fw(pdev, &fw_info); + if (ret) + return ret; + + uc_info[0] = get_ucode(&fw_info, otx2_cpt_se_types); + if (uc_info[0] == null) { + dev_err(&pdev->dev, "unable to find firmware for ae "); + ret = -einval; + goto release_fw; + } + engs[0].type = otx2_cpt_ae_types; + engs[0].count = 2; + + ret = create_engine_group(&pdev->dev, eng_grps, engs, 1, + (void **) uc_info, 0); + if (ret) + goto release_fw; + + uc_info[0] = get_ucode(&fw_info, otx2_cpt_se_types); + if (uc_info[0] == null) { + dev_err(&pdev->dev, "unable to find firmware for se "); + ret = -einval; + goto delete_eng_grp; + } + engs[0].type = otx2_cpt_se_types; + engs[0].count = 2; + + ret = 
create_engine_group(&pdev->dev, eng_grps, engs, 1, + (void **) uc_info, 0); + if (ret) + goto delete_eng_grp; + + uc_info[0] = get_ucode(&fw_info, otx2_cpt_ie_types); + if (uc_info[0] == null) { + dev_err(&pdev->dev, "unable to find firmware for ie "); + ret = -einval; + goto delete_eng_grp; + } + engs[0].type = otx2_cpt_ie_types; + engs[0].count = 2; + + ret = create_engine_group(&pdev->dev, eng_grps, engs, 1, + (void **) uc_info, 0); + if (ret) + goto delete_eng_grp; + + cpt_ucode_release_fw(&fw_info); + return 0; + +delete_eng_grp: + delete_engine_grps(pdev, eng_grps); +release_fw: + cpt_ucode_release_fw(&fw_info); + return ret; +} + +/* + * get cpt hw capabilities using load_fvc operation. + */ +int otx2_cpt_discover_eng_capabilities(struct otx2_cptpf_dev *cptpf) +{ + struct otx2_cptlfs_info *lfs = &cptpf->lfs; + struct otx2_cpt_iq_command iq_cmd; + union otx2_cpt_opcode opcode; + union otx2_cpt_res_s *result; + union otx2_cpt_inst_s inst; + dma_addr_t rptr_baddr; + struct pci_dev *pdev; + u32 len, compl_rlen; + int ret, etype; + void *rptr; + + /* + * we don't get capabilities if it was already done + * (when user enabled vfs for the first time) + */ + if (cptpf->is_eng_caps_discovered) + return 0; + + pdev = cptpf->pdev; + /* + * create engine groups for each type to submit load_fvc op and + * get engine's capabilities. 
+ */ + ret = create_eng_caps_discovery_grps(pdev, &cptpf->eng_grps); + if (ret) + goto delete_grps; + + lfs->pdev = pdev; + lfs->reg_base = cptpf->reg_base; + lfs->mbox = &cptpf->afpf_mbox; + ret = otx2_cptlf_init(&cptpf->lfs, otx2_cpt_all_eng_grps_mask, + otx2_cpt_queue_hi_prio, 1); + if (ret) + goto delete_grps; + + compl_rlen = align(sizeof(union otx2_cpt_res_s), otx2_cpt_dma_minalign); + len = compl_rlen + loadfvc_rlen; + + result = kzalloc(len, gfp_kernel); + if (!result) { + ret = -enomem; + goto lf_cleanup; + } + rptr_baddr = dma_map_single(&pdev->dev, (void *)result, len, + dma_bidirectional); + if (dma_mapping_error(&pdev->dev, rptr_baddr)) { + dev_err(&pdev->dev, "dma mapping failed "); + ret = -efault; + goto free_result; + } + rptr = (u8 *)result + compl_rlen; + + /* fill in the command */ + opcode.s.major = loadfvc_major_op; + opcode.s.minor = loadfvc_minor_op; + + iq_cmd.cmd.u = 0; + iq_cmd.cmd.s.opcode = cpu_to_be16(opcode.flags); + + /* 64-bit swap for microcode data reads, not needed for addresses */ + cpu_to_be64s(&iq_cmd.cmd.u); + iq_cmd.dptr = 0; + iq_cmd.rptr = rptr_baddr + compl_rlen; + iq_cmd.cptr.u = 0; + + for (etype = 1; etype < otx2_cpt_max_eng_types; etype++) { + result->s.compcode = otx2_cpt_completion_code_init; + iq_cmd.cptr.s.grp = otx2_cpt_get_eng_grp(&cptpf->eng_grps, + etype); + otx2_cpt_fill_inst(&inst, &iq_cmd, rptr_baddr); + otx2_cpt_send_cmd(&inst, 1, &cptpf->lfs.lf[0]); + + while (result->s.compcode == otx2_cpt_completion_code_init) + cpu_relax(); + + cptpf->eng_caps[etype].u = be64_to_cpup(rptr); + } + dma_unmap_single(&pdev->dev, rptr_baddr, len, dma_bidirectional); + cptpf->is_eng_caps_discovered = true; + +free_result: + kfree(result); +lf_cleanup: + otx2_cptlf_shutdown(&cptpf->lfs); +delete_grps: + delete_engine_grps(pdev, &cptpf->eng_grps); + + return ret; +} diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.h b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.h --- 
a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.h +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.h +int otx2_cpt_discover_eng_capabilities(struct otx2_cptpf_dev *cptpf);
|
Cryptography hardware acceleration
|
78506c2a1eac97504ff56de1c587bac403ca8dca
|
srujana challa
|
drivers
|
crypto
|
marvell, octeontx2
|
crypto: octeontx2 - add virtual function driver support
|
add support for the marvell octeontx2 cpt virtual function driver. this patch includes probe, pci-specific initialization and interrupt handling.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; initial btrfs support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; napi polling can now be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add support for marvell octeontx2 cpt engine
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['octeontx2']
|
['h', 'c', 'makefile']
| 6
| 373
| 1
|
--- diff --git a/drivers/crypto/marvell/octeontx2/makefile b/drivers/crypto/marvell/octeontx2/makefile --- a/drivers/crypto/marvell/octeontx2/makefile +++ b/drivers/crypto/marvell/octeontx2/makefile -obj-$(config_crypto_dev_octeontx2_cpt) += octeontx2-cpt.o +obj-$(config_crypto_dev_octeontx2_cpt) += octeontx2-cpt.o octeontx2-cptvf.o +octeontx2-cptvf-objs := otx2_cptvf_main.o otx2_cptvf_mbox.o otx2_cptlf.o \ + otx2_cpt_mbox_common.o diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h --- a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h +++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h +int otx2_cpt_msix_offset_msg(struct otx2_cptlfs_info *lfs); diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_mbox_common.c b/drivers/crypto/marvell/octeontx2/otx2_cpt_mbox_common.c --- a/drivers/crypto/marvell/octeontx2/otx2_cpt_mbox_common.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_mbox_common.c + +int otx2_cpt_msix_offset_msg(struct otx2_cptlfs_info *lfs) +{ + struct otx2_mbox *mbox = lfs->mbox; + struct pci_dev *pdev = lfs->pdev; + struct mbox_msghdr *req; + int ret, i; + + req = otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*req), + sizeof(struct msix_offset_rsp)); + if (req == null) { + dev_err(&pdev->dev, "rvu mbox failed to get message. "); + return -efault; + } + + req->id = mbox_msg_msix_offset; + req->sig = otx2_mbox_req_sig; + req->pcifunc = 0; + ret = otx2_cpt_send_mbox_msg(mbox, pdev); + if (ret) + return ret; + + for (i = 0; i < lfs->lfs_num; i++) { + if (lfs->lf[i].msix_offset == msix_vector_invalid) { + dev_err(&pdev->dev, + "invalid msix offset %d for lf %d ", + lfs->lf[i].msix_offset, i); + return -einval; + } + } + return ret; +} diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptvf.h b/drivers/crypto/marvell/octeontx2/otx2_cptvf.h --- /dev/null +++ b/drivers/crypto/marvell/octeontx2/otx2_cptvf.h +/* spdx-license-identifier: gpl-2.0-only + * copyright (c) 2020 marvell. 
+ */ + +#ifndef __otx2_cptvf_h +#define __otx2_cptvf_h + +#include "mbox.h" +#include "otx2_cptlf.h" + +struct otx2_cptvf_dev { + void __iomem *reg_base; /* register start address */ + void __iomem *pfvf_mbox_base; /* pf-vf mbox start address */ + struct pci_dev *pdev; /* pci device handle */ + struct otx2_cptlfs_info lfs; /* cpt lfs attached to this vf */ + u8 vf_id; /* virtual function index */ + + /* pf <=> vf mbox */ + struct otx2_mbox pfvf_mbox; + struct work_struct pfvf_mbox_work; + struct workqueue_struct *pfvf_mbox_wq; +}; + +irqreturn_t otx2_cptvf_pfvf_mbox_intr(int irq, void *arg); +void otx2_cptvf_pfvf_mbox_handler(struct work_struct *work); +int otx2_cptvf_send_eng_grp_num_msg(struct otx2_cptvf_dev *cptvf, int eng_type); + +#endif /* __otx2_cptvf_h */ diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptvf_main.c b/drivers/crypto/marvell/octeontx2/otx2_cptvf_main.c --- /dev/null +++ b/drivers/crypto/marvell/octeontx2/otx2_cptvf_main.c +// spdx-license-identifier: gpl-2.0-only +/* copyright (c) 2020 marvell. 
*/ + +#include "otx2_cpt_common.h" +#include "otx2_cptvf.h" +#include <rvu_reg.h> + +#define otx2_cptvf_drv_name "octeontx2-cptvf" + +static void cptvf_enable_pfvf_mbox_intrs(struct otx2_cptvf_dev *cptvf) +{ + /* clear interrupt if any */ + otx2_cpt_write64(cptvf->reg_base, blkaddr_rvum, 0, otx2_rvu_vf_int, + 0x1ull); + + /* enable pf-vf interrupt */ + otx2_cpt_write64(cptvf->reg_base, blkaddr_rvum, 0, + otx2_rvu_vf_int_ena_w1s, 0x1ull); +} + +static void cptvf_disable_pfvf_mbox_intrs(struct otx2_cptvf_dev *cptvf) +{ + /* disable pf-vf interrupt */ + otx2_cpt_write64(cptvf->reg_base, blkaddr_rvum, 0, + otx2_rvu_vf_int_ena_w1c, 0x1ull); + + /* clear interrupt if any */ + otx2_cpt_write64(cptvf->reg_base, blkaddr_rvum, 0, otx2_rvu_vf_int, + 0x1ull); +} + +static int cptvf_register_interrupts(struct otx2_cptvf_dev *cptvf) +{ + int ret, irq; + u32 num_vec; + + num_vec = pci_msix_vec_count(cptvf->pdev); + if (num_vec <= 0) + return -einval; + + /* enable msi-x */ + ret = pci_alloc_irq_vectors(cptvf->pdev, num_vec, num_vec, + pci_irq_msix); + if (ret < 0) { + dev_err(&cptvf->pdev->dev, + "request for %d msix vectors failed ", num_vec); + return ret; + } + irq = pci_irq_vector(cptvf->pdev, otx2_cpt_vf_int_vec_e_mbox); + /* register vf<=>pf mailbox interrupt handler */ + ret = devm_request_irq(&cptvf->pdev->dev, irq, + otx2_cptvf_pfvf_mbox_intr, 0, + "cptpfvf mbox", cptvf); + if (ret) + return ret; + /* enable pf-vf mailbox interrupts */ + cptvf_enable_pfvf_mbox_intrs(cptvf); + + ret = otx2_cpt_send_ready_msg(&cptvf->pfvf_mbox, cptvf->pdev); + if (ret) { + dev_warn(&cptvf->pdev->dev, + "pf not responding to mailbox, deferring probe "); + cptvf_disable_pfvf_mbox_intrs(cptvf); + return -eprobe_defer; + } + return 0; +} + +static int cptvf_pfvf_mbox_init(struct otx2_cptvf_dev *cptvf) +{ + int ret; + + cptvf->pfvf_mbox_wq = alloc_workqueue("cpt_pfvf_mailbox", + wq_unbound | wq_highpri | + wq_mem_reclaim, 1); + if (!cptvf->pfvf_mbox_wq) + return -enomem; + + ret = 
otx2_mbox_init(&cptvf->pfvf_mbox, cptvf->pfvf_mbox_base, + cptvf->pdev, cptvf->reg_base, mbox_dir_vfpf, 1); + if (ret) + goto free_wqe; + + init_work(&cptvf->pfvf_mbox_work, otx2_cptvf_pfvf_mbox_handler); + return 0; + +free_wqe: + destroy_workqueue(cptvf->pfvf_mbox_wq); + return ret; +} + +static void cptvf_pfvf_mbox_destroy(struct otx2_cptvf_dev *cptvf) +{ + destroy_workqueue(cptvf->pfvf_mbox_wq); + otx2_mbox_destroy(&cptvf->pfvf_mbox); +} + +static int otx2_cptvf_probe(struct pci_dev *pdev, + const struct pci_device_id *ent) +{ + struct device *dev = &pdev->dev; + resource_size_t offset, size; + struct otx2_cptvf_dev *cptvf; + int ret; + + cptvf = devm_kzalloc(dev, sizeof(*cptvf), gfp_kernel); + if (!cptvf) + return -enomem; + + ret = pcim_enable_device(pdev); + if (ret) { + dev_err(dev, "failed to enable pci device "); + goto clear_drvdata; + } + + ret = dma_set_mask_and_coherent(dev, dma_bit_mask(48)); + if (ret) { + dev_err(dev, "unable to get usable dma configuration "); + goto clear_drvdata; + } + /* map vf's configuration registers */ + ret = pcim_iomap_regions_request_all(pdev, 1 << pci_pf_reg_bar_num, + otx2_cptvf_drv_name); + if (ret) { + dev_err(dev, "couldn't get pci resources 0x%x ", ret); + goto clear_drvdata; + } + pci_set_master(pdev); + pci_set_drvdata(pdev, cptvf); + cptvf->pdev = pdev; + + cptvf->reg_base = pcim_iomap_table(pdev)[pci_pf_reg_bar_num]; + + offset = pci_resource_start(pdev, pci_mbox_bar_num); + size = pci_resource_len(pdev, pci_mbox_bar_num); + /* map pf-vf mailbox memory */ + cptvf->pfvf_mbox_base = devm_ioremap_wc(dev, offset, size); + if (!cptvf->pfvf_mbox_base) { + dev_err(&pdev->dev, "unable to map bar4 "); + ret = -enodev; + goto clear_drvdata; + } + /* initialize pf<=>vf mailbox */ + ret = cptvf_pfvf_mbox_init(cptvf); + if (ret) + goto clear_drvdata; + + /* register interrupts */ + ret = cptvf_register_interrupts(cptvf); + if (ret) + goto destroy_pfvf_mbox; + + return 0; + +destroy_pfvf_mbox: + 
cptvf_pfvf_mbox_destroy(cptvf); +clear_drvdata: + pci_set_drvdata(pdev, null); + + return ret; +} + +static void otx2_cptvf_remove(struct pci_dev *pdev) +{ + struct otx2_cptvf_dev *cptvf = pci_get_drvdata(pdev); + + if (!cptvf) { + dev_err(&pdev->dev, "invalid cpt vf device. "); + return; + } + /* disable pf-vf mailbox interrupt */ + cptvf_disable_pfvf_mbox_intrs(cptvf); + /* destroy pf-vf mbox */ + cptvf_pfvf_mbox_destroy(cptvf); + pci_set_drvdata(pdev, null); +} + +/* supported devices */ +static const struct pci_device_id otx2_cptvf_id_table[] = { + {pci_vdevice(cavium, otx2_cpt_pci_vf_device_id), 0}, + { 0, } /* end of table */ +}; + +static struct pci_driver otx2_cptvf_pci_driver = { + .name = otx2_cptvf_drv_name, + .id_table = otx2_cptvf_id_table, + .probe = otx2_cptvf_probe, + .remove = otx2_cptvf_remove, +}; + +module_pci_driver(otx2_cptvf_pci_driver); + +module_author("marvell"); +module_description("marvell octeontx2 cpt virtual function driver"); +module_license("gpl v2"); +module_device_table(pci, otx2_cptvf_id_table); diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptvf_mbox.c b/drivers/crypto/marvell/octeontx2/otx2_cptvf_mbox.c --- /dev/null +++ b/drivers/crypto/marvell/octeontx2/otx2_cptvf_mbox.c +// spdx-license-identifier: gpl-2.0-only +/* copyright (c) 2020 marvell. 
*/ + +#include "otx2_cpt_common.h" +#include "otx2_cptvf.h" +#include <rvu_reg.h> + +irqreturn_t otx2_cptvf_pfvf_mbox_intr(int __always_unused irq, void *arg) +{ + struct otx2_cptvf_dev *cptvf = arg; + u64 intr; + + /* read the interrupt bits */ + intr = otx2_cpt_read64(cptvf->reg_base, blkaddr_rvum, 0, + otx2_rvu_vf_int); + + if (intr & 0x1ull) { + /* schedule work queue function to process the mbox request */ + queue_work(cptvf->pfvf_mbox_wq, &cptvf->pfvf_mbox_work); + /* clear and ack the interrupt */ + otx2_cpt_write64(cptvf->reg_base, blkaddr_rvum, 0, + otx2_rvu_vf_int, 0x1ull); + } + return irq_handled; +} + +static void process_pfvf_mbox_mbox_msg(struct otx2_cptvf_dev *cptvf, + struct mbox_msghdr *msg) +{ + struct otx2_cptlfs_info *lfs = &cptvf->lfs; + struct cpt_rd_wr_reg_msg *rsp_reg; + struct msix_offset_rsp *rsp_msix; + int i; + + if (msg->id >= mbox_msg_max) { + dev_err(&cptvf->pdev->dev, + "mbox msg with unknown id %d ", msg->id); + return; + } + if (msg->sig != otx2_mbox_rsp_sig) { + dev_err(&cptvf->pdev->dev, + "mbox msg with wrong signature %x, id %d ", + msg->sig, msg->id); + return; + } + switch (msg->id) { + case mbox_msg_ready: + cptvf->vf_id = ((msg->pcifunc >> rvu_pfvf_func_shift) + & rvu_pfvf_func_mask) - 1; + break; + case mbox_msg_attach_resources: + /* check if resources were successfully attached */ + if (!msg->rc) + lfs->are_lfs_attached = 1; + break; + case mbox_msg_detach_resources: + /* check if resources were successfully detached */ + if (!msg->rc) + lfs->are_lfs_attached = 0; + break; + case mbox_msg_msix_offset: + rsp_msix = (struct msix_offset_rsp *) msg; + for (i = 0; i < rsp_msix->cptlfs; i++) + lfs->lf[i].msix_offset = rsp_msix->cptlf_msixoff[i]; + break; + case mbox_msg_cpt_rd_wr_register: + rsp_reg = (struct cpt_rd_wr_reg_msg *) msg; + if (msg->rc) { + dev_err(&cptvf->pdev->dev, + "reg %llx rd/wr(%d) failed %d ", + rsp_reg->reg_offset, rsp_reg->is_write, + msg->rc); + return; + } + if (!rsp_reg->is_write) + *rsp_reg->ret_val 
= rsp_reg->val; + break; + default: + dev_err(&cptvf->pdev->dev, "unsupported msg %d received. ", + msg->id); + break; + } +} + +void otx2_cptvf_pfvf_mbox_handler(struct work_struct *work) +{ + struct otx2_cptvf_dev *cptvf; + struct otx2_mbox *pfvf_mbox; + struct otx2_mbox_dev *mdev; + struct mbox_hdr *rsp_hdr; + struct mbox_msghdr *msg; + int offset, i; + + /* sync with mbox memory region */ + smp_rmb(); + + cptvf = container_of(work, struct otx2_cptvf_dev, pfvf_mbox_work); + pfvf_mbox = &cptvf->pfvf_mbox; + mdev = &pfvf_mbox->dev[0]; + rsp_hdr = (struct mbox_hdr *)(mdev->mbase + pfvf_mbox->rx_start); + if (rsp_hdr->num_msgs == 0) + return; + offset = align(sizeof(struct mbox_hdr), mbox_msg_align); + + for (i = 0; i < rsp_hdr->num_msgs; i++) { + msg = (struct mbox_msghdr *)(mdev->mbase + pfvf_mbox->rx_start + + offset); + process_pfvf_mbox_mbox_msg(cptvf, msg); + offset = msg->next_msgoff; + mdev->msgs_acked++; + } + otx2_mbox_reset(pfvf_mbox, 0); +}
|
Cryptography hardware acceleration
|
19d8e8c7be1567b92e99f7201b8e9b286d04dc0f
|
srujana challa
|
drivers
|
crypto
|
marvell, octeontx2
|
crypto: octeontx2 - add support to process the crypto request
|
attach lfs to the cpt vf to process crypto requests and register lf interrupts.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; initial btrfs support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; napi polling can now be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add support for marvell octeontx2 cpt engine
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['octeontx2']
|
['h', 'c', 'makefile']
| 11
| 1,034
| 1
|
--- diff --git a/drivers/crypto/marvell/octeontx2/makefile b/drivers/crypto/marvell/octeontx2/makefile --- a/drivers/crypto/marvell/octeontx2/makefile +++ b/drivers/crypto/marvell/octeontx2/makefile - otx2_cpt_mbox_common.o + otx2_cpt_mbox_common.o otx2_cptvf_reqmgr.o diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h --- a/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h +++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_common.h +#define otx2_cpt_rvu_pffunc(pf, func) \ + ((((pf) & rvu_pfvf_pf_mask) << rvu_pfvf_pf_shift) | \ + (((func) & rvu_pfvf_func_mask) << rvu_pfvf_func_shift)) +#define mbox_msg_get_kvf_limits 0xbfc +/* + * message request and response to get kernel crypto limits + * this messages are only used between cpt pf <-> cpt vf + */ +struct otx2_cpt_kvf_limits_msg { + struct mbox_msghdr hdr; +}; + +struct otx2_cpt_kvf_limits_rsp { + struct mbox_msghdr hdr; + u8 kvf_limits; +}; + diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_reqmgr.h b/drivers/crypto/marvell/octeontx2/otx2_cpt_reqmgr.h --- a/drivers/crypto/marvell/octeontx2/otx2_cpt_reqmgr.h +++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_reqmgr.h +/* + * maximum total number of sg buffers is 100, we divide it equally + * between input and output + */ +#define otx2_cpt_max_sg_in_cnt 50 +#define otx2_cpt_max_sg_out_cnt 50 + +/* dma mode direct or sg */ +#define otx2_cpt_dma_mode_direct 0 +#define otx2_cpt_dma_mode_sg 1 + +/* context source cptr or dptr */ +#define otx2_cpt_from_cptr 0 +#define otx2_cpt_from_dptr 1 + +#define otx2_cpt_max_req_size 65535 +struct otx2_cptvf_request { + u32 param1; + u32 param2; + u16 dlen; + union otx2_cpt_opcode opcode; +}; + +struct otx2_cpt_pending_entry { + void *completion_addr; /* completion address */ + void *info; + /* kernel async request callback */ + void (*callback)(int status, void *arg1, void *arg2); + struct crypto_async_request *areq; /* async request callback arg */ + u8 
resume_sender; /* notify sender to resume sending requests */ + u8 busy; /* entry status (free/busy) */ +}; + +struct otx2_cpt_pending_queue { + struct otx2_cpt_pending_entry *head; /* head of the queue */ + u32 front; /* process work from here */ + u32 rear; /* append new work here */ + u32 pending_count; /* pending requests count */ + u32 qlen; /* queue length */ + spinlock_t lock; /* queue lock */ +}; + +struct otx2_cpt_buf_ptr { + u8 *vptr; + dma_addr_t dma_addr; + u16 size; +}; + +union otx2_cpt_ctrl_info { + u32 flags; + struct { +#if defined(__big_endian_bitfield) + u32 reserved_6_31:26; + u32 grp:3; /* group bits */ + u32 dma_mode:2; /* dma mode */ + u32 se_req:1; /* to se core */ +#else + u32 se_req:1; /* to se core */ + u32 dma_mode:2; /* dma mode */ + u32 grp:3; /* group bits */ + u32 reserved_6_31:26; +#endif + } s; +}; + +struct otx2_cpt_req_info { + /* kernel async request callback */ + void (*callback)(int status, void *arg1, void *arg2); + struct crypto_async_request *areq; /* async request callback arg */ + struct otx2_cptvf_request req;/* request information (core specific) */ + union otx2_cpt_ctrl_info ctrl;/* user control information */ + struct otx2_cpt_buf_ptr in[otx2_cpt_max_sg_in_cnt]; + struct otx2_cpt_buf_ptr out[otx2_cpt_max_sg_out_cnt]; + u8 *iv_out; /* iv to send back */ + u16 rlen; /* output length */ + u8 in_cnt; /* number of input buffers */ + u8 out_cnt; /* number of output buffers */ + u8 req_type; /* type of request */ + u8 is_enc; /* is a request an encryption request */ + u8 is_trunc_hmac;/* is truncated hmac used */ +}; + +struct otx2_cpt_inst_info { + struct otx2_cpt_pending_entry *pentry; + struct otx2_cpt_req_info *req; + struct pci_dev *pdev; + void *completion_addr; + u8 *out_buffer; + u8 *in_buffer; + dma_addr_t dptr_baddr; + dma_addr_t rptr_baddr; + dma_addr_t comp_baddr; + unsigned long time_in; + u32 dlen; + u32 dma_len; + u8 extra_time; +}; + +struct otx2_cpt_sglist_component { + __be16 len0; + __be16 len1; + __be16 
len2; + __be16 len3; + __be64 ptr0; + __be64 ptr1; + __be64 ptr2; + __be64 ptr3; +}; + +static inline void otx2_cpt_info_destroy(struct pci_dev *pdev, + struct otx2_cpt_inst_info *info) +{ + struct otx2_cpt_req_info *req; + int i; + + if (info->dptr_baddr) + dma_unmap_single(&pdev->dev, info->dptr_baddr, + info->dma_len, dma_bidirectional); + + if (info->req) { + req = info->req; + for (i = 0; i < req->out_cnt; i++) { + if (req->out[i].dma_addr) + dma_unmap_single(&pdev->dev, + req->out[i].dma_addr, + req->out[i].size, + dma_bidirectional); + } + + for (i = 0; i < req->in_cnt; i++) { + if (req->in[i].dma_addr) + dma_unmap_single(&pdev->dev, + req->in[i].dma_addr, + req->in[i].size, + dma_bidirectional); + } + } + kfree(info); +} + +struct otx2_cptlf_wqe; +int otx2_cpt_do_request(struct pci_dev *pdev, struct otx2_cpt_req_info *req, + int cpu_num); +void otx2_cpt_post_process(struct otx2_cptlf_wqe *wqe); + diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptlf.h b/drivers/crypto/marvell/octeontx2/otx2_cptlf.h --- a/drivers/crypto/marvell/octeontx2/otx2_cptlf.h +++ b/drivers/crypto/marvell/octeontx2/otx2_cptlf.h + struct otx2_cpt_pending_queue pqueue; /* pending queue */ + u8 kcrypto_eng_grp_num; /* kernel crypto engine group number */ + u8 kvf_limits; /* kernel crypto limits */ +static inline bool otx2_cptlf_started(struct otx2_cptlfs_info *lfs) +{ + return atomic_read(&lfs->state) == otx2_cptlf_started; +} + diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf.h b/drivers/crypto/marvell/octeontx2/otx2_cptpf.h --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf.h +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf.h + u8 kvf_limits; /* kernel crypto limits */ diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_main.c +static ssize_t kvf_limits_show(struct device *dev, + struct device_attribute 
*attr, char *buf) +{ + struct otx2_cptpf_dev *cptpf = dev_get_drvdata(dev); + + return sprintf(buf, "%d ", cptpf->kvf_limits); +} + +static ssize_t kvf_limits_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) +{ + struct otx2_cptpf_dev *cptpf = dev_get_drvdata(dev); + int lfs_num; + + if (kstrtoint(buf, 0, &lfs_num)) { + dev_err(dev, "lfs count %d must be in range [1 - %d] ", + lfs_num, num_online_cpus()); + return -einval; + } + if (lfs_num < 1 || lfs_num > num_online_cpus()) { + dev_err(dev, "lfs count %d must be in range [1 - %d] ", + lfs_num, num_online_cpus()); + return -einval; + } + cptpf->kvf_limits = lfs_num; + + return count; +} + +static device_attr_rw(kvf_limits); +static struct attribute *cptpf_attrs[] = { + &dev_attr_kvf_limits.attr, + null +}; + +static const struct attribute_group cptpf_sysfs_group = { + .attrs = cptpf_attrs, +}; + + err = sysfs_create_group(&dev->kobj, &cptpf_sysfs_group); + if (err) + goto cleanup_eng_grps; +cleanup_eng_grps: + otx2_cpt_cleanup_eng_grps(pdev, &cptpf->eng_grps); + /* delete sysfs entry created for kernel vf limits */ + sysfs_remove_group(&pdev->dev.kobj, &cptpf_sysfs_group); diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c --- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_mbox.c +static int handle_msg_kvf_limits(struct otx2_cptpf_dev *cptpf, + struct otx2_cptvf_info *vf, + struct mbox_msghdr *req) +{ + struct otx2_cpt_kvf_limits_rsp *rsp; + + rsp = (struct otx2_cpt_kvf_limits_rsp *) + otx2_mbox_alloc_msg(&cptpf->vfpf_mbox, vf->vf_id, sizeof(*rsp)); + if (!rsp) + return -enomem; + + rsp->hdr.id = mbox_msg_get_kvf_limits; + rsp->hdr.sig = otx2_mbox_rsp_sig; + rsp->hdr.pcifunc = req->pcifunc; + rsp->kvf_limits = cptpf->kvf_limits; + + return 0; +} + + case mbox_msg_get_kvf_limits: + err = handle_msg_kvf_limits(cptpf, vf, req); + break; diff --git 
a/drivers/crypto/marvell/octeontx2/otx2_cptvf.h b/drivers/crypto/marvell/octeontx2/otx2_cptvf.h --- a/drivers/crypto/marvell/octeontx2/otx2_cptvf.h +++ b/drivers/crypto/marvell/octeontx2/otx2_cptvf.h +int otx2_cptvf_send_kvf_limits_msg(struct otx2_cptvf_dev *cptvf); diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptvf_main.c b/drivers/crypto/marvell/octeontx2/otx2_cptvf_main.c --- a/drivers/crypto/marvell/octeontx2/otx2_cptvf_main.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cptvf_main.c +#include "otx2_cptlf.h" +static void cptlf_work_handler(unsigned long data) +{ + otx2_cpt_post_process((struct otx2_cptlf_wqe *) data); +} + +static void cleanup_tasklet_work(struct otx2_cptlfs_info *lfs) +{ + int i; + + for (i = 0; i < lfs->lfs_num; i++) { + if (!lfs->lf[i].wqe) + continue; + + tasklet_kill(&lfs->lf[i].wqe->work); + kfree(lfs->lf[i].wqe); + lfs->lf[i].wqe = null; + } +} + +static int init_tasklet_work(struct otx2_cptlfs_info *lfs) +{ + struct otx2_cptlf_wqe *wqe; + int i, ret = 0; + + for (i = 0; i < lfs->lfs_num; i++) { + wqe = kzalloc(sizeof(struct otx2_cptlf_wqe), gfp_kernel); + if (!wqe) { + ret = -enomem; + goto cleanup_tasklet; + } + + tasklet_init(&wqe->work, cptlf_work_handler, (u64) wqe); + wqe->lfs = lfs; + wqe->lf_num = i; + lfs->lf[i].wqe = wqe; + } + return 0; + +cleanup_tasklet: + cleanup_tasklet_work(lfs); + return ret; +} + +static void free_pending_queues(struct otx2_cptlfs_info *lfs) +{ + int i; + + for (i = 0; i < lfs->lfs_num; i++) { + kfree(lfs->lf[i].pqueue.head); + lfs->lf[i].pqueue.head = null; + } +} + +static int alloc_pending_queues(struct otx2_cptlfs_info *lfs) +{ + int size, ret, i; + + if (!lfs->lfs_num) + return -einval; + + for (i = 0; i < lfs->lfs_num; i++) { + lfs->lf[i].pqueue.qlen = otx2_cpt_inst_qlen_msgs; + size = lfs->lf[i].pqueue.qlen * + sizeof(struct otx2_cpt_pending_entry); + + lfs->lf[i].pqueue.head = kzalloc(size, gfp_kernel); + if (!lfs->lf[i].pqueue.head) { + ret = -enomem; + goto error; + } + + /* initialize 
spin lock */ + spin_lock_init(&lfs->lf[i].pqueue.lock); + } + return 0; + +error: + free_pending_queues(lfs); + return ret; +} + +static void lf_sw_cleanup(struct otx2_cptlfs_info *lfs) +{ + cleanup_tasklet_work(lfs); + free_pending_queues(lfs); +} + +static int lf_sw_init(struct otx2_cptlfs_info *lfs) +{ + int ret; + + ret = alloc_pending_queues(lfs); + if (ret) { + dev_err(&lfs->pdev->dev, + "allocating pending queues failed "); + return ret; + } + ret = init_tasklet_work(lfs); + if (ret) { + dev_err(&lfs->pdev->dev, + "tasklet work init failed "); + goto pending_queues_free; + } + return 0; + +pending_queues_free: + free_pending_queues(lfs); + return ret; +} + +static void cptvf_lf_shutdown(struct otx2_cptlfs_info *lfs) +{ + atomic_set(&lfs->state, otx2_cptlf_in_reset); + + /* remove interrupts affinity */ + otx2_cptlf_free_irqs_affinity(lfs); + /* disable instruction queue */ + otx2_cptlf_disable_iqueues(lfs); + /* unregister lfs interrupts */ + otx2_cptlf_unregister_interrupts(lfs); + /* cleanup lfs software side */ + lf_sw_cleanup(lfs); + /* send request to detach lfs */ + otx2_cpt_detach_rsrcs_msg(lfs); +} + +static int cptvf_lf_init(struct otx2_cptvf_dev *cptvf) +{ + struct otx2_cptlfs_info *lfs = &cptvf->lfs; + struct device *dev = &cptvf->pdev->dev; + int ret, lfs_num; + u8 eng_grp_msk; + + /* get engine group number for symmetric crypto */ + cptvf->lfs.kcrypto_eng_grp_num = otx2_cpt_invalid_crypto_eng_grp; + ret = otx2_cptvf_send_eng_grp_num_msg(cptvf, otx2_cpt_se_types); + if (ret) + return ret; + + if (cptvf->lfs.kcrypto_eng_grp_num == otx2_cpt_invalid_crypto_eng_grp) { + dev_err(dev, "engine group for kernel crypto not available "); + ret = -enoent; + return ret; + } + eng_grp_msk = 1 << cptvf->lfs.kcrypto_eng_grp_num; + + ret = otx2_cptvf_send_kvf_limits_msg(cptvf); + if (ret) + return ret; + + lfs->reg_base = cptvf->reg_base; + lfs->pdev = cptvf->pdev; + lfs->mbox = &cptvf->pfvf_mbox; + + lfs_num = cptvf->lfs.kvf_limits ? 
cptvf->lfs.kvf_limits : + num_online_cpus(); + ret = otx2_cptlf_init(lfs, eng_grp_msk, otx2_cpt_queue_hi_prio, + lfs_num); + if (ret) + return ret; + + /* get msix offsets for attached lfs */ + ret = otx2_cpt_msix_offset_msg(lfs); + if (ret) + goto cleanup_lf; + + /* initialize lfs software side */ + ret = lf_sw_init(lfs); + if (ret) + goto cleanup_lf; + + /* register lfs interrupts */ + ret = otx2_cptlf_register_interrupts(lfs); + if (ret) + goto cleanup_lf_sw; + + /* set interrupts affinity */ + ret = otx2_cptlf_set_irqs_affinity(lfs); + if (ret) + goto unregister_intr; + + atomic_set(&lfs->state, otx2_cptlf_started); + + return 0; + +unregister_intr: + otx2_cptlf_unregister_interrupts(lfs); +cleanup_lf_sw: + lf_sw_cleanup(lfs); +cleanup_lf: + otx2_cptlf_shutdown(lfs); + + return ret; +} + + /* initialize cpt lfs */ + ret = cptvf_lf_init(cptvf); + if (ret) + goto unregister_interrupts; + +unregister_interrupts: + cptvf_disable_pfvf_mbox_intrs(cptvf); + cptvf_lf_shutdown(&cptvf->lfs); diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptvf_mbox.c b/drivers/crypto/marvell/octeontx2/otx2_cptvf_mbox.c --- a/drivers/crypto/marvell/octeontx2/otx2_cptvf_mbox.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cptvf_mbox.c + struct otx2_cpt_kvf_limits_rsp *rsp_limits; + struct otx2_cpt_egrp_num_rsp *rsp_grp; + case mbox_msg_get_eng_grp_num: + rsp_grp = (struct otx2_cpt_egrp_num_rsp *) msg; + cptvf->lfs.kcrypto_eng_grp_num = rsp_grp->eng_grp_num; + break; + case mbox_msg_get_kvf_limits: + rsp_limits = (struct otx2_cpt_kvf_limits_rsp *) msg; + cptvf->lfs.kvf_limits = rsp_limits->kvf_limits; + break; + +int otx2_cptvf_send_eng_grp_num_msg(struct otx2_cptvf_dev *cptvf, int eng_type) +{ + struct otx2_mbox *mbox = &cptvf->pfvf_mbox; + struct pci_dev *pdev = cptvf->pdev; + struct otx2_cpt_egrp_num_msg *req; + + req = (struct otx2_cpt_egrp_num_msg *) + otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*req), + sizeof(struct otx2_cpt_egrp_num_rsp)); + if (req == null) { + dev_err(&pdev->dev, 
"rvu mbox failed to get message. "); + return -efault; + } + req->hdr.id = mbox_msg_get_eng_grp_num; + req->hdr.sig = otx2_mbox_req_sig; + req->hdr.pcifunc = otx2_cpt_rvu_pffunc(cptvf->vf_id, 0); + req->eng_type = eng_type; + + return otx2_cpt_send_mbox_msg(mbox, pdev); +} + +int otx2_cptvf_send_kvf_limits_msg(struct otx2_cptvf_dev *cptvf) +{ + struct otx2_mbox *mbox = &cptvf->pfvf_mbox; + struct pci_dev *pdev = cptvf->pdev; + struct mbox_msghdr *req; + int ret; + + req = (struct mbox_msghdr *) + otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*req), + sizeof(struct otx2_cpt_kvf_limits_rsp)); + if (req == null) { + dev_err(&pdev->dev, "rvu mbox failed to get message. "); + return -efault; + } + req->id = mbox_msg_get_kvf_limits; + req->sig = otx2_mbox_req_sig; + req->pcifunc = otx2_cpt_rvu_pffunc(cptvf->vf_id, 0); + + ret = otx2_cpt_send_mbox_msg(mbox, pdev); + + return ret; +} diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptvf_reqmgr.c b/drivers/crypto/marvell/octeontx2/otx2_cptvf_reqmgr.c --- /dev/null +++ b/drivers/crypto/marvell/octeontx2/otx2_cptvf_reqmgr.c +// spdx-license-identifier: gpl-2.0-only +/* copyright (c) 2020 marvell. 
*/ + +#include "otx2_cptvf.h" +#include "otx2_cpt_common.h" + +/* sg list header size in bytes */ +#define sg_list_hdr_size 8 + +/* default timeout when waiting for free pending entry in us */ +#define cpt_pentry_timeout 1000 +#define cpt_pentry_step 50 + +/* default threshold for stopping and resuming sender requests */ +#define cpt_iq_stop_margin 128 +#define cpt_iq_resume_margin 512 + +/* default command timeout in seconds */ +#define cpt_command_timeout 4 +#define cpt_time_in_reset_count 5 + +static void otx2_cpt_dump_sg_list(struct pci_dev *pdev, + struct otx2_cpt_req_info *req) +{ + int i; + + pr_debug("gather list size %d ", req->in_cnt); + for (i = 0; i < req->in_cnt; i++) { + pr_debug("buffer %d size %d, vptr 0x%p, dmaptr 0x%p ", i, + req->in[i].size, req->in[i].vptr, + (void *) req->in[i].dma_addr); + pr_debug("buffer hexdump (%d bytes) ", + req->in[i].size); + print_hex_dump_debug("", dump_prefix_none, 16, 1, + req->in[i].vptr, req->in[i].size, false); + } + pr_debug("scatter list size %d ", req->out_cnt); + for (i = 0; i < req->out_cnt; i++) { + pr_debug("buffer %d size %d, vptr 0x%p, dmaptr 0x%p ", i, + req->out[i].size, req->out[i].vptr, + (void *) req->out[i].dma_addr); + pr_debug("buffer hexdump (%d bytes) ", req->out[i].size); + print_hex_dump_debug("", dump_prefix_none, 16, 1, + req->out[i].vptr, req->out[i].size, false); + } +} + +static inline struct otx2_cpt_pending_entry *get_free_pending_entry( + struct otx2_cpt_pending_queue *q, + int qlen) +{ + struct otx2_cpt_pending_entry *ent = null; + + ent = &q->head[q->rear]; + if (unlikely(ent->busy)) + return null; + + q->rear++; + if (unlikely(q->rear == qlen)) + q->rear = 0; + + return ent; +} + +static inline u32 modulo_inc(u32 index, u32 length, u32 inc) +{ + if (warn_on(inc > length)) + inc = length; + + index += inc; + if (unlikely(index >= length)) + index -= length; + + return index; +} + +static inline void free_pentry(struct otx2_cpt_pending_entry *pentry) +{ + pentry->completion_addr = 
null; + pentry->info = null; + pentry->callback = null; + pentry->areq = null; + pentry->resume_sender = false; + pentry->busy = false; +} + +static inline int setup_sgio_components(struct pci_dev *pdev, + struct otx2_cpt_buf_ptr *list, + int buf_count, u8 *buffer) +{ + struct otx2_cpt_sglist_component *sg_ptr = null; + int ret = 0, i, j; + int components; + + if (unlikely(!list)) { + dev_err(&pdev->dev, "input list pointer is null "); + return -efault; + } + + for (i = 0; i < buf_count; i++) { + if (unlikely(!list[i].vptr)) + continue; + list[i].dma_addr = dma_map_single(&pdev->dev, list[i].vptr, + list[i].size, + dma_bidirectional); + if (unlikely(dma_mapping_error(&pdev->dev, list[i].dma_addr))) { + dev_err(&pdev->dev, "dma mapping failed "); + ret = -eio; + goto sg_cleanup; + } + } + components = buf_count / 4; + sg_ptr = (struct otx2_cpt_sglist_component *)buffer; + for (i = 0; i < components; i++) { + sg_ptr->len0 = cpu_to_be16(list[i * 4 + 0].size); + sg_ptr->len1 = cpu_to_be16(list[i * 4 + 1].size); + sg_ptr->len2 = cpu_to_be16(list[i * 4 + 2].size); + sg_ptr->len3 = cpu_to_be16(list[i * 4 + 3].size); + sg_ptr->ptr0 = cpu_to_be64(list[i * 4 + 0].dma_addr); + sg_ptr->ptr1 = cpu_to_be64(list[i * 4 + 1].dma_addr); + sg_ptr->ptr2 = cpu_to_be64(list[i * 4 + 2].dma_addr); + sg_ptr->ptr3 = cpu_to_be64(list[i * 4 + 3].dma_addr); + sg_ptr++; + } + components = buf_count % 4; + + switch (components) { + case 3: + sg_ptr->len2 = cpu_to_be16(list[i * 4 + 2].size); + sg_ptr->ptr2 = cpu_to_be64(list[i * 4 + 2].dma_addr); + fallthrough; + case 2: + sg_ptr->len1 = cpu_to_be16(list[i * 4 + 1].size); + sg_ptr->ptr1 = cpu_to_be64(list[i * 4 + 1].dma_addr); + fallthrough; + case 1: + sg_ptr->len0 = cpu_to_be16(list[i * 4 + 0].size); + sg_ptr->ptr0 = cpu_to_be64(list[i * 4 + 0].dma_addr); + break; + default: + break; + } + return ret; + +sg_cleanup: + for (j = 0; j < i; j++) { + if (list[j].dma_addr) { + dma_unmap_single(&pdev->dev, list[j].dma_addr, + list[j].size, 
dma_bidirectional); + } + + list[j].dma_addr = 0; + } + return ret; +} + +static inline struct otx2_cpt_inst_info *info_create(struct pci_dev *pdev, + struct otx2_cpt_req_info *req, + gfp_t gfp) +{ + int align = otx2_cpt_dma_minalign; + struct otx2_cpt_inst_info *info; + u32 dlen, align_dlen, info_len; + u16 g_sz_bytes, s_sz_bytes; + u32 total_mem_len; + + if (unlikely(req->in_cnt > otx2_cpt_max_sg_in_cnt || + req->out_cnt > otx2_cpt_max_sg_out_cnt)) { + dev_err(&pdev->dev, "error too many sg components "); + return null; + } + + g_sz_bytes = ((req->in_cnt + 3) / 4) * + sizeof(struct otx2_cpt_sglist_component); + s_sz_bytes = ((req->out_cnt + 3) / 4) * + sizeof(struct otx2_cpt_sglist_component); + + dlen = g_sz_bytes + s_sz_bytes + sg_list_hdr_size; + align_dlen = align(dlen, align); + info_len = align(sizeof(*info), align); + total_mem_len = align_dlen + info_len + sizeof(union otx2_cpt_res_s); + + info = kzalloc(total_mem_len, gfp); + if (unlikely(!info)) + return null; + + info->dlen = dlen; + info->in_buffer = (u8 *)info + info_len; + + ((u16 *)info->in_buffer)[0] = req->out_cnt; + ((u16 *)info->in_buffer)[1] = req->in_cnt; + ((u16 *)info->in_buffer)[2] = 0; + ((u16 *)info->in_buffer)[3] = 0; + cpu_to_be64s((u64 *)info->in_buffer); + + /* setup gather (input) components */ + if (setup_sgio_components(pdev, req->in, req->in_cnt, + &info->in_buffer[8])) { + dev_err(&pdev->dev, "failed to setup gather list "); + goto destroy_info; + } + + if (setup_sgio_components(pdev, req->out, req->out_cnt, + &info->in_buffer[8 + g_sz_bytes])) { + dev_err(&pdev->dev, "failed to setup scatter list "); + goto destroy_info; + } + + info->dma_len = total_mem_len - info_len; + info->dptr_baddr = dma_map_single(&pdev->dev, info->in_buffer, + info->dma_len, dma_bidirectional); + if (unlikely(dma_mapping_error(&pdev->dev, info->dptr_baddr))) { + dev_err(&pdev->dev, "dma mapping failed for cpt req "); + goto destroy_info; + } + /* + * get buffer for union otx2_cpt_res_s response + * 
structure and its physical address + */ + info->completion_addr = info->in_buffer + align_dlen; + info->comp_baddr = info->dptr_baddr + align_dlen; + + return info; + +destroy_info: + otx2_cpt_info_destroy(pdev, info); + return null; +} + +static int process_request(struct pci_dev *pdev, struct otx2_cpt_req_info *req, + struct otx2_cpt_pending_queue *pqueue, + struct otx2_cptlf_info *lf) +{ + struct otx2_cptvf_request *cpt_req = &req->req; + struct otx2_cpt_pending_entry *pentry = null; + union otx2_cpt_ctrl_info *ctrl = &req->ctrl; + struct otx2_cpt_inst_info *info = null; + union otx2_cpt_res_s *result = null; + struct otx2_cpt_iq_command iq_cmd; + union otx2_cpt_inst_s cptinst; + int retry, ret = 0; + u8 resume_sender; + gfp_t gfp; + + gfp = (req->areq->flags & crypto_tfm_req_may_sleep) ? gfp_kernel : + gfp_atomic; + if (unlikely(!otx2_cptlf_started(lf->lfs))) + return -enodev; + + info = info_create(pdev, req, gfp); + if (unlikely(!info)) { + dev_err(&pdev->dev, "setting up cpt inst info failed"); + return -enomem; + } + cpt_req->dlen = info->dlen; + + result = info->completion_addr; + result->s.compcode = otx2_cpt_completion_code_init; + + spin_lock_bh(&pqueue->lock); + pentry = get_free_pending_entry(pqueue, pqueue->qlen); + retry = cpt_pentry_timeout / cpt_pentry_step; + while (unlikely(!pentry) && retry--) { + spin_unlock_bh(&pqueue->lock); + udelay(cpt_pentry_step); + spin_lock_bh(&pqueue->lock); + pentry = get_free_pending_entry(pqueue, pqueue->qlen); + } + + if (unlikely(!pentry)) { + ret = -enospc; + goto destroy_info; + } + + /* + * check if we are close to filling in entire pending queue, + * if so then tell the sender to stop/sleep by returning -ebusy + * we do it only for context which can sleep (gfp_kernel) + */ + if (gfp == gfp_kernel && + pqueue->pending_count > (pqueue->qlen - cpt_iq_stop_margin)) { + pentry->resume_sender = true; + } else + pentry->resume_sender = false; + resume_sender = pentry->resume_sender; + pqueue->pending_count++; + + 
pentry->completion_addr = info->completion_addr; + pentry->info = info; + pentry->callback = req->callback; + pentry->areq = req->areq; + pentry->busy = true; + info->pentry = pentry; + info->time_in = jiffies; + info->req = req; + + /* fill in the command */ + iq_cmd.cmd.u = 0; + iq_cmd.cmd.s.opcode = cpu_to_be16(cpt_req->opcode.flags); + iq_cmd.cmd.s.param1 = cpu_to_be16(cpt_req->param1); + iq_cmd.cmd.s.param2 = cpu_to_be16(cpt_req->param2); + iq_cmd.cmd.s.dlen = cpu_to_be16(cpt_req->dlen); + + /* 64-bit swap for microcode data reads, not needed for addresses*/ + cpu_to_be64s(&iq_cmd.cmd.u); + iq_cmd.dptr = info->dptr_baddr; + iq_cmd.rptr = 0; + iq_cmd.cptr.u = 0; + iq_cmd.cptr.s.grp = ctrl->s.grp; + + /* fill in the cpt_inst_s type command for hw interpretation */ + otx2_cpt_fill_inst(&cptinst, &iq_cmd, info->comp_baddr); + + /* print debug info if enabled */ + otx2_cpt_dump_sg_list(pdev, req); + pr_debug("cpt_inst_s hexdump (%d bytes) ", otx2_cpt_inst_size); + print_hex_dump_debug("", 0, 16, 1, &cptinst, otx2_cpt_inst_size, false); + pr_debug("dptr hexdump (%d bytes) ", cpt_req->dlen); + print_hex_dump_debug("", 0, 16, 1, info->in_buffer, + cpt_req->dlen, false); + + /* send cpt command */ + otx2_cpt_send_cmd(&cptinst, 1, lf); + + /* + * we allocate and prepare pending queue entry in critical section + * together with submitting cpt instruction to cpt instruction queue + * to make sure that order of cpt requests is the same in both + * pending and instruction queues + */ + spin_unlock_bh(&pqueue->lock); + + ret = resume_sender ? 
-ebusy : -einprogress; + return ret; + +destroy_info: + spin_unlock_bh(&pqueue->lock); + otx2_cpt_info_destroy(pdev, info); + return ret; +} + +int otx2_cpt_do_request(struct pci_dev *pdev, struct otx2_cpt_req_info *req, + int cpu_num) +{ + struct otx2_cptvf_dev *cptvf = pci_get_drvdata(pdev); + struct otx2_cptlfs_info *lfs = &cptvf->lfs; + + return process_request(lfs->pdev, req, &lfs->lf[cpu_num].pqueue, + &lfs->lf[cpu_num]); +} + +static int cpt_process_ccode(struct pci_dev *pdev, + union otx2_cpt_res_s *cpt_status, + struct otx2_cpt_inst_info *info, + u32 *res_code) +{ + u8 uc_ccode = cpt_status->s.uc_compcode; + u8 ccode = cpt_status->s.compcode; + + switch (ccode) { + case otx2_cpt_comp_e_fault: + dev_err(&pdev->dev, + "request failed with dma fault "); + otx2_cpt_dump_sg_list(pdev, info->req); + break; + + case otx2_cpt_comp_e_hwerr: + dev_err(&pdev->dev, + "request failed with hardware error "); + otx2_cpt_dump_sg_list(pdev, info->req); + break; + + case otx2_cpt_comp_e_insterr: + dev_err(&pdev->dev, + "request failed with instruction error "); + otx2_cpt_dump_sg_list(pdev, info->req); + break; + + case otx2_cpt_comp_e_notdone: + /* check for timeout */ + if (time_after_eq(jiffies, info->time_in + + cpt_command_timeout * hz)) + dev_warn(&pdev->dev, + "request timed out 0x%p", info->req); + else if (info->extra_time < cpt_time_in_reset_count) { + info->time_in = jiffies; + info->extra_time++; + } + return 1; + + case otx2_cpt_comp_e_good: + /* + * check microcode completion code, it is only valid + * when completion code is cpt_comp_e::good + */ + if (uc_ccode != otx2_cpt_ucc_success) { + /* + * if requested hmac is truncated and ucode returns + * s/g write length error then we report success + * because ucode writes as many bytes of calculated + * hmac as available in gather buffer and reports + * s/g write length error if number of bytes in gather + * buffer is less than full hmac size. 
+ */ + if (info->req->is_trunc_hmac && + uc_ccode == otx2_cpt_ucc_sg_write_length) { + *res_code = 0; + break; + } + + dev_err(&pdev->dev, + "request failed with software error code 0x%x ", + cpt_status->s.uc_compcode); + otx2_cpt_dump_sg_list(pdev, info->req); + break; + } + /* request has been processed with success */ + *res_code = 0; + break; + + default: + dev_err(&pdev->dev, + "request returned invalid status %d ", ccode); + break; + } + return 0; +} + +static inline void process_pending_queue(struct pci_dev *pdev, + struct otx2_cpt_pending_queue *pqueue) +{ + struct otx2_cpt_pending_entry *resume_pentry = null; + void (*callback)(int status, void *arg, void *req); + struct otx2_cpt_pending_entry *pentry = null; + union otx2_cpt_res_s *cpt_status = null; + struct otx2_cpt_inst_info *info = null; + struct otx2_cpt_req_info *req = null; + struct crypto_async_request *areq; + u32 res_code, resume_index; + + while (1) { + spin_lock_bh(&pqueue->lock); + pentry = &pqueue->head[pqueue->front]; + + if (warn_on(!pentry)) { + spin_unlock_bh(&pqueue->lock); + break; + } + + res_code = -einval; + if (unlikely(!pentry->busy)) { + spin_unlock_bh(&pqueue->lock); + break; + } + + if (unlikely(!pentry->callback)) { + dev_err(&pdev->dev, "callback null "); + goto process_pentry; + } + + info = pentry->info; + if (unlikely(!info)) { + dev_err(&pdev->dev, "pending entry post arg null "); + goto process_pentry; + } + + req = info->req; + if (unlikely(!req)) { + dev_err(&pdev->dev, "request null "); + goto process_pentry; + } + + cpt_status = pentry->completion_addr; + if (unlikely(!cpt_status)) { + dev_err(&pdev->dev, "completion address null "); + goto process_pentry; + } + + if (cpt_process_ccode(pdev, cpt_status, info, &res_code)) { + spin_unlock_bh(&pqueue->lock); + return; + } + info->pdev = pdev; + +process_pentry: + /* + * check if we should inform sending side to resume + * we do it cpt_iq_resume_margin elements in advance before + * pending queue becomes empty + */ + 
resume_index = modulo_inc(pqueue->front, pqueue->qlen, + cpt_iq_resume_margin); + resume_pentry = &pqueue->head[resume_index]; + if (resume_pentry && + resume_pentry->resume_sender) { + resume_pentry->resume_sender = false; + callback = resume_pentry->callback; + areq = resume_pentry->areq; + + if (callback) { + spin_unlock_bh(&pqueue->lock); + + /* + * einprogress is an indication for sending + * side that it can resume sending requests + */ + callback(-einprogress, areq, info); + spin_lock_bh(&pqueue->lock); + } + } + + callback = pentry->callback; + areq = pentry->areq; + free_pentry(pentry); + + pqueue->pending_count--; + pqueue->front = modulo_inc(pqueue->front, pqueue->qlen, 1); + spin_unlock_bh(&pqueue->lock); + + /* + * call callback after current pending entry has been + * processed, we don't do it if the callback pointer is + * invalid. + */ + if (callback) + callback(res_code, areq, info); + } +} + +void otx2_cpt_post_process(struct otx2_cptlf_wqe *wqe) +{ + process_pending_queue(wqe->lfs->pdev, + &wqe->lfs->lf[wqe->lf_num].pqueue); +}
|
Cryptography hardware acceleration
|
8ec8015a316816b07538635fe9c04c35ad63acfc
|
srujana challa
|
drivers
|
crypto
|
marvell, octeontx2
|
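The request manager added by this commit tracks in-flight requests in a fixed-size circular pending queue, advancing the `front`/`rear` indices with a wrap-around helper rather than the `%` operator. The sketch below re-implements that index arithmetic along the lines of the driver's `modulo_inc`; it assumes, as the driver does, that `index` is already within `[0, length)` on entry.

```c
#include <stdint.h>
#include <assert.h>

/* advance a circular-queue index by inc, wrapping at length;
 * the increment is clamped so we never step more than one full lap,
 * and the wrap is a single subtraction instead of a modulo */
static uint32_t modulo_inc(uint32_t index, uint32_t length, uint32_t inc)
{
	if (inc > length)
		inc = length;
	index += inc;
	if (index >= length)
		index -= length;
	return index;
}
```

This is why the driver can use it both for the usual one-step advance (`modulo_inc(pqueue->front, pqueue->qlen, 1)`) and for peeking `cpt_iq_resume_margin` entries ahead when deciding whether to wake a stopped sender.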
crypto: octeontx2 - register with linux crypto framework
|
cpt offload module utilises the linux crypto framework to offload crypto processing. this patch registers supported algorithms by calling registration functions provided by the kernel crypto api.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add support for marvell octeontx2 cpt engine
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['octeontx2']
|
['h', 'kconfig', 'c', 'makefile']
| 7
| 1,961
| 2
|
- aes block cipher in cbc,ecb and xts mode. - 3des block cipher in cbc and ecb mode. - aead algorithms. --- diff --git a/drivers/crypto/marvell/kconfig b/drivers/crypto/marvell/kconfig --- a/drivers/crypto/marvell/kconfig +++ b/drivers/crypto/marvell/kconfig + depends on crypto_lib_aes + select crypto_skcipher + select crypto_hash + select crypto_aead diff --git a/drivers/crypto/marvell/octeontx2/makefile b/drivers/crypto/marvell/octeontx2/makefile --- a/drivers/crypto/marvell/octeontx2/makefile +++ b/drivers/crypto/marvell/octeontx2/makefile - otx2_cpt_mbox_common.o otx2_cptvf_reqmgr.o + otx2_cpt_mbox_common.o otx2_cptvf_reqmgr.o \ + otx2_cptvf_algs.o diff --git a/drivers/crypto/marvell/octeontx2/otx2_cpt_reqmgr.h b/drivers/crypto/marvell/octeontx2/otx2_cpt_reqmgr.h --- a/drivers/crypto/marvell/octeontx2/otx2_cpt_reqmgr.h +++ b/drivers/crypto/marvell/octeontx2/otx2_cpt_reqmgr.h +int otx2_cpt_get_kcrypto_eng_grp_num(struct pci_dev *pdev); diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c b/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c --- /dev/null +++ b/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c +// spdx-license-identifier: gpl-2.0-only +/* copyright (c) 2020 marvell. 
*/ + +#include <crypto/aes.h> +#include <crypto/authenc.h> +#include <crypto/cryptd.h> +#include <crypto/des.h> +#include <crypto/internal/aead.h> +#include <crypto/sha1.h> +#include <crypto/sha2.h> +#include <crypto/xts.h> +#include <crypto/gcm.h> +#include <crypto/scatterwalk.h> +#include <linux/rtnetlink.h> +#include <linux/sort.h> +#include <linux/module.h> +#include "otx2_cptvf.h" +#include "otx2_cptvf_algs.h" +#include "otx2_cpt_reqmgr.h" + +/* size of salt in aes gcm mode */ +#define aes_gcm_salt_size 4 +/* size of iv in aes gcm mode */ +#define aes_gcm_iv_size 8 +/* size of icv (integrity check value) in aes gcm mode */ +#define aes_gcm_icv_size 16 +/* offset of iv in aes gcm mode */ +#define aes_gcm_iv_offset 8 +#define control_word_len 8 +#define key2_offset 48 +#define dma_mode_flag(dma_mode) \ + (((dma_mode) == otx2_cpt_dma_mode_sg) ? (1 << 7) : 0) + +/* truncated sha digest size */ +#define sha1_trunc_digest_size 12 +#define sha256_trunc_digest_size 16 +#define sha384_trunc_digest_size 24 +#define sha512_trunc_digest_size 32 + +static define_mutex(mutex); +static int is_crypto_registered; + +struct cpt_device_desc { + struct pci_dev *dev; + int num_queues; +}; + +struct cpt_device_table { + atomic_t count; + struct cpt_device_desc desc[otx2_cpt_max_lfs_num]; +}; + +static struct cpt_device_table se_devices = { + .count = atomic_init(0) +}; + +static inline int get_se_device(struct pci_dev **pdev, int *cpu_num) +{ + int count; + + count = atomic_read(&se_devices.count); + if (count < 1) + return -enodev; + + *cpu_num = get_cpu(); + /* + * on octeontx2 platform cpt instruction queue is bound to each + * local function lf, in turn lfs can be attached to pf + * or vf therefore we always use first device. we get maximum + * performance if one cpt queue is available for each cpu + * otherwise cpt queues need to be shared between cpus. 
+ */ + if (*cpu_num >= se_devices.desc[0].num_queues) + *cpu_num %= se_devices.desc[0].num_queues; + *pdev = se_devices.desc[0].dev; + + put_cpu(); + + return 0; +} + +static inline int validate_hmac_cipher_null(struct otx2_cpt_req_info *cpt_req) +{ + struct otx2_cpt_req_ctx *rctx; + struct aead_request *req; + struct crypto_aead *tfm; + + req = container_of(cpt_req->areq, struct aead_request, base); + tfm = crypto_aead_reqtfm(req); + rctx = aead_request_ctx(req); + if (memcmp(rctx->fctx.hmac.s.hmac_calc, + rctx->fctx.hmac.s.hmac_recv, + crypto_aead_authsize(tfm)) != 0) + return -ebadmsg; + + return 0; +} + +static void otx2_cpt_aead_callback(int status, void *arg1, void *arg2) +{ + struct otx2_cpt_inst_info *inst_info = arg2; + struct crypto_async_request *areq = arg1; + struct otx2_cpt_req_info *cpt_req; + struct pci_dev *pdev; + + if (inst_info) { + cpt_req = inst_info->req; + if (!status) { + /* + * when selected cipher is null we need to manually + * verify whether calculated hmac value matches + * received hmac value + */ + if (cpt_req->req_type == + otx2_cpt_aead_enc_dec_null_req && + !cpt_req->is_enc) + status = validate_hmac_cipher_null(cpt_req); + } + pdev = inst_info->pdev; + otx2_cpt_info_destroy(pdev, inst_info); + } + if (areq) + areq->complete(areq, status); +} + +static void output_iv_copyback(struct crypto_async_request *areq) +{ + struct otx2_cpt_req_info *req_info; + struct otx2_cpt_req_ctx *rctx; + struct skcipher_request *sreq; + struct crypto_skcipher *stfm; + struct otx2_cpt_enc_ctx *ctx; + u32 start, ivsize; + + sreq = container_of(areq, struct skcipher_request, base); + stfm = crypto_skcipher_reqtfm(sreq); + ctx = crypto_skcipher_ctx(stfm); + if (ctx->cipher_type == otx2_cpt_aes_cbc || + ctx->cipher_type == otx2_cpt_des3_cbc) { + rctx = skcipher_request_ctx(sreq); + req_info = &rctx->cpt_req; + ivsize = crypto_skcipher_ivsize(stfm); + start = sreq->cryptlen - ivsize; + + if (req_info->is_enc) { + scatterwalk_map_and_copy(sreq->iv, 
sreq->dst, start, + ivsize, 0); + } else { + if (sreq->src != sreq->dst) { + scatterwalk_map_and_copy(sreq->iv, sreq->src, + start, ivsize, 0); + } else { + memcpy(sreq->iv, req_info->iv_out, ivsize); + kfree(req_info->iv_out); + } + } + } +} + +static void otx2_cpt_skcipher_callback(int status, void *arg1, void *arg2) +{ + struct otx2_cpt_inst_info *inst_info = arg2; + struct crypto_async_request *areq = arg1; + struct pci_dev *pdev; + + if (areq) { + if (!status) + output_iv_copyback(areq); + if (inst_info) { + pdev = inst_info->pdev; + otx2_cpt_info_destroy(pdev, inst_info); + } + areq->complete(areq, status); + } +} + +static inline void update_input_data(struct otx2_cpt_req_info *req_info, + struct scatterlist *inp_sg, + u32 nbytes, u32 *argcnt) +{ + req_info->req.dlen += nbytes; + + while (nbytes) { + u32 len = (nbytes < inp_sg->length) ? nbytes : inp_sg->length; + u8 *ptr = sg_virt(inp_sg); + + req_info->in[*argcnt].vptr = (void *)ptr; + req_info->in[*argcnt].size = len; + nbytes -= len; + ++(*argcnt); + inp_sg = sg_next(inp_sg); + } +} + +static inline void update_output_data(struct otx2_cpt_req_info *req_info, + struct scatterlist *outp_sg, + u32 offset, u32 nbytes, u32 *argcnt) +{ + u32 len, sg_len; + u8 *ptr; + + req_info->rlen += nbytes; + + while (nbytes) { + sg_len = outp_sg->length - offset; + len = (nbytes < sg_len) ? 
nbytes : sg_len; + ptr = sg_virt(outp_sg); + + req_info->out[*argcnt].vptr = (void *) (ptr + offset); + req_info->out[*argcnt].size = len; + nbytes -= len; + ++(*argcnt); + offset = 0; + outp_sg = sg_next(outp_sg); + } +} + +static inline int create_ctx_hdr(struct skcipher_request *req, u32 enc, + u32 *argcnt) +{ + struct crypto_skcipher *stfm = crypto_skcipher_reqtfm(req); + struct otx2_cpt_req_ctx *rctx = skcipher_request_ctx(req); + struct otx2_cpt_enc_ctx *ctx = crypto_skcipher_ctx(stfm); + struct otx2_cpt_req_info *req_info = &rctx->cpt_req; + struct otx2_cpt_fc_ctx *fctx = &rctx->fctx; + int ivsize = crypto_skcipher_ivsize(stfm); + u32 start = req->cryptlen - ivsize; + gfp_t flags; + + flags = (req->base.flags & crypto_tfm_req_may_sleep) ? + gfp_kernel : gfp_atomic; + req_info->ctrl.s.dma_mode = otx2_cpt_dma_mode_sg; + req_info->ctrl.s.se_req = 1; + + req_info->req.opcode.s.major = otx2_cpt_major_op_fc | + dma_mode_flag(otx2_cpt_dma_mode_sg); + if (enc) { + req_info->req.opcode.s.minor = 2; + } else { + req_info->req.opcode.s.minor = 3; + if ((ctx->cipher_type == otx2_cpt_aes_cbc || + ctx->cipher_type == otx2_cpt_des3_cbc) && + req->src == req->dst) { + req_info->iv_out = kmalloc(ivsize, flags); + if (!req_info->iv_out) + return -enomem; + + scatterwalk_map_and_copy(req_info->iv_out, req->src, + start, ivsize, 0); + } + } + /* encryption data length */ + req_info->req.param1 = req->cryptlen; + /* authentication data length */ + req_info->req.param2 = 0; + + fctx->enc.enc_ctrl.e.enc_cipher = ctx->cipher_type; + fctx->enc.enc_ctrl.e.aes_key = ctx->key_type; + fctx->enc.enc_ctrl.e.iv_source = otx2_cpt_from_cptr; + + if (ctx->cipher_type == otx2_cpt_aes_xts) + memcpy(fctx->enc.encr_key, ctx->enc_key, ctx->key_len * 2); + else + memcpy(fctx->enc.encr_key, ctx->enc_key, ctx->key_len); + + memcpy(fctx->enc.encr_iv, req->iv, crypto_skcipher_ivsize(stfm)); + + cpu_to_be64s(&fctx->enc.enc_ctrl.u); + + /* + * storing packet data information in offset + * control word 
first 8 bytes + */ + req_info->in[*argcnt].vptr = (u8 *)&rctx->ctrl_word; + req_info->in[*argcnt].size = control_word_len; + req_info->req.dlen += control_word_len; + ++(*argcnt); + + req_info->in[*argcnt].vptr = (u8 *)fctx; + req_info->in[*argcnt].size = sizeof(struct otx2_cpt_fc_ctx); + req_info->req.dlen += sizeof(struct otx2_cpt_fc_ctx); + + ++(*argcnt); + + return 0; +} + +static inline int create_input_list(struct skcipher_request *req, u32 enc, + u32 enc_iv_len) +{ + struct otx2_cpt_req_ctx *rctx = skcipher_request_ctx(req); + struct otx2_cpt_req_info *req_info = &rctx->cpt_req; + u32 argcnt = 0; + int ret; + + ret = create_ctx_hdr(req, enc, &argcnt); + if (ret) + return ret; + + update_input_data(req_info, req->src, req->cryptlen, &argcnt); + req_info->in_cnt = argcnt; + + return 0; +} + +static inline void create_output_list(struct skcipher_request *req, + u32 enc_iv_len) +{ + struct otx2_cpt_req_ctx *rctx = skcipher_request_ctx(req); + struct otx2_cpt_req_info *req_info = &rctx->cpt_req; + u32 argcnt = 0; + + /* + * output buffer processing + * aes encryption/decryption output would be + * received in the following format + * + * ------iv--------|------encrypted/decrypted data-----| + * [ 16 bytes/ [ request enc/dec/ data len aes cbc ] + */ + update_output_data(req_info, req->dst, 0, req->cryptlen, &argcnt); + req_info->out_cnt = argcnt; +} + +static int skcipher_do_fallback(struct skcipher_request *req, bool is_enc) +{ + struct crypto_skcipher *stfm = crypto_skcipher_reqtfm(req); + struct otx2_cpt_req_ctx *rctx = skcipher_request_ctx(req); + struct otx2_cpt_enc_ctx *ctx = crypto_skcipher_ctx(stfm); + int ret; + + if (ctx->fbk_cipher) { + skcipher_request_set_tfm(&rctx->sk_fbk_req, ctx->fbk_cipher); + skcipher_request_set_callback(&rctx->sk_fbk_req, + req->base.flags, + req->base.complete, + req->base.data); + skcipher_request_set_crypt(&rctx->sk_fbk_req, req->src, + req->dst, req->cryptlen, req->iv); + ret = is_enc ? 
crypto_skcipher_encrypt(&rctx->sk_fbk_req) : + crypto_skcipher_decrypt(&rctx->sk_fbk_req); + } else { + ret = -einval; + } + return ret; +} + +static inline int cpt_enc_dec(struct skcipher_request *req, u32 enc) +{ + struct crypto_skcipher *stfm = crypto_skcipher_reqtfm(req); + struct otx2_cpt_req_ctx *rctx = skcipher_request_ctx(req); + struct otx2_cpt_enc_ctx *ctx = crypto_skcipher_ctx(stfm); + struct otx2_cpt_req_info *req_info = &rctx->cpt_req; + u32 enc_iv_len = crypto_skcipher_ivsize(stfm); + struct pci_dev *pdev; + int status, cpu_num; + + if (req->cryptlen == 0) + return 0; + + if (!is_aligned(req->cryptlen, ctx->enc_align_len)) + return -einval; + + if (req->cryptlen > otx2_cpt_max_req_size) + return skcipher_do_fallback(req, enc); + + /* clear control words */ + rctx->ctrl_word.flags = 0; + rctx->fctx.enc.enc_ctrl.u = 0; + + status = create_input_list(req, enc, enc_iv_len); + if (status) + return status; + create_output_list(req, enc_iv_len); + + status = get_se_device(&pdev, &cpu_num); + if (status) + return status; + + req_info->callback = otx2_cpt_skcipher_callback; + req_info->areq = &req->base; + req_info->req_type = otx2_cpt_enc_dec_req; + req_info->is_enc = enc; + req_info->is_trunc_hmac = false; + req_info->ctrl.s.grp = otx2_cpt_get_kcrypto_eng_grp_num(pdev); + + /* + * we perform an asynchronous send and once + * the request is completed the driver would + * intimate through registered call back functions + */ + status = otx2_cpt_do_request(pdev, req_info, cpu_num); + + return status; +} + +static int otx2_cpt_skcipher_encrypt(struct skcipher_request *req) +{ + return cpt_enc_dec(req, true); +} + +static int otx2_cpt_skcipher_decrypt(struct skcipher_request *req) +{ + return cpt_enc_dec(req, false); +} + +static int otx2_cpt_skcipher_xts_setkey(struct crypto_skcipher *tfm, + const u8 *key, u32 keylen) +{ + struct otx2_cpt_enc_ctx *ctx = crypto_skcipher_ctx(tfm); + const u8 *key2 = key + (keylen / 2); + const u8 *key1 = key; + int ret; + + ret = 
xts_check_key(crypto_skcipher_tfm(tfm), key, keylen); + if (ret) + return ret; + ctx->key_len = keylen; + ctx->enc_align_len = 1; + memcpy(ctx->enc_key, key1, keylen / 2); + memcpy(ctx->enc_key + key2_offset, key2, keylen / 2); + ctx->cipher_type = otx2_cpt_aes_xts; + switch (ctx->key_len) { + case 2 * aes_keysize_128: + ctx->key_type = otx2_cpt_aes_128_bit; + break; + case 2 * aes_keysize_192: + ctx->key_type = otx2_cpt_aes_192_bit; + break; + case 2 * aes_keysize_256: + ctx->key_type = otx2_cpt_aes_256_bit; + break; + default: + return -einval; + } + return crypto_skcipher_setkey(ctx->fbk_cipher, key, keylen); +} + +static int cpt_des_setkey(struct crypto_skcipher *tfm, const u8 *key, + u32 keylen, u8 cipher_type) +{ + struct otx2_cpt_enc_ctx *ctx = crypto_skcipher_ctx(tfm); + + if (keylen != des3_ede_key_size) + return -einval; + + ctx->key_len = keylen; + ctx->cipher_type = cipher_type; + ctx->enc_align_len = 8; + + memcpy(ctx->enc_key, key, keylen); + + return crypto_skcipher_setkey(ctx->fbk_cipher, key, keylen); +} + +static int cpt_aes_setkey(struct crypto_skcipher *tfm, const u8 *key, + u32 keylen, u8 cipher_type) +{ + struct otx2_cpt_enc_ctx *ctx = crypto_skcipher_ctx(tfm); + + switch (keylen) { + case aes_keysize_128: + ctx->key_type = otx2_cpt_aes_128_bit; + break; + case aes_keysize_192: + ctx->key_type = otx2_cpt_aes_192_bit; + break; + case aes_keysize_256: + ctx->key_type = otx2_cpt_aes_256_bit; + break; + default: + return -einval; + } + if (cipher_type == otx2_cpt_aes_cbc || cipher_type == otx2_cpt_aes_ecb) + ctx->enc_align_len = 16; + else + ctx->enc_align_len = 1; + + ctx->key_len = keylen; + ctx->cipher_type = cipher_type; + + memcpy(ctx->enc_key, key, keylen); + + return crypto_skcipher_setkey(ctx->fbk_cipher, key, keylen); +} + +static int otx2_cpt_skcipher_cbc_aes_setkey(struct crypto_skcipher *tfm, + const u8 *key, u32 keylen) +{ + return cpt_aes_setkey(tfm, key, keylen, otx2_cpt_aes_cbc); +} + +static int 
otx2_cpt_skcipher_ecb_aes_setkey(struct crypto_skcipher *tfm, + const u8 *key, u32 keylen) +{ + return cpt_aes_setkey(tfm, key, keylen, otx2_cpt_aes_ecb); +} + +static int otx2_cpt_skcipher_cbc_des3_setkey(struct crypto_skcipher *tfm, + const u8 *key, u32 keylen) +{ + return cpt_des_setkey(tfm, key, keylen, otx2_cpt_des3_cbc); +} + +static int otx2_cpt_skcipher_ecb_des3_setkey(struct crypto_skcipher *tfm, + const u8 *key, u32 keylen) +{ + return cpt_des_setkey(tfm, key, keylen, otx2_cpt_des3_ecb); +} + +static int cpt_skcipher_fallback_init(struct otx2_cpt_enc_ctx *ctx, + struct crypto_alg *alg) +{ + if (alg->cra_flags & crypto_alg_need_fallback) { + ctx->fbk_cipher = + crypto_alloc_skcipher(alg->cra_name, 0, + crypto_alg_async | + crypto_alg_need_fallback); + if (is_err(ctx->fbk_cipher)) { + pr_err("%s() failed to allocate fallback for %s ", + __func__, alg->cra_name); + return ptr_err(ctx->fbk_cipher); + } + } + return 0; +} + +static int otx2_cpt_enc_dec_init(struct crypto_skcipher *stfm) +{ + struct otx2_cpt_enc_ctx *ctx = crypto_skcipher_ctx(stfm); + struct crypto_tfm *tfm = crypto_skcipher_tfm(stfm); + struct crypto_alg *alg = tfm->__crt_alg; + + memset(ctx, 0, sizeof(*ctx)); + /* + * additional memory for skcipher_request is + * allocated since the cryptd daemon uses + * this memory for request_ctx information + */ + crypto_skcipher_set_reqsize(stfm, sizeof(struct otx2_cpt_req_ctx) + + sizeof(struct skcipher_request)); + + return cpt_skcipher_fallback_init(ctx, alg); +} + +static void otx2_cpt_skcipher_exit(struct crypto_skcipher *tfm) +{ + struct otx2_cpt_enc_ctx *ctx = crypto_skcipher_ctx(tfm); + + if (ctx->fbk_cipher) { + crypto_free_skcipher(ctx->fbk_cipher); + ctx->fbk_cipher = null; + } +} + +static int cpt_aead_fallback_init(struct otx2_cpt_aead_ctx *ctx, + struct crypto_alg *alg) +{ + if (alg->cra_flags & crypto_alg_need_fallback) { + ctx->fbk_cipher = + crypto_alloc_aead(alg->cra_name, 0, + crypto_alg_async | + crypto_alg_need_fallback); + if 
(is_err(ctx->fbk_cipher)) { + pr_err("%s() failed to allocate fallback for %s ", + __func__, alg->cra_name); + return ptr_err(ctx->fbk_cipher); + } + } + return 0; +} + +static int cpt_aead_init(struct crypto_aead *atfm, u8 cipher_type, u8 mac_type) +{ + struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx(atfm); + struct crypto_tfm *tfm = crypto_aead_tfm(atfm); + struct crypto_alg *alg = tfm->__crt_alg; + + ctx->cipher_type = cipher_type; + ctx->mac_type = mac_type; + + /* + * when selected cipher is null we use hmac opcode instead of + * flexicrypto opcode therefore we don't need to use hash algorithms + * for calculating ipad and opad + */ + if (ctx->cipher_type != otx2_cpt_cipher_null) { + switch (ctx->mac_type) { + case otx2_cpt_sha1: + ctx->hashalg = crypto_alloc_shash("sha1", 0, + crypto_alg_async); + if (is_err(ctx->hashalg)) + return ptr_err(ctx->hashalg); + break; + + case otx2_cpt_sha256: + ctx->hashalg = crypto_alloc_shash("sha256", 0, + crypto_alg_async); + if (is_err(ctx->hashalg)) + return ptr_err(ctx->hashalg); + break; + + case otx2_cpt_sha384: + ctx->hashalg = crypto_alloc_shash("sha384", 0, + crypto_alg_async); + if (is_err(ctx->hashalg)) + return ptr_err(ctx->hashalg); + break; + + case otx2_cpt_sha512: + ctx->hashalg = crypto_alloc_shash("sha512", 0, + crypto_alg_async); + if (is_err(ctx->hashalg)) + return ptr_err(ctx->hashalg); + break; + } + } + switch (ctx->cipher_type) { + case otx2_cpt_aes_cbc: + case otx2_cpt_aes_ecb: + ctx->enc_align_len = 16; + break; + case otx2_cpt_des3_cbc: + case otx2_cpt_des3_ecb: + ctx->enc_align_len = 8; + break; + case otx2_cpt_aes_gcm: + case otx2_cpt_cipher_null: + ctx->enc_align_len = 1; + break; + } + crypto_aead_set_reqsize(atfm, sizeof(struct otx2_cpt_req_ctx)); + + return cpt_aead_fallback_init(ctx, alg); +} + +static int otx2_cpt_aead_cbc_aes_sha1_init(struct crypto_aead *tfm) +{ + return cpt_aead_init(tfm, otx2_cpt_aes_cbc, otx2_cpt_sha1); +} + +static int otx2_cpt_aead_cbc_aes_sha256_init(struct 
crypto_aead *tfm) +{ + return cpt_aead_init(tfm, otx2_cpt_aes_cbc, otx2_cpt_sha256); +} + +static int otx2_cpt_aead_cbc_aes_sha384_init(struct crypto_aead *tfm) +{ + return cpt_aead_init(tfm, otx2_cpt_aes_cbc, otx2_cpt_sha384); +} + +static int otx2_cpt_aead_cbc_aes_sha512_init(struct crypto_aead *tfm) +{ + return cpt_aead_init(tfm, otx2_cpt_aes_cbc, otx2_cpt_sha512); +} + +static int otx2_cpt_aead_ecb_null_sha1_init(struct crypto_aead *tfm) +{ + return cpt_aead_init(tfm, otx2_cpt_cipher_null, otx2_cpt_sha1); +} + +static int otx2_cpt_aead_ecb_null_sha256_init(struct crypto_aead *tfm) +{ + return cpt_aead_init(tfm, otx2_cpt_cipher_null, otx2_cpt_sha256); +} + +static int otx2_cpt_aead_ecb_null_sha384_init(struct crypto_aead *tfm) +{ + return cpt_aead_init(tfm, otx2_cpt_cipher_null, otx2_cpt_sha384); +} + +static int otx2_cpt_aead_ecb_null_sha512_init(struct crypto_aead *tfm) +{ + return cpt_aead_init(tfm, otx2_cpt_cipher_null, otx2_cpt_sha512); +} + +static int otx2_cpt_aead_gcm_aes_init(struct crypto_aead *tfm) +{ + return cpt_aead_init(tfm, otx2_cpt_aes_gcm, otx2_cpt_mac_null); +} + +static void otx2_cpt_aead_exit(struct crypto_aead *tfm) +{ + struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx(tfm); + + kfree(ctx->ipad); + kfree(ctx->opad); + if (ctx->hashalg) + crypto_free_shash(ctx->hashalg); + kfree(ctx->sdesc); + + if (ctx->fbk_cipher) { + crypto_free_aead(ctx->fbk_cipher); + ctx->fbk_cipher = null; + } +} + +static int otx2_cpt_aead_gcm_set_authsize(struct crypto_aead *tfm, + unsigned int authsize) +{ + struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx(tfm); + + if (crypto_rfc4106_check_authsize(authsize)) + return -einval; + + tfm->authsize = authsize; + /* set authsize for fallback case */ + if (ctx->fbk_cipher) + ctx->fbk_cipher->authsize = authsize; + + return 0; +} + +static int otx2_cpt_aead_set_authsize(struct crypto_aead *tfm, + unsigned int authsize) +{ + tfm->authsize = authsize; + + return 0; +} + +static int otx2_cpt_aead_null_set_authsize(struct 
crypto_aead *tfm, + unsigned int authsize) +{ + struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx(tfm); + + ctx->is_trunc_hmac = true; + tfm->authsize = authsize; + + return 0; +} + +static struct otx2_cpt_sdesc *alloc_sdesc(struct crypto_shash *alg) +{ + struct otx2_cpt_sdesc *sdesc; + int size; + + size = sizeof(struct shash_desc) + crypto_shash_descsize(alg); + sdesc = kmalloc(size, gfp_kernel); + if (!sdesc) + return null; + + sdesc->shash.tfm = alg; + + return sdesc; +} + +static inline void swap_data32(void *buf, u32 len) +{ + cpu_to_be32_array(buf, buf, len / 4); +} + +static inline void swap_data64(void *buf, u32 len) +{ + u64 *src = buf; + int i = 0; + + for (i = 0 ; i < len / 8; i++, src++) + cpu_to_be64s(src); +} + +static int copy_pad(u8 mac_type, u8 *out_pad, u8 *in_pad) +{ + struct sha512_state *sha512; + struct sha256_state *sha256; + struct sha1_state *sha1; + + switch (mac_type) { + case otx2_cpt_sha1: + sha1 = (struct sha1_state *) in_pad; + swap_data32(sha1->state, sha1_digest_size); + memcpy(out_pad, &sha1->state, sha1_digest_size); + break; + + case otx2_cpt_sha256: + sha256 = (struct sha256_state *) in_pad; + swap_data32(sha256->state, sha256_digest_size); + memcpy(out_pad, &sha256->state, sha256_digest_size); + break; + + case otx2_cpt_sha384: + case otx2_cpt_sha512: + sha512 = (struct sha512_state *) in_pad; + swap_data64(sha512->state, sha512_digest_size); + memcpy(out_pad, &sha512->state, sha512_digest_size); + break; + + default: + return -einval; + } + + return 0; +} + +static int aead_hmac_init(struct crypto_aead *cipher) +{ + struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx(cipher); + int state_size = crypto_shash_statesize(ctx->hashalg); + int ds = crypto_shash_digestsize(ctx->hashalg); + int bs = crypto_shash_blocksize(ctx->hashalg); + int authkeylen = ctx->auth_key_len; + u8 *ipad = null, *opad = null; + int ret = 0, icount = 0; + + ctx->sdesc = alloc_sdesc(ctx->hashalg); + if (!ctx->sdesc) + return -enomem; + + ctx->ipad = kzalloc(bs, 
gfp_kernel); + if (!ctx->ipad) { + ret = -enomem; + goto calc_fail; + } + + ctx->opad = kzalloc(bs, gfp_kernel); + if (!ctx->opad) { + ret = -enomem; + goto calc_fail; + } + + ipad = kzalloc(state_size, gfp_kernel); + if (!ipad) { + ret = -enomem; + goto calc_fail; + } + + opad = kzalloc(state_size, gfp_kernel); + if (!opad) { + ret = -enomem; + goto calc_fail; + } + + if (authkeylen > bs) { + ret = crypto_shash_digest(&ctx->sdesc->shash, ctx->key, + authkeylen, ipad); + if (ret) + goto calc_fail; + + authkeylen = ds; + } else { + memcpy(ipad, ctx->key, authkeylen); + } + + memset(ipad + authkeylen, 0, bs - authkeylen); + memcpy(opad, ipad, bs); + + for (icount = 0; icount < bs; icount++) { + ipad[icount] ^= 0x36; + opad[icount] ^= 0x5c; + } + + /* + * partial hash calculated from the software + * algorithm is retrieved for ipad & opad + */ + + /* ipad calculation */ + crypto_shash_init(&ctx->sdesc->shash); + crypto_shash_update(&ctx->sdesc->shash, ipad, bs); + crypto_shash_export(&ctx->sdesc->shash, ipad); + ret = copy_pad(ctx->mac_type, ctx->ipad, ipad); + if (ret) + goto calc_fail; + + /* opad calculation */ + crypto_shash_init(&ctx->sdesc->shash); + crypto_shash_update(&ctx->sdesc->shash, opad, bs); + crypto_shash_export(&ctx->sdesc->shash, opad); + ret = copy_pad(ctx->mac_type, ctx->opad, opad); + if (ret) + goto calc_fail; + + kfree(ipad); + kfree(opad); + + return 0; + +calc_fail: + kfree(ctx->ipad); + ctx->ipad = null; + kfree(ctx->opad); + ctx->opad = null; + kfree(ipad); + kfree(opad); + kfree(ctx->sdesc); + ctx->sdesc = null; + + return ret; +} + +static int otx2_cpt_aead_cbc_aes_sha_setkey(struct crypto_aead *cipher, + const unsigned char *key, + unsigned int keylen) +{ + struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx(cipher); + struct crypto_authenc_key_param *param; + int enckeylen = 0, authkeylen = 0; + struct rtattr *rta = (void *)key; + int status; + + if (!rta_ok(rta, keylen)) + return -einval; + + if (rta->rta_type != crypto_authenc_keya_param) 
+ return -einval; + + if (rta_payload(rta) < sizeof(*param)) + return -einval; + + param = rta_data(rta); + enckeylen = be32_to_cpu(param->enckeylen); + key += rta_align(rta->rta_len); + keylen -= rta_align(rta->rta_len); + if (keylen < enckeylen) + return -einval; + + if (keylen > otx2_cpt_max_key_size) + return -einval; + + authkeylen = keylen - enckeylen; + memcpy(ctx->key, key, keylen); + + switch (enckeylen) { + case aes_keysize_128: + ctx->key_type = otx2_cpt_aes_128_bit; + break; + case aes_keysize_192: + ctx->key_type = otx2_cpt_aes_192_bit; + break; + case aes_keysize_256: + ctx->key_type = otx2_cpt_aes_256_bit; + break; + default: + /* invalid key length */ + return -einval; + } + + ctx->enc_key_len = enckeylen; + ctx->auth_key_len = authkeylen; + + status = aead_hmac_init(cipher); + if (status) + return status; + + return 0; +} + +static int otx2_cpt_aead_ecb_null_sha_setkey(struct crypto_aead *cipher, + const unsigned char *key, + unsigned int keylen) +{ + struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx(cipher); + struct crypto_authenc_key_param *param; + struct rtattr *rta = (void *)key; + int enckeylen = 0; + + if (!rta_ok(rta, keylen)) + return -einval; + + if (rta->rta_type != crypto_authenc_keya_param) + return -einval; + + if (rta_payload(rta) < sizeof(*param)) + return -einval; + + param = rta_data(rta); + enckeylen = be32_to_cpu(param->enckeylen); + key += rta_align(rta->rta_len); + keylen -= rta_align(rta->rta_len); + if (enckeylen != 0) + return -einval; + + if (keylen > otx2_cpt_max_key_size) + return -einval; + + memcpy(ctx->key, key, keylen); + ctx->enc_key_len = enckeylen; + ctx->auth_key_len = keylen; + + return 0; +} + +static int otx2_cpt_aead_gcm_aes_setkey(struct crypto_aead *cipher, + const unsigned char *key, + unsigned int keylen) +{ + struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx(cipher); + + /* + * for aes gcm we expect to get encryption key (16, 24, 32 bytes) + * and salt (4 bytes) + */ + switch (keylen) { + case 
aes_keysize_128 + aes_gcm_salt_size: + ctx->key_type = otx2_cpt_aes_128_bit; + ctx->enc_key_len = aes_keysize_128; + break; + case aes_keysize_192 + aes_gcm_salt_size: + ctx->key_type = otx2_cpt_aes_192_bit; + ctx->enc_key_len = aes_keysize_192; + break; + case aes_keysize_256 + aes_gcm_salt_size: + ctx->key_type = otx2_cpt_aes_256_bit; + ctx->enc_key_len = aes_keysize_256; + break; + default: + /* invalid key and salt length */ + return -einval; + } + + /* store encryption key and salt */ + memcpy(ctx->key, key, keylen); + + return crypto_aead_setkey(ctx->fbk_cipher, key, keylen); +} + +static inline int create_aead_ctx_hdr(struct aead_request *req, u32 enc, + u32 *argcnt) +{ + struct otx2_cpt_req_ctx *rctx = aead_request_ctx(req); + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx(tfm); + struct otx2_cpt_req_info *req_info = &rctx->cpt_req; + struct otx2_cpt_fc_ctx *fctx = &rctx->fctx; + int mac_len = crypto_aead_authsize(tfm); + int ds; + + rctx->ctrl_word.e.enc_data_offset = req->assoclen; + + switch (ctx->cipher_type) { + case otx2_cpt_aes_cbc: + if (req->assoclen > 248 || !is_aligned(req->assoclen, 8)) + return -einval; + + fctx->enc.enc_ctrl.e.iv_source = otx2_cpt_from_cptr; + /* copy encryption key to context */ + memcpy(fctx->enc.encr_key, ctx->key + ctx->auth_key_len, + ctx->enc_key_len); + /* copy iv to context */ + memcpy(fctx->enc.encr_iv, req->iv, crypto_aead_ivsize(tfm)); + + ds = crypto_shash_digestsize(ctx->hashalg); + if (ctx->mac_type == otx2_cpt_sha384) + ds = sha512_digest_size; + if (ctx->ipad) + memcpy(fctx->hmac.e.ipad, ctx->ipad, ds); + if (ctx->opad) + memcpy(fctx->hmac.e.opad, ctx->opad, ds); + break; + + case otx2_cpt_aes_gcm: + if (crypto_ipsec_check_assoclen(req->assoclen)) + return -einval; + + fctx->enc.enc_ctrl.e.iv_source = otx2_cpt_from_dptr; + /* copy encryption key to context */ + memcpy(fctx->enc.encr_key, ctx->key, ctx->enc_key_len); + /* copy salt to context */ + 
memcpy(fctx->enc.encr_iv, ctx->key + ctx->enc_key_len, + aes_gcm_salt_size); + + rctx->ctrl_word.e.iv_offset = req->assoclen - aes_gcm_iv_offset; + break; + + default: + /* unknown cipher type */ + return -einval; + } + cpu_to_be64s(&rctx->ctrl_word.flags); + + req_info->ctrl.s.dma_mode = otx2_cpt_dma_mode_sg; + req_info->ctrl.s.se_req = 1; + req_info->req.opcode.s.major = otx2_cpt_major_op_fc | + dma_mode_flag(otx2_cpt_dma_mode_sg); + if (enc) { + req_info->req.opcode.s.minor = 2; + req_info->req.param1 = req->cryptlen; + req_info->req.param2 = req->cryptlen + req->assoclen; + } else { + req_info->req.opcode.s.minor = 3; + req_info->req.param1 = req->cryptlen - mac_len; + req_info->req.param2 = req->cryptlen + req->assoclen - mac_len; + } + + fctx->enc.enc_ctrl.e.enc_cipher = ctx->cipher_type; + fctx->enc.enc_ctrl.e.aes_key = ctx->key_type; + fctx->enc.enc_ctrl.e.mac_type = ctx->mac_type; + fctx->enc.enc_ctrl.e.mac_len = mac_len; + cpu_to_be64s(&fctx->enc.enc_ctrl.u); + + /* + * storing packet data information in offset + * control word first 8 bytes + */ + req_info->in[*argcnt].vptr = (u8 *)&rctx->ctrl_word; + req_info->in[*argcnt].size = control_word_len; + req_info->req.dlen += control_word_len; + ++(*argcnt); + + req_info->in[*argcnt].vptr = (u8 *)fctx; + req_info->in[*argcnt].size = sizeof(struct otx2_cpt_fc_ctx); + req_info->req.dlen += sizeof(struct otx2_cpt_fc_ctx); + ++(*argcnt); + + return 0; +} + +static inline void create_hmac_ctx_hdr(struct aead_request *req, u32 *argcnt, + u32 enc) +{ + struct otx2_cpt_req_ctx *rctx = aead_request_ctx(req); + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx(tfm); + struct otx2_cpt_req_info *req_info = &rctx->cpt_req; + + req_info->ctrl.s.dma_mode = otx2_cpt_dma_mode_sg; + req_info->ctrl.s.se_req = 1; + req_info->req.opcode.s.major = otx2_cpt_major_op_hmac | + dma_mode_flag(otx2_cpt_dma_mode_sg); + req_info->is_trunc_hmac = ctx->is_trunc_hmac; + + 
req_info->req.opcode.s.minor = 0; + req_info->req.param1 = ctx->auth_key_len; + req_info->req.param2 = ctx->mac_type << 8; + + /* add authentication key */ + req_info->in[*argcnt].vptr = ctx->key; + req_info->in[*argcnt].size = round_up(ctx->auth_key_len, 8); + req_info->req.dlen += round_up(ctx->auth_key_len, 8); + ++(*argcnt); +} + +static inline int create_aead_input_list(struct aead_request *req, u32 enc) +{ + struct otx2_cpt_req_ctx *rctx = aead_request_ctx(req); + struct otx2_cpt_req_info *req_info = &rctx->cpt_req; + u32 inputlen = req->cryptlen + req->assoclen; + u32 status, argcnt = 0; + + status = create_aead_ctx_hdr(req, enc, &argcnt); + if (status) + return status; + update_input_data(req_info, req->src, inputlen, &argcnt); + req_info->in_cnt = argcnt; + + return 0; +} + +static inline void create_aead_output_list(struct aead_request *req, u32 enc, + u32 mac_len) +{ + struct otx2_cpt_req_ctx *rctx = aead_request_ctx(req); + struct otx2_cpt_req_info *req_info = &rctx->cpt_req; + u32 argcnt = 0, outputlen = 0; + + if (enc) + outputlen = req->cryptlen + req->assoclen + mac_len; + else + outputlen = req->cryptlen + req->assoclen - mac_len; + + update_output_data(req_info, req->dst, 0, outputlen, &argcnt); + req_info->out_cnt = argcnt; +} + +static inline void create_aead_null_input_list(struct aead_request *req, + u32 enc, u32 mac_len) +{ + struct otx2_cpt_req_ctx *rctx = aead_request_ctx(req); + struct otx2_cpt_req_info *req_info = &rctx->cpt_req; + u32 inputlen, argcnt = 0; + + if (enc) + inputlen = req->cryptlen + req->assoclen; + else + inputlen = req->cryptlen + req->assoclen - mac_len; + + create_hmac_ctx_hdr(req, &argcnt, enc); + update_input_data(req_info, req->src, inputlen, &argcnt); + req_info->in_cnt = argcnt; +} + +static inline int create_aead_null_output_list(struct aead_request *req, + u32 enc, u32 mac_len) +{ + struct otx2_cpt_req_ctx *rctx = aead_request_ctx(req); + struct otx2_cpt_req_info *req_info = &rctx->cpt_req; + struct scatterlist 
*dst; + u8 *ptr = null; + int argcnt = 0, status, offset; + u32 inputlen; + + if (enc) + inputlen = req->cryptlen + req->assoclen; + else + inputlen = req->cryptlen + req->assoclen - mac_len; + + /* + * if source and destination are different + * then copy payload to destination + */ + if (req->src != req->dst) { + + ptr = kmalloc(inputlen, (req_info->areq->flags & + crypto_tfm_req_may_sleep) ? + gfp_kernel : gfp_atomic); + if (!ptr) + return -enomem; + + status = sg_copy_to_buffer(req->src, sg_nents(req->src), ptr, + inputlen); + if (status != inputlen) { + status = -einval; + goto error_free; + } + status = sg_copy_from_buffer(req->dst, sg_nents(req->dst), ptr, + inputlen); + if (status != inputlen) { + status = -einval; + goto error_free; + } + kfree(ptr); + } + + if (enc) { + /* + * in an encryption scenario hmac needs + * to be appended after payload + */ + dst = req->dst; + offset = inputlen; + while (offset >= dst->length) { + offset -= dst->length; + dst = sg_next(dst); + if (!dst) + return -enoent; + } + + update_output_data(req_info, dst, offset, mac_len, &argcnt); + } else { + /* + * in a decryption scenario calculated hmac for received + * payload needs to be compare with hmac received + */ + status = sg_copy_buffer(req->src, sg_nents(req->src), + rctx->fctx.hmac.s.hmac_recv, mac_len, + inputlen, true); + if (status != mac_len) + return -einval; + + req_info->out[argcnt].vptr = rctx->fctx.hmac.s.hmac_calc; + req_info->out[argcnt].size = mac_len; + argcnt++; + } + + req_info->out_cnt = argcnt; + return 0; + +error_free: + kfree(ptr); + return status; +} + +static int aead_do_fallback(struct aead_request *req, bool is_enc) +{ + struct otx2_cpt_req_ctx *rctx = aead_request_ctx(req); + struct crypto_aead *aead = crypto_aead_reqtfm(req); + struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx(aead); + int ret; + + if (ctx->fbk_cipher) { + /* store the cipher tfm and then use the fallback tfm */ + aead_request_set_tfm(&rctx->fbk_req, ctx->fbk_cipher); + 
aead_request_set_callback(&rctx->fbk_req, req->base.flags, + req->base.complete, req->base.data); + aead_request_set_crypt(&rctx->fbk_req, req->src, + req->dst, req->cryptlen, req->iv); + ret = is_enc ? crypto_aead_encrypt(&rctx->fbk_req) : + crypto_aead_decrypt(&rctx->fbk_req); + } else { + ret = -einval; + } + + return ret; +} + +static int cpt_aead_enc_dec(struct aead_request *req, u8 reg_type, u8 enc) +{ + struct otx2_cpt_req_ctx *rctx = aead_request_ctx(req); + struct otx2_cpt_req_info *req_info = &rctx->cpt_req; + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct otx2_cpt_aead_ctx *ctx = crypto_aead_ctx(tfm); + struct pci_dev *pdev; + int status, cpu_num; + + /* clear control words */ + rctx->ctrl_word.flags = 0; + rctx->fctx.enc.enc_ctrl.u = 0; + + req_info->callback = otx2_cpt_aead_callback; + req_info->areq = &req->base; + req_info->req_type = reg_type; + req_info->is_enc = enc; + req_info->is_trunc_hmac = false; + + switch (reg_type) { + case otx2_cpt_aead_enc_dec_req: + status = create_aead_input_list(req, enc); + if (status) + return status; + create_aead_output_list(req, enc, crypto_aead_authsize(tfm)); + break; + + case otx2_cpt_aead_enc_dec_null_req: + create_aead_null_input_list(req, enc, + crypto_aead_authsize(tfm)); + status = create_aead_null_output_list(req, enc, + crypto_aead_authsize(tfm)); + if (status) + return status; + break; + + default: + return -einval; + } + if (!is_aligned(req_info->req.param1, ctx->enc_align_len)) + return -einval; + + if (!req_info->req.param2 || + (req_info->req.param1 > otx2_cpt_max_req_size) || + (req_info->req.param2 > otx2_cpt_max_req_size)) + return aead_do_fallback(req, enc); + + status = get_se_device(&pdev, &cpu_num); + if (status) + return status; + + req_info->ctrl.s.grp = otx2_cpt_get_kcrypto_eng_grp_num(pdev); + + /* + * we perform an asynchronous send and once + * the request is completed the driver would + * intimate through registered call back functions + */ + return 
otx2_cpt_do_request(pdev, req_info, cpu_num); +} + +static int otx2_cpt_aead_encrypt(struct aead_request *req) +{ + return cpt_aead_enc_dec(req, otx2_cpt_aead_enc_dec_req, true); +} + +static int otx2_cpt_aead_decrypt(struct aead_request *req) +{ + return cpt_aead_enc_dec(req, otx2_cpt_aead_enc_dec_req, false); +} + +static int otx2_cpt_aead_null_encrypt(struct aead_request *req) +{ + return cpt_aead_enc_dec(req, otx2_cpt_aead_enc_dec_null_req, true); +} + +static int otx2_cpt_aead_null_decrypt(struct aead_request *req) +{ + return cpt_aead_enc_dec(req, otx2_cpt_aead_enc_dec_null_req, false); +} + +static struct skcipher_alg otx2_cpt_skciphers[] = { { + .base.cra_name = "xts(aes)", + .base.cra_driver_name = "cpt_xts_aes", + .base.cra_flags = crypto_alg_async | crypto_alg_need_fallback, + .base.cra_blocksize = aes_block_size, + .base.cra_ctxsize = sizeof(struct otx2_cpt_enc_ctx), + .base.cra_alignmask = 7, + .base.cra_priority = 4001, + .base.cra_module = this_module, + + .init = otx2_cpt_enc_dec_init, + .exit = otx2_cpt_skcipher_exit, + .ivsize = aes_block_size, + .min_keysize = 2 * aes_min_key_size, + .max_keysize = 2 * aes_max_key_size, + .setkey = otx2_cpt_skcipher_xts_setkey, + .encrypt = otx2_cpt_skcipher_encrypt, + .decrypt = otx2_cpt_skcipher_decrypt, +}, { + .base.cra_name = "cbc(aes)", + .base.cra_driver_name = "cpt_cbc_aes", + .base.cra_flags = crypto_alg_async | crypto_alg_need_fallback, + .base.cra_blocksize = aes_block_size, + .base.cra_ctxsize = sizeof(struct otx2_cpt_enc_ctx), + .base.cra_alignmask = 7, + .base.cra_priority = 4001, + .base.cra_module = this_module, + + .init = otx2_cpt_enc_dec_init, + .exit = otx2_cpt_skcipher_exit, + .ivsize = aes_block_size, + .min_keysize = aes_min_key_size, + .max_keysize = aes_max_key_size, + .setkey = otx2_cpt_skcipher_cbc_aes_setkey, + .encrypt = otx2_cpt_skcipher_encrypt, + .decrypt = otx2_cpt_skcipher_decrypt, +}, { + .base.cra_name = "ecb(aes)", + .base.cra_driver_name = "cpt_ecb_aes", + .base.cra_flags = 
crypto_alg_async | crypto_alg_need_fallback, + .base.cra_blocksize = aes_block_size, + .base.cra_ctxsize = sizeof(struct otx2_cpt_enc_ctx), + .base.cra_alignmask = 7, + .base.cra_priority = 4001, + .base.cra_module = this_module, + + .init = otx2_cpt_enc_dec_init, + .exit = otx2_cpt_skcipher_exit, + .ivsize = 0, + .min_keysize = aes_min_key_size, + .max_keysize = aes_max_key_size, + .setkey = otx2_cpt_skcipher_ecb_aes_setkey, + .encrypt = otx2_cpt_skcipher_encrypt, + .decrypt = otx2_cpt_skcipher_decrypt, +}, { + .base.cra_name = "cbc(des3_ede)", + .base.cra_driver_name = "cpt_cbc_des3_ede", + .base.cra_flags = crypto_alg_async | crypto_alg_need_fallback, + .base.cra_blocksize = des3_ede_block_size, + .base.cra_ctxsize = sizeof(struct otx2_cpt_enc_ctx), + .base.cra_alignmask = 7, + .base.cra_priority = 4001, + .base.cra_module = this_module, + + .init = otx2_cpt_enc_dec_init, + .exit = otx2_cpt_skcipher_exit, + .min_keysize = des3_ede_key_size, + .max_keysize = des3_ede_key_size, + .ivsize = des_block_size, + .setkey = otx2_cpt_skcipher_cbc_des3_setkey, + .encrypt = otx2_cpt_skcipher_encrypt, + .decrypt = otx2_cpt_skcipher_decrypt, +}, { + .base.cra_name = "ecb(des3_ede)", + .base.cra_driver_name = "cpt_ecb_des3_ede", + .base.cra_flags = crypto_alg_async | crypto_alg_need_fallback, + .base.cra_blocksize = des3_ede_block_size, + .base.cra_ctxsize = sizeof(struct otx2_cpt_enc_ctx), + .base.cra_alignmask = 7, + .base.cra_priority = 4001, + .base.cra_module = this_module, + + .init = otx2_cpt_enc_dec_init, + .exit = otx2_cpt_skcipher_exit, + .min_keysize = des3_ede_key_size, + .max_keysize = des3_ede_key_size, + .ivsize = 0, + .setkey = otx2_cpt_skcipher_ecb_des3_setkey, + .encrypt = otx2_cpt_skcipher_encrypt, + .decrypt = otx2_cpt_skcipher_decrypt, +} }; + +static struct aead_alg otx2_cpt_aeads[] = { { + .base = { + .cra_name = "authenc(hmac(sha1),cbc(aes))", + .cra_driver_name = "cpt_hmac_sha1_cbc_aes", + .cra_blocksize = aes_block_size, + .cra_flags = 
crypto_alg_async | crypto_alg_need_fallback, + .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx), + .cra_priority = 4001, + .cra_alignmask = 0, + .cra_module = this_module, + }, + .init = otx2_cpt_aead_cbc_aes_sha1_init, + .exit = otx2_cpt_aead_exit, + .setkey = otx2_cpt_aead_cbc_aes_sha_setkey, + .setauthsize = otx2_cpt_aead_set_authsize, + .encrypt = otx2_cpt_aead_encrypt, + .decrypt = otx2_cpt_aead_decrypt, + .ivsize = aes_block_size, + .maxauthsize = sha1_digest_size, +}, { + .base = { + .cra_name = "authenc(hmac(sha256),cbc(aes))", + .cra_driver_name = "cpt_hmac_sha256_cbc_aes", + .cra_blocksize = aes_block_size, + .cra_flags = crypto_alg_async | crypto_alg_need_fallback, + .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx), + .cra_priority = 4001, + .cra_alignmask = 0, + .cra_module = this_module, + }, + .init = otx2_cpt_aead_cbc_aes_sha256_init, + .exit = otx2_cpt_aead_exit, + .setkey = otx2_cpt_aead_cbc_aes_sha_setkey, + .setauthsize = otx2_cpt_aead_set_authsize, + .encrypt = otx2_cpt_aead_encrypt, + .decrypt = otx2_cpt_aead_decrypt, + .ivsize = aes_block_size, + .maxauthsize = sha256_digest_size, +}, { + .base = { + .cra_name = "authenc(hmac(sha384),cbc(aes))", + .cra_driver_name = "cpt_hmac_sha384_cbc_aes", + .cra_blocksize = aes_block_size, + .cra_flags = crypto_alg_async | crypto_alg_need_fallback, + .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx), + .cra_priority = 4001, + .cra_alignmask = 0, + .cra_module = this_module, + }, + .init = otx2_cpt_aead_cbc_aes_sha384_init, + .exit = otx2_cpt_aead_exit, + .setkey = otx2_cpt_aead_cbc_aes_sha_setkey, + .setauthsize = otx2_cpt_aead_set_authsize, + .encrypt = otx2_cpt_aead_encrypt, + .decrypt = otx2_cpt_aead_decrypt, + .ivsize = aes_block_size, + .maxauthsize = sha384_digest_size, +}, { + .base = { + .cra_name = "authenc(hmac(sha512),cbc(aes))", + .cra_driver_name = "cpt_hmac_sha512_cbc_aes", + .cra_blocksize = aes_block_size, + .cra_flags = crypto_alg_async | crypto_alg_need_fallback, + .cra_ctxsize = 
sizeof(struct otx2_cpt_aead_ctx), + .cra_priority = 4001, + .cra_alignmask = 0, + .cra_module = this_module, + }, + .init = otx2_cpt_aead_cbc_aes_sha512_init, + .exit = otx2_cpt_aead_exit, + .setkey = otx2_cpt_aead_cbc_aes_sha_setkey, + .setauthsize = otx2_cpt_aead_set_authsize, + .encrypt = otx2_cpt_aead_encrypt, + .decrypt = otx2_cpt_aead_decrypt, + .ivsize = aes_block_size, + .maxauthsize = sha512_digest_size, +}, { + .base = { + .cra_name = "authenc(hmac(sha1),ecb(cipher_null))", + .cra_driver_name = "cpt_hmac_sha1_ecb_null", + .cra_blocksize = 1, + .cra_flags = crypto_alg_async, + .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx), + .cra_priority = 4001, + .cra_alignmask = 0, + .cra_module = this_module, + }, + .init = otx2_cpt_aead_ecb_null_sha1_init, + .exit = otx2_cpt_aead_exit, + .setkey = otx2_cpt_aead_ecb_null_sha_setkey, + .setauthsize = otx2_cpt_aead_null_set_authsize, + .encrypt = otx2_cpt_aead_null_encrypt, + .decrypt = otx2_cpt_aead_null_decrypt, + .ivsize = 0, + .maxauthsize = sha1_digest_size, +}, { + .base = { + .cra_name = "authenc(hmac(sha256),ecb(cipher_null))", + .cra_driver_name = "cpt_hmac_sha256_ecb_null", + .cra_blocksize = 1, + .cra_flags = crypto_alg_async, + .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx), + .cra_priority = 4001, + .cra_alignmask = 0, + .cra_module = this_module, + }, + .init = otx2_cpt_aead_ecb_null_sha256_init, + .exit = otx2_cpt_aead_exit, + .setkey = otx2_cpt_aead_ecb_null_sha_setkey, + .setauthsize = otx2_cpt_aead_null_set_authsize, + .encrypt = otx2_cpt_aead_null_encrypt, + .decrypt = otx2_cpt_aead_null_decrypt, + .ivsize = 0, + .maxauthsize = sha256_digest_size, +}, { + .base = { + .cra_name = "authenc(hmac(sha384),ecb(cipher_null))", + .cra_driver_name = "cpt_hmac_sha384_ecb_null", + .cra_blocksize = 1, + .cra_flags = crypto_alg_async, + .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx), + .cra_priority = 4001, + .cra_alignmask = 0, + .cra_module = this_module, + }, + .init = otx2_cpt_aead_ecb_null_sha384_init, + 
.exit = otx2_cpt_aead_exit, + .setkey = otx2_cpt_aead_ecb_null_sha_setkey, + .setauthsize = otx2_cpt_aead_null_set_authsize, + .encrypt = otx2_cpt_aead_null_encrypt, + .decrypt = otx2_cpt_aead_null_decrypt, + .ivsize = 0, + .maxauthsize = sha384_digest_size, +}, { + .base = { + .cra_name = "authenc(hmac(sha512),ecb(cipher_null))", + .cra_driver_name = "cpt_hmac_sha512_ecb_null", + .cra_blocksize = 1, + .cra_flags = crypto_alg_async, + .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx), + .cra_priority = 4001, + .cra_alignmask = 0, + .cra_module = this_module, + }, + .init = otx2_cpt_aead_ecb_null_sha512_init, + .exit = otx2_cpt_aead_exit, + .setkey = otx2_cpt_aead_ecb_null_sha_setkey, + .setauthsize = otx2_cpt_aead_null_set_authsize, + .encrypt = otx2_cpt_aead_null_encrypt, + .decrypt = otx2_cpt_aead_null_decrypt, + .ivsize = 0, + .maxauthsize = sha512_digest_size, +}, { + .base = { + .cra_name = "rfc4106(gcm(aes))", + .cra_driver_name = "cpt_rfc4106_gcm_aes", + .cra_blocksize = 1, + .cra_flags = crypto_alg_async | crypto_alg_need_fallback, + .cra_ctxsize = sizeof(struct otx2_cpt_aead_ctx), + .cra_priority = 4001, + .cra_alignmask = 0, + .cra_module = this_module, + }, + .init = otx2_cpt_aead_gcm_aes_init, + .exit = otx2_cpt_aead_exit, + .setkey = otx2_cpt_aead_gcm_aes_setkey, + .setauthsize = otx2_cpt_aead_gcm_set_authsize, + .encrypt = otx2_cpt_aead_encrypt, + .decrypt = otx2_cpt_aead_decrypt, + .ivsize = aes_gcm_iv_size, + .maxauthsize = aes_gcm_icv_size, +} }; + +static inline int cpt_register_algs(void) +{ + int i, err = 0; + + if (!is_enabled(config_dm_crypt)) { + for (i = 0; i < array_size(otx2_cpt_skciphers); i++) + otx2_cpt_skciphers[i].base.cra_flags &= + ~crypto_alg_dead; + + err = crypto_register_skciphers(otx2_cpt_skciphers, + array_size(otx2_cpt_skciphers)); + if (err) + return err; + } + + for (i = 0; i < array_size(otx2_cpt_aeads); i++) + otx2_cpt_aeads[i].base.cra_flags &= ~crypto_alg_dead; + + err = crypto_register_aeads(otx2_cpt_aeads, + 
array_size(otx2_cpt_aeads)); + if (err) { + crypto_unregister_skciphers(otx2_cpt_skciphers, + array_size(otx2_cpt_skciphers)); + return err; + } + + return 0; +} + +static inline void cpt_unregister_algs(void) +{ + crypto_unregister_skciphers(otx2_cpt_skciphers, + array_size(otx2_cpt_skciphers)); + crypto_unregister_aeads(otx2_cpt_aeads, array_size(otx2_cpt_aeads)); +} + +static int compare_func(const void *lptr, const void *rptr) +{ + const struct cpt_device_desc *ldesc = (struct cpt_device_desc *) lptr; + const struct cpt_device_desc *rdesc = (struct cpt_device_desc *) rptr; + + if (ldesc->dev->devfn < rdesc->dev->devfn) + return -1; + if (ldesc->dev->devfn > rdesc->dev->devfn) + return 1; + return 0; +} + +static void swap_func(void *lptr, void *rptr, int size) +{ + struct cpt_device_desc *ldesc = lptr; + struct cpt_device_desc *rdesc = rptr; + struct cpt_device_desc desc; + + desc = *ldesc; + *ldesc = *rdesc; + *rdesc = desc; +} + +int otx2_cpt_crypto_init(struct pci_dev *pdev, struct module *mod, + int num_queues, int num_devices) +{ + int ret = 0; + int count; + + mutex_lock(&mutex); + count = atomic_read(&se_devices.count); + if (count >= otx2_cpt_max_lfs_num) { + dev_err(&pdev->dev, "no space to add a new device "); + ret = -enospc; + goto unlock; + } + se_devices.desc[count].num_queues = num_queues; + se_devices.desc[count++].dev = pdev; + atomic_inc(&se_devices.count); + + if (atomic_read(&se_devices.count) == num_devices && + is_crypto_registered == false) { + if (cpt_register_algs()) { + dev_err(&pdev->dev, + "error in registering crypto algorithms "); + ret = -einval; + goto unlock; + } + try_module_get(mod); + is_crypto_registered = true; + } + sort(se_devices.desc, count, sizeof(struct cpt_device_desc), + compare_func, swap_func); + +unlock: + mutex_unlock(&mutex); + return ret; +} + +void otx2_cpt_crypto_exit(struct pci_dev *pdev, struct module *mod) +{ + struct cpt_device_table *dev_tbl; + bool dev_found = false; + int i, j, count; + + 
mutex_lock(&mutex); + + dev_tbl = &se_devices; + count = atomic_read(&dev_tbl->count); + for (i = 0; i < count; i++) { + if (pdev == dev_tbl->desc[i].dev) { + for (j = i; j < count-1; j++) + dev_tbl->desc[j] = dev_tbl->desc[j+1]; + dev_found = true; + break; + } + } + + if (!dev_found) { + dev_err(&pdev->dev, "%s device not found ", __func__); + goto unlock; + } + if (atomic_dec_and_test(&se_devices.count)) { + cpt_unregister_algs(); + module_put(mod); + is_crypto_registered = false; + } + +unlock: + mutex_unlock(&mutex); +} diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.h b/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.h --- /dev/null +++ b/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.h +/* spdx-license-identifier: gpl-2.0-only + * copyright (c) 2020 marvell. + */ + +#ifndef __otx2_cpt_algs_h +#define __otx2_cpt_algs_h + +#include <crypto/hash.h> +#include <crypto/skcipher.h> +#include <crypto/aead.h> +#include "otx2_cpt_common.h" + +#define otx2_cpt_max_enc_key_size 32 +#define otx2_cpt_max_hash_key_size 64 +#define otx2_cpt_max_key_size (otx2_cpt_max_enc_key_size + \ + otx2_cpt_max_hash_key_size) +enum otx2_cpt_request_type { + otx2_cpt_enc_dec_req = 0x1, + otx2_cpt_aead_enc_dec_req = 0x2, + otx2_cpt_aead_enc_dec_null_req = 0x3, + otx2_cpt_passthrough_req = 0x4 +}; + +enum otx2_cpt_major_opcodes { + otx2_cpt_major_op_misc = 0x01, + otx2_cpt_major_op_fc = 0x33, + otx2_cpt_major_op_hmac = 0x35, +}; + +enum otx2_cpt_cipher_type { + otx2_cpt_cipher_null = 0x0, + otx2_cpt_des3_cbc = 0x1, + otx2_cpt_des3_ecb = 0x2, + otx2_cpt_aes_cbc = 0x3, + otx2_cpt_aes_ecb = 0x4, + otx2_cpt_aes_cfb = 0x5, + otx2_cpt_aes_ctr = 0x6, + otx2_cpt_aes_gcm = 0x7, + otx2_cpt_aes_xts = 0x8 +}; + +enum otx2_cpt_mac_type { + otx2_cpt_mac_null = 0x0, + otx2_cpt_md5 = 0x1, + otx2_cpt_sha1 = 0x2, + otx2_cpt_sha224 = 0x3, + otx2_cpt_sha256 = 0x4, + otx2_cpt_sha384 = 0x5, + otx2_cpt_sha512 = 0x6, + otx2_cpt_gmac = 0x7 +}; + +enum otx2_cpt_aes_key_len { + otx2_cpt_aes_128_bit 
= 0x1, + otx2_cpt_aes_192_bit = 0x2, + otx2_cpt_aes_256_bit = 0x3 +}; + +union otx2_cpt_encr_ctrl { + u64 u; + struct { +#if defined(__big_endian_bitfield) + u64 enc_cipher:4; + u64 reserved_59:1; + u64 aes_key:2; + u64 iv_source:1; + u64 mac_type:4; + u64 reserved_49_51:3; + u64 auth_input_type:1; + u64 mac_len:8; + u64 reserved_32_39:8; + u64 encr_offset:16; + u64 iv_offset:8; + u64 auth_offset:8; +#else + u64 auth_offset:8; + u64 iv_offset:8; + u64 encr_offset:16; + u64 reserved_32_39:8; + u64 mac_len:8; + u64 auth_input_type:1; + u64 reserved_49_51:3; + u64 mac_type:4; + u64 iv_source:1; + u64 aes_key:2; + u64 reserved_59:1; + u64 enc_cipher:4; +#endif + } e; +}; + +struct otx2_cpt_cipher { + const char *name; + u8 value; +}; + +struct otx2_cpt_fc_enc_ctx { + union otx2_cpt_encr_ctrl enc_ctrl; + u8 encr_key[32]; + u8 encr_iv[16]; +}; + +union otx2_cpt_fc_hmac_ctx { + struct { + u8 ipad[64]; + u8 opad[64]; + } e; + struct { + u8 hmac_calc[64]; /* hmac calculated */ + u8 hmac_recv[64]; /* hmac received */ + } s; +}; + +struct otx2_cpt_fc_ctx { + struct otx2_cpt_fc_enc_ctx enc; + union otx2_cpt_fc_hmac_ctx hmac; +}; + +struct otx2_cpt_enc_ctx { + u32 key_len; + u8 enc_key[otx2_cpt_max_key_size]; + u8 cipher_type; + u8 key_type; + u8 enc_align_len; + struct crypto_skcipher *fbk_cipher; +}; + +union otx2_cpt_offset_ctrl { + u64 flags; + struct { +#if defined(__big_endian_bitfield) + u64 reserved:32; + u64 enc_data_offset:16; + u64 iv_offset:8; + u64 auth_offset:8; +#else + u64 auth_offset:8; + u64 iv_offset:8; + u64 enc_data_offset:16; + u64 reserved:32; +#endif + } e; +}; + +struct otx2_cpt_req_ctx { + struct otx2_cpt_req_info cpt_req; + union otx2_cpt_offset_ctrl ctrl_word; + struct otx2_cpt_fc_ctx fctx; + union { + struct skcipher_request sk_fbk_req; + struct aead_request fbk_req; + }; +}; + +struct otx2_cpt_sdesc { + struct shash_desc shash; +}; + +struct otx2_cpt_aead_ctx { + u8 key[otx2_cpt_max_key_size]; + struct crypto_shash *hashalg; + struct otx2_cpt_sdesc 
*sdesc; + struct crypto_aead *fbk_cipher; + u8 *ipad; + u8 *opad; + u32 enc_key_len; + u32 auth_key_len; + u8 cipher_type; + u8 mac_type; + u8 key_type; + u8 is_trunc_hmac; + u8 enc_align_len; +}; +int otx2_cpt_crypto_init(struct pci_dev *pdev, struct module *mod, + int num_queues, int num_devices); +void otx2_cpt_crypto_exit(struct pci_dev *pdev, struct module *mod); + +#endif /* __otx2_cpt_algs_h */ diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptvf_main.c b/drivers/crypto/marvell/octeontx2/otx2_cptvf_main.c --- a/drivers/crypto/marvell/octeontx2/otx2_cptvf_main.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cptvf_main.c +#include "otx2_cptvf_algs.h" + /* unregister crypto algorithms */ + otx2_cpt_crypto_exit(lfs->pdev, this_module); - + /* register crypto algorithms */ + ret = otx2_cpt_crypto_init(lfs->pdev, this_module, lfs_num, 1); + if (ret) { + dev_err(&lfs->pdev->dev, "algorithms registration failed "); + goto disable_irqs; + } +disable_irqs: + otx2_cptlf_free_irqs_affinity(lfs); diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptvf_reqmgr.c b/drivers/crypto/marvell/octeontx2/otx2_cptvf_reqmgr.c --- a/drivers/crypto/marvell/octeontx2/otx2_cptvf_reqmgr.c +++ b/drivers/crypto/marvell/octeontx2/otx2_cptvf_reqmgr.c + +int otx2_cpt_get_kcrypto_eng_grp_num(struct pci_dev *pdev) +{ + struct otx2_cptvf_dev *cptvf = pci_get_drvdata(pdev); + + return cptvf->lfs.kcrypto_eng_grp_num; +}
|
Cryptography hardware acceleration
|
6f03f0e8b6c8a82d8e740ff3a87ed407ad423243
|
Srujana Challa
|
drivers
|
crypto
|
marvell, octeontx2
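The compare_func/swap_func pair in the otx2 diff above keeps the SE device table ordered by PCI devfn after each device is added. A minimal userspace sketch of the same ordering idea — the `dev_desc` type and `sort_dev_table` name are hypothetical stand-ins, and libc `qsort` replaces the kernel's `sort()`:

```c
#include <assert.h>
#include <stdlib.h>

/* hypothetical stand-in for struct cpt_device_desc: only the field
 * the comparator inspects (the PCI devfn) is modeled here */
struct dev_desc {
	int devfn;
};

/* mirrors compare_func in the diff: order descriptors by devfn */
static int compare_devfn(const void *lptr, const void *rptr)
{
	const struct dev_desc *l = lptr;
	const struct dev_desc *r = rptr;

	if (l->devfn < r->devfn)
		return -1;
	if (l->devfn > r->devfn)
		return 1;
	return 0;
}

/* sort the table in place, as otx2_cpt_crypto_init does after
 * registering a new device (qsort instead of the kernel's sort()) */
void sort_dev_table(struct dev_desc *tbl, size_t count)
{
	qsort(tbl, count, sizeof(*tbl), compare_devfn);
}
```

Keeping the table sorted this way gives a stable, devfn-ordered view of the devices regardless of probe order.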
|
crypto: sun4i-ss - enabled stats via debugfs
|
this patch enables access to usage stats for each algorithm.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
enabled stats via debugfs
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['sun4i-ss']
|
['h', 'kconfig', 'c']
| 6
| 98
| 0
|
--- diff --git a/drivers/crypto/allwinner/kconfig b/drivers/crypto/allwinner/kconfig --- a/drivers/crypto/allwinner/kconfig +++ b/drivers/crypto/allwinner/kconfig +config crypto_dev_sun4i_ss_debug + bool "enable sun4i-ss stats" + depends on crypto_dev_sun4i_ss + depends on debug_fs + help + say y to enable sun4i-ss debug stats. + this will create /sys/kernel/debug/sun4i-ss/stats for displaying + the number of requests per algorithm. + diff --git a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c --- a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c +++ b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c + struct skcipher_alg *alg = crypto_skcipher_alg(tfm); + struct sun4i_ss_alg_template *algt; + if (is_enabled(config_crypto_dev_sun4i_ss_debug)) { + algt = container_of(alg, struct sun4i_ss_alg_template, alg.crypto); + algt->stat_opti++; + algt->stat_bytes += areq->cryptlen; + } + + struct skcipher_alg *alg = crypto_skcipher_alg(tfm); + struct sun4i_ss_alg_template *algt; + + if (is_enabled(config_crypto_dev_sun4i_ss_debug)) { + algt = container_of(alg, struct sun4i_ss_alg_template, alg.crypto); + algt->stat_fb++; + } + if (is_enabled(config_crypto_dev_sun4i_ss_debug)) { + algt->stat_req++; + algt->stat_bytes += areq->cryptlen; + } + diff --git a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-core.c b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-core.c --- a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-core.c +++ b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-core.c +#include <linux/debugfs.h> +static int sun4i_ss_dbgfs_read(struct seq_file *seq, void *v) +{ + unsigned int i; + + for (i = 0; i < array_size(ss_algs); i++) { + if (!ss_algs[i].ss) + continue; + switch (ss_algs[i].type) { + case crypto_alg_type_skcipher: + seq_printf(seq, "%s %s reqs=%lu opti=%lu fallback=%lu tsize=%lu ", + ss_algs[i].alg.crypto.base.cra_driver_name, + ss_algs[i].alg.crypto.base.cra_name, + ss_algs[i].stat_req, ss_algs[i].stat_opti, 
ss_algs[i].stat_fb, + ss_algs[i].stat_bytes); + break; + case crypto_alg_type_rng: + seq_printf(seq, "%s %s reqs=%lu tsize=%lu ", + ss_algs[i].alg.rng.base.cra_driver_name, + ss_algs[i].alg.rng.base.cra_name, + ss_algs[i].stat_req, ss_algs[i].stat_bytes); + break; + case crypto_alg_type_ahash: + seq_printf(seq, "%s %s reqs=%lu ", + ss_algs[i].alg.hash.halg.base.cra_driver_name, + ss_algs[i].alg.hash.halg.base.cra_name, + ss_algs[i].stat_req); + break; + } + } + return 0; +} + +static int sun4i_ss_dbgfs_open(struct inode *inode, struct file *file) +{ + return single_open(file, sun4i_ss_dbgfs_read, inode->i_private); +} + +static const struct file_operations sun4i_ss_debugfs_fops = { + .owner = this_module, + .open = sun4i_ss_dbgfs_open, + .read = seq_read, + .llseek = seq_lseek, + .release = single_release, +}; + + + /* ignore error of debugfs */ + ss->dbgfs_dir = debugfs_create_dir("sun4i-ss", null); + ss->dbgfs_stats = debugfs_create_file("stats", 0444, ss->dbgfs_dir, ss, + &sun4i_ss_debugfs_fops); + diff --git a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-hash.c b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-hash.c --- a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-hash.c +++ b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-hash.c + struct ahash_alg *alg = __crypto_ahash_alg(tfm->base.__crt_alg); + struct sun4i_ss_alg_template *algt; + if (is_enabled(config_crypto_dev_sun4i_ss_debug)) { + algt = container_of(alg, struct sun4i_ss_alg_template, alg.hash); + algt->stat_req++; + } diff --git a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-prng.c b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-prng.c --- a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-prng.c +++ b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-prng.c + if (is_enabled(config_crypto_dev_sun4i_ss_debug)) { + algt->stat_req++; + algt->stat_bytes += todo; + } + diff --git a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h --- a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h +++ 
b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h + struct dentry *dbgfs_dir; + struct dentry *dbgfs_stats; + unsigned long stat_req; + unsigned long stat_fb; + unsigned long stat_bytes; + unsigned long stat_opti;
|
Cryptography hardware acceleration
|
b1f578b85a13c4228d7862a203b428e774f87653
|
Corentin Labbe
|
drivers
|
crypto
|
allwinner, sun4i-ss
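The sun4i-ss diff above counts per-algorithm requests, fallbacks and processed bytes only when CONFIG_CRYPTO_DEV_SUN4I_SS_DEBUG is set, and exposes the totals through debugfs. A hedged userspace sketch of that counting pattern — `alg_stats`, `stats_account` and `SS_DEBUG` are hypothetical names standing in for the driver's fields and its IS_ENABLED() guard:

```c
#include <assert.h>
#include <stdbool.h>

/* hypothetical mirror of the counters added to
 * struct sun4i_ss_alg_template in the diff */
struct alg_stats {
	unsigned long stat_req;   /* requests handled by the hardware */
	unsigned long stat_fb;    /* requests sent to the sw fallback */
	unsigned long stat_bytes; /* total payload bytes processed */
};

/* stand-in for IS_ENABLED(CONFIG_CRYPTO_DEV_SUN4I_SS_DEBUG):
 * accounting compiles to nothing when the option is off */
#ifndef SS_DEBUG
#define SS_DEBUG 1
#endif

/* account one request, as the cipher/hash paths do before submitting */
void stats_account(struct alg_stats *st, bool fallback, unsigned long len)
{
	if (!SS_DEBUG)
		return;
	if (fallback)
		st->stat_fb++;
	else
		st->stat_req++;
	st->stat_bytes += len;
}
```

Guarding the increments behind a compile-time constant lets the compiler drop them entirely in non-debug builds, which is why the driver pays no cost when the Kconfig option is disabled.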
|
crypto: picoxcell - remove picoxcell driver
|
picoxcell has had nothing but treewide cleanups for at least the last 8 years and shows no signs of activity. the most recent activity is a yocto vendor kernel based on v3.0 in 2015.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
remove picoxcell driver
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['picoxcell']
|
['h', 'kconfig', 'c', 'makefile']
| 4
| 0
| 1,941
|
--- diff --git a/drivers/crypto/kconfig b/drivers/crypto/kconfig --- a/drivers/crypto/kconfig +++ b/drivers/crypto/kconfig -config crypto_dev_picoxcell - tristate "support for picoxcell ipsec and layer2 crypto engines" - depends on (arch_picoxcell || compile_test) && have_clk - select crypto_aead - select crypto_aes - select crypto_authenc - select crypto_skcipher - select crypto_lib_des - select crypto_cbc - select crypto_ecb - select crypto_seqiv - help - this option enables support for the hardware offload engines in the - picochip picoxcell soc devices. select this for ipsec esp offload - and for 3gpp layer 2 ciphering support. - - saying m here will build a module named picoxcell_crypto. - diff --git a/drivers/crypto/makefile b/drivers/crypto/makefile --- a/drivers/crypto/makefile +++ b/drivers/crypto/makefile -obj-$(config_crypto_dev_picoxcell) += picoxcell_crypto.o diff --git a/drivers/crypto/picoxcell_crypto.c b/drivers/crypto/picoxcell_crypto.c --- a/drivers/crypto/picoxcell_crypto.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-or-later -/* - * copyright (c) 2010-2011 picochip ltd., jamie iles - */ -#include <crypto/internal/aead.h> -#include <crypto/aes.h> -#include <crypto/algapi.h> -#include <crypto/authenc.h> -#include <crypto/internal/des.h> -#include <crypto/md5.h> -#include <crypto/sha1.h> -#include <crypto/sha2.h> -#include <crypto/internal/skcipher.h> -#include <linux/clk.h> -#include <linux/crypto.h> -#include <linux/delay.h> -#include <linux/dma-mapping.h> -#include <linux/dmapool.h> -#include <linux/err.h> -#include <linux/init.h> -#include <linux/interrupt.h> -#include <linux/io.h> -#include <linux/list.h> -#include <linux/module.h> -#include <linux/of.h> -#include <linux/platform_device.h> -#include <linux/pm.h> -#include <linux/rtnetlink.h> -#include <linux/scatterlist.h> -#include <linux/sched.h> -#include <linux/sizes.h> -#include <linux/slab.h> -#include <linux/timer.h> - -#include "picoxcell_crypto_regs.h" - -/* - * the threshold 
for the number of entries in the cmd fifo available before - * the cmd0_cnt interrupt is raised. increasing this value will reduce the - * number of interrupts raised to the cpu. - */ -#define cmd0_irq_threshold 1 - -/* - * the timeout period (in jiffies) for a pdu. when the the number of pdus in - * flight is greater than the stat_irq_threshold or 0 the timer is disabled. - * when there are packets in flight but lower than the threshold, we enable - * the timer and at expiry, attempt to remove any processed packets from the - * queue and if there are still packets left, schedule the timer again. - */ -#define packet_timeout 1 - -/* the priority to register each algorithm with. */ -#define spacc_crypto_alg_priority 10000 - -#define spacc_crypto_kasumi_f8_key_len 16 -#define spacc_crypto_ipsec_cipher_pg_sz 64 -#define spacc_crypto_ipsec_hash_pg_sz 64 -#define spacc_crypto_ipsec_max_ctxs 32 -#define spacc_crypto_ipsec_fifo_sz 32 -#define spacc_crypto_l2_cipher_pg_sz 64 -#define spacc_crypto_l2_hash_pg_sz 64 -#define spacc_crypto_l2_max_ctxs 128 -#define spacc_crypto_l2_fifo_sz 128 - -#define max_ddt_len 16 - -/* ddt format. this must match the hardware ddt format exactly. */ -struct spacc_ddt { - dma_addr_t p; - u32 len; -}; - -/* - * asynchronous crypto request structure. - * - * this structure defines a request that is either queued for processing or - * being processed. 
- */
-struct spacc_req {
-	struct list_head		list;
-	struct spacc_engine		*engine;
-	struct crypto_async_request	*req;
-	int				result;
-	bool				is_encrypt;
-	unsigned			ctx_id;
-	dma_addr_t			src_addr, dst_addr;
-	struct spacc_ddt		*src_ddt, *dst_ddt;
-	void				(*complete)(struct spacc_req *req);
-	struct skcipher_request		fallback_req;	// keep at the end
-};
-
-struct spacc_aead {
-	unsigned long			ctrl_default;
-	unsigned long			type;
-	struct aead_alg			alg;
-	struct spacc_engine		*engine;
-	struct list_head		entry;
-	int				key_offs;
-	int				iv_offs;
-};
-
-struct spacc_engine {
-	void __iomem			*regs;
-	struct list_head		pending;
-	int				next_ctx;
-	spinlock_t			hw_lock;
-	int				in_flight;
-	struct list_head		completed;
-	struct list_head		in_progress;
-	struct tasklet_struct		complete;
-	unsigned long			fifo_sz;
-	void __iomem			*cipher_ctx_base;
-	void __iomem			*hash_key_base;
-	struct spacc_alg		*algs;
-	unsigned			num_algs;
-	struct list_head		registered_algs;
-	struct spacc_aead		*aeads;
-	unsigned			num_aeads;
-	struct list_head		registered_aeads;
-	size_t				cipher_pg_sz;
-	size_t				hash_pg_sz;
-	const char			*name;
-	struct clk			*clk;
-	struct device			*dev;
-	unsigned			max_ctxs;
-	struct timer_list		packet_timeout;
-	unsigned			stat_irq_thresh;
-	struct dma_pool			*req_pool;
-};
-
-/* Algorithm type mask. */
-#define SPACC_CRYPTO_ALG_MASK		0x7
-
-/* SPACC definition of a crypto algorithm. */
-struct spacc_alg {
-	unsigned long			ctrl_default;
-	unsigned long			type;
-	struct skcipher_alg		alg;
-	struct spacc_engine		*engine;
-	struct list_head		entry;
-	int				key_offs;
-	int				iv_offs;
-};
-
-/* Generic context structure for any algorithm type. */
-struct spacc_generic_ctx {
-	struct spacc_engine		*engine;
-	int				flags;
-	int				key_offs;
-	int				iv_offs;
-};
-
-/* Block cipher context. */
-struct spacc_ablk_ctx {
-	struct spacc_generic_ctx	generic;
-	u8				key[AES_MAX_KEY_SIZE];
-	u8				key_len;
-	/*
-	 * The fallback cipher. If the operation can't be done in hardware,
-	 * fallback to a software version.
-	 */
-	struct crypto_skcipher		*sw_cipher;
-};
-
-/* AEAD cipher context. */
-struct spacc_aead_ctx {
-	struct spacc_generic_ctx	generic;
-	u8				cipher_key[AES_MAX_KEY_SIZE];
-	u8				hash_ctx[SPACC_CRYPTO_IPSEC_HASH_PG_SZ];
-	u8				cipher_key_len;
-	u8				hash_key_len;
-	struct crypto_aead		*sw_cipher;
-};
-
-static int spacc_ablk_submit(struct spacc_req *req);
-
-static inline struct spacc_alg *to_spacc_skcipher(struct skcipher_alg *alg)
-{
-	return alg ? container_of(alg, struct spacc_alg, alg) : NULL;
-}
-
-static inline struct spacc_aead *to_spacc_aead(struct aead_alg *alg)
-{
-	return container_of(alg, struct spacc_aead, alg);
-}
-
-static inline int spacc_fifo_cmd_full(struct spacc_engine *engine)
-{
-	u32 fifo_stat = readl(engine->regs + SPA_FIFO_STAT_REG_OFFSET);
-
-	return fifo_stat & SPA_FIFO_CMD_FULL;
-}
-
-/*
- * Given a cipher context, and a context number, get the base address of the
- * context page.
- *
- * Returns the address of the context page where the key/context may
- * be written.
- */
-static inline void __iomem *spacc_ctx_page_addr(struct spacc_generic_ctx *ctx,
-						unsigned indx,
-						bool is_cipher_ctx)
-{
-	return is_cipher_ctx ? ctx->engine->cipher_ctx_base +
-			(indx * ctx->engine->cipher_pg_sz) :
-		ctx->engine->hash_key_base + (indx * ctx->engine->hash_pg_sz);
-}
-
-/* The context pages can only be written with 32-bit accesses. */
-static inline void memcpy_toio32(u32 __iomem *dst, const void *src,
-				 unsigned count)
-{
-	const u32 *src32 = (const u32 *) src;
-
-	while (count--)
-		writel(*src32++, dst++);
-}
-
-static void spacc_cipher_write_ctx(struct spacc_generic_ctx *ctx,
-				   void __iomem *page_addr, const u8 *key,
-				   size_t key_len, const u8 *iv, size_t iv_len)
-{
-	void __iomem *key_ptr = page_addr + ctx->key_offs;
-	void __iomem *iv_ptr = page_addr + ctx->iv_offs;
-
-	memcpy_toio32(key_ptr, key, key_len / 4);
-	memcpy_toio32(iv_ptr, iv, iv_len / 4);
-}
-
-/*
- * Load a context into the engines context memory.
- * - * returns the index of the context page where the context was loaded. - */ -static unsigned spacc_load_ctx(struct spacc_generic_ctx *ctx, - const u8 *ciph_key, size_t ciph_len, - const u8 *iv, size_t ivlen, const u8 *hash_key, - size_t hash_len) -{ - unsigned indx = ctx->engine->next_ctx++; - void __iomem *ciph_page_addr, *hash_page_addr; - - ciph_page_addr = spacc_ctx_page_addr(ctx, indx, 1); - hash_page_addr = spacc_ctx_page_addr(ctx, indx, 0); - - ctx->engine->next_ctx &= ctx->engine->fifo_sz - 1; - spacc_cipher_write_ctx(ctx, ciph_page_addr, ciph_key, ciph_len, iv, - ivlen); - writel(ciph_len | (indx << spa_key_sz_ctx_index_offset) | - (1 << spa_key_sz_cipher_offset), - ctx->engine->regs + spa_key_sz_reg_offset); - - if (hash_key) { - memcpy_toio32(hash_page_addr, hash_key, hash_len / 4); - writel(hash_len | (indx << spa_key_sz_ctx_index_offset), - ctx->engine->regs + spa_key_sz_reg_offset); - } - - return indx; -} - -static inline void ddt_set(struct spacc_ddt *ddt, dma_addr_t phys, size_t len) -{ - ddt->p = phys; - ddt->len = len; -} - -/* - * take a crypto request and scatterlists for the data and turn them into ddts - * for passing to the crypto engines. this also dma maps the data so that the - * crypto engines can dma to/from them. - */ -static struct spacc_ddt *spacc_sg_to_ddt(struct spacc_engine *engine, - struct scatterlist *payload, - unsigned nbytes, - enum dma_data_direction dir, - dma_addr_t *ddt_phys) -{ - unsigned mapped_ents; - struct scatterlist *cur; - struct spacc_ddt *ddt; - int i; - int nents; - - nents = sg_nents_for_len(payload, nbytes); - if (nents < 0) { - dev_err(engine->dev, "invalid numbers of sg. 
"); - return null; - } - mapped_ents = dma_map_sg(engine->dev, payload, nents, dir); - - if (mapped_ents + 1 > max_ddt_len) - goto out; - - ddt = dma_pool_alloc(engine->req_pool, gfp_atomic, ddt_phys); - if (!ddt) - goto out; - - for_each_sg(payload, cur, mapped_ents, i) - ddt_set(&ddt[i], sg_dma_address(cur), sg_dma_len(cur)); - ddt_set(&ddt[mapped_ents], 0, 0); - - return ddt; - -out: - dma_unmap_sg(engine->dev, payload, nents, dir); - return null; -} - -static int spacc_aead_make_ddts(struct aead_request *areq) -{ - struct crypto_aead *aead = crypto_aead_reqtfm(areq); - struct spacc_req *req = aead_request_ctx(areq); - struct spacc_engine *engine = req->engine; - struct spacc_ddt *src_ddt, *dst_ddt; - unsigned total; - int src_nents, dst_nents; - struct scatterlist *cur; - int i, dst_ents, src_ents; - - total = areq->assoclen + areq->cryptlen; - if (req->is_encrypt) - total += crypto_aead_authsize(aead); - - src_nents = sg_nents_for_len(areq->src, total); - if (src_nents < 0) { - dev_err(engine->dev, "invalid numbers of src sg. "); - return src_nents; - } - if (src_nents + 1 > max_ddt_len) - return -e2big; - - dst_nents = 0; - if (areq->src != areq->dst) { - dst_nents = sg_nents_for_len(areq->dst, total); - if (dst_nents < 0) { - dev_err(engine->dev, "invalid numbers of dst sg. 
"); - return dst_nents; - } - if (src_nents + 1 > max_ddt_len) - return -e2big; - } - - src_ddt = dma_pool_alloc(engine->req_pool, gfp_atomic, &req->src_addr); - if (!src_ddt) - goto err; - - dst_ddt = dma_pool_alloc(engine->req_pool, gfp_atomic, &req->dst_addr); - if (!dst_ddt) - goto err_free_src; - - req->src_ddt = src_ddt; - req->dst_ddt = dst_ddt; - - if (dst_nents) { - src_ents = dma_map_sg(engine->dev, areq->src, src_nents, - dma_to_device); - if (!src_ents) - goto err_free_dst; - - dst_ents = dma_map_sg(engine->dev, areq->dst, dst_nents, - dma_from_device); - - if (!dst_ents) { - dma_unmap_sg(engine->dev, areq->src, src_nents, - dma_to_device); - goto err_free_dst; - } - } else { - src_ents = dma_map_sg(engine->dev, areq->src, src_nents, - dma_bidirectional); - if (!src_ents) - goto err_free_dst; - dst_ents = src_ents; - } - - /* - * now map in the payload for the source and destination and terminate - * with the null pointers. - */ - for_each_sg(areq->src, cur, src_ents, i) - ddt_set(src_ddt++, sg_dma_address(cur), sg_dma_len(cur)); - - /* for decryption we need to skip the associated data. */ - total = req->is_encrypt ? 0 : areq->assoclen; - for_each_sg(areq->dst, cur, dst_ents, i) { - unsigned len = sg_dma_len(cur); - - if (len <= total) { - total -= len; - continue; - } - - ddt_set(dst_ddt++, sg_dma_address(cur) + total, len - total); - } - - ddt_set(src_ddt, 0, 0); - ddt_set(dst_ddt, 0, 0); - - return 0; - -err_free_dst: - dma_pool_free(engine->req_pool, dst_ddt, req->dst_addr); -err_free_src: - dma_pool_free(engine->req_pool, src_ddt, req->src_addr); -err: - return -enomem; -} - -static void spacc_aead_free_ddts(struct spacc_req *req) -{ - struct aead_request *areq = container_of(req->req, struct aead_request, - base); - struct crypto_aead *aead = crypto_aead_reqtfm(areq); - unsigned total = areq->assoclen + areq->cryptlen + - (req->is_encrypt ? 
crypto_aead_authsize(aead) : 0); - struct spacc_aead_ctx *aead_ctx = crypto_aead_ctx(aead); - struct spacc_engine *engine = aead_ctx->generic.engine; - int nents = sg_nents_for_len(areq->src, total); - - /* sg_nents_for_len should not fail since it works when mapping sg */ - if (unlikely(nents < 0)) { - dev_err(engine->dev, "invalid numbers of src sg. "); - return; - } - - if (areq->src != areq->dst) { - dma_unmap_sg(engine->dev, areq->src, nents, dma_to_device); - nents = sg_nents_for_len(areq->dst, total); - if (unlikely(nents < 0)) { - dev_err(engine->dev, "invalid numbers of dst sg. "); - return; - } - dma_unmap_sg(engine->dev, areq->dst, nents, dma_from_device); - } else - dma_unmap_sg(engine->dev, areq->src, nents, dma_bidirectional); - - dma_pool_free(engine->req_pool, req->src_ddt, req->src_addr); - dma_pool_free(engine->req_pool, req->dst_ddt, req->dst_addr); -} - -static void spacc_free_ddt(struct spacc_req *req, struct spacc_ddt *ddt, - dma_addr_t ddt_addr, struct scatterlist *payload, - unsigned nbytes, enum dma_data_direction dir) -{ - int nents = sg_nents_for_len(payload, nbytes); - - if (nents < 0) { - dev_err(req->engine->dev, "invalid numbers of sg. 
"); - return; - } - - dma_unmap_sg(req->engine->dev, payload, nents, dir); - dma_pool_free(req->engine->req_pool, ddt, ddt_addr); -} - -static int spacc_aead_setkey(struct crypto_aead *tfm, const u8 *key, - unsigned int keylen) -{ - struct spacc_aead_ctx *ctx = crypto_aead_ctx(tfm); - struct crypto_authenc_keys keys; - int err; - - crypto_aead_clear_flags(ctx->sw_cipher, crypto_tfm_req_mask); - crypto_aead_set_flags(ctx->sw_cipher, crypto_aead_get_flags(tfm) & - crypto_tfm_req_mask); - err = crypto_aead_setkey(ctx->sw_cipher, key, keylen); - if (err) - return err; - - if (crypto_authenc_extractkeys(&keys, key, keylen) != 0) - goto badkey; - - if (keys.enckeylen > aes_max_key_size) - goto badkey; - - if (keys.authkeylen > sizeof(ctx->hash_ctx)) - goto badkey; - - memcpy(ctx->cipher_key, keys.enckey, keys.enckeylen); - ctx->cipher_key_len = keys.enckeylen; - - memcpy(ctx->hash_ctx, keys.authkey, keys.authkeylen); - ctx->hash_key_len = keys.authkeylen; - - memzero_explicit(&keys, sizeof(keys)); - return 0; - -badkey: - memzero_explicit(&keys, sizeof(keys)); - return -einval; -} - -static int spacc_aead_setauthsize(struct crypto_aead *tfm, - unsigned int authsize) -{ - struct spacc_aead_ctx *ctx = crypto_tfm_ctx(crypto_aead_tfm(tfm)); - - return crypto_aead_setauthsize(ctx->sw_cipher, authsize); -} - -/* - * check if an aead request requires a fallback operation. some requests can't - * be completed in hardware because the hardware may not support certain key - * sizes. in these cases we need to complete the request in software. - */ -static int spacc_aead_need_fallback(struct aead_request *aead_req) -{ - struct crypto_aead *aead = crypto_aead_reqtfm(aead_req); - struct aead_alg *alg = crypto_aead_alg(aead); - struct spacc_aead *spacc_alg = to_spacc_aead(alg); - struct spacc_aead_ctx *ctx = crypto_aead_ctx(aead); - - /* - * if we have a non-supported key-length, then we need to do a - * software fallback. 
- */ - if ((spacc_alg->ctrl_default & spacc_crypto_alg_mask) == - spa_ctrl_ciph_alg_aes && - ctx->cipher_key_len != aes_keysize_128 && - ctx->cipher_key_len != aes_keysize_256) - return 1; - - return 0; -} - -static int spacc_aead_do_fallback(struct aead_request *req, unsigned alg_type, - bool is_encrypt) -{ - struct crypto_tfm *old_tfm = crypto_aead_tfm(crypto_aead_reqtfm(req)); - struct spacc_aead_ctx *ctx = crypto_tfm_ctx(old_tfm); - struct aead_request *subreq = aead_request_ctx(req); - - aead_request_set_tfm(subreq, ctx->sw_cipher); - aead_request_set_callback(subreq, req->base.flags, - req->base.complete, req->base.data); - aead_request_set_crypt(subreq, req->src, req->dst, req->cryptlen, - req->iv); - aead_request_set_ad(subreq, req->assoclen); - - return is_encrypt ? crypto_aead_encrypt(subreq) : - crypto_aead_decrypt(subreq); -} - -static void spacc_aead_complete(struct spacc_req *req) -{ - spacc_aead_free_ddts(req); - req->req->complete(req->req, req->result); -} - -static int spacc_aead_submit(struct spacc_req *req) -{ - struct aead_request *aead_req = - container_of(req->req, struct aead_request, base); - struct crypto_aead *aead = crypto_aead_reqtfm(aead_req); - unsigned int authsize = crypto_aead_authsize(aead); - struct spacc_aead_ctx *ctx = crypto_aead_ctx(aead); - struct aead_alg *alg = crypto_aead_alg(aead); - struct spacc_aead *spacc_alg = to_spacc_aead(alg); - struct spacc_engine *engine = ctx->generic.engine; - u32 ctrl, proc_len, assoc_len; - - req->result = -einprogress; - req->ctx_id = spacc_load_ctx(&ctx->generic, ctx->cipher_key, - ctx->cipher_key_len, aead_req->iv, crypto_aead_ivsize(aead), - ctx->hash_ctx, ctx->hash_key_len); - - /* set the source and destination ddt pointers. 
*/ - writel(req->src_addr, engine->regs + spa_src_ptr_reg_offset); - writel(req->dst_addr, engine->regs + spa_dst_ptr_reg_offset); - writel(0, engine->regs + spa_offset_reg_offset); - - assoc_len = aead_req->assoclen; - proc_len = aead_req->cryptlen + assoc_len; - - /* - * if we are decrypting, we need to take the length of the icv out of - * the processing length. - */ - if (!req->is_encrypt) - proc_len -= authsize; - - writel(proc_len, engine->regs + spa_proc_len_reg_offset); - writel(assoc_len, engine->regs + spa_aad_len_reg_offset); - writel(authsize, engine->regs + spa_icv_len_reg_offset); - writel(0, engine->regs + spa_icv_offset_reg_offset); - writel(0, engine->regs + spa_aux_info_reg_offset); - - ctrl = spacc_alg->ctrl_default | (req->ctx_id << spa_ctrl_ctx_idx) | - (1 << spa_ctrl_icv_append); - if (req->is_encrypt) - ctrl |= (1 << spa_ctrl_encrypt_idx) | (1 << spa_ctrl_aad_copy); - else - ctrl |= (1 << spa_ctrl_key_exp); - - mod_timer(&engine->packet_timeout, jiffies + packet_timeout); - - writel(ctrl, engine->regs + spa_ctrl_reg_offset); - - return -einprogress; -} - -static int spacc_req_submit(struct spacc_req *req); - -static void spacc_push(struct spacc_engine *engine) -{ - struct spacc_req *req; - - while (!list_empty(&engine->pending) && - engine->in_flight + 1 <= engine->fifo_sz) { - - ++engine->in_flight; - req = list_first_entry(&engine->pending, struct spacc_req, - list); - list_move_tail(&req->list, &engine->in_progress); - - req->result = spacc_req_submit(req); - } -} - -/* - * setup an aead request for processing. this will configure the engine, load - * the context and then start the packet processing. 
- */ -static int spacc_aead_setup(struct aead_request *req, - unsigned alg_type, bool is_encrypt) -{ - struct crypto_aead *aead = crypto_aead_reqtfm(req); - struct aead_alg *alg = crypto_aead_alg(aead); - struct spacc_engine *engine = to_spacc_aead(alg)->engine; - struct spacc_req *dev_req = aead_request_ctx(req); - int err; - unsigned long flags; - - dev_req->req = &req->base; - dev_req->is_encrypt = is_encrypt; - dev_req->result = -ebusy; - dev_req->engine = engine; - dev_req->complete = spacc_aead_complete; - - if (unlikely(spacc_aead_need_fallback(req) || - ((err = spacc_aead_make_ddts(req)) == -e2big))) - return spacc_aead_do_fallback(req, alg_type, is_encrypt); - - if (err) - goto out; - - err = -einprogress; - spin_lock_irqsave(&engine->hw_lock, flags); - if (unlikely(spacc_fifo_cmd_full(engine)) || - engine->in_flight + 1 > engine->fifo_sz) { - if (!(req->base.flags & crypto_tfm_req_may_backlog)) { - err = -ebusy; - spin_unlock_irqrestore(&engine->hw_lock, flags); - goto out_free_ddts; - } - list_add_tail(&dev_req->list, &engine->pending); - } else { - list_add_tail(&dev_req->list, &engine->pending); - spacc_push(engine); - } - spin_unlock_irqrestore(&engine->hw_lock, flags); - - goto out; - -out_free_ddts: - spacc_aead_free_ddts(dev_req); -out: - return err; -} - -static int spacc_aead_encrypt(struct aead_request *req) -{ - struct crypto_aead *aead = crypto_aead_reqtfm(req); - struct spacc_aead *alg = to_spacc_aead(crypto_aead_alg(aead)); - - return spacc_aead_setup(req, alg->type, 1); -} - -static int spacc_aead_decrypt(struct aead_request *req) -{ - struct crypto_aead *aead = crypto_aead_reqtfm(req); - struct spacc_aead *alg = to_spacc_aead(crypto_aead_alg(aead)); - - return spacc_aead_setup(req, alg->type, 0); -} - -/* - * initialise a new aead context. this is responsible for allocating the - * fallback cipher and initialising the context. 
- */ -static int spacc_aead_cra_init(struct crypto_aead *tfm) -{ - struct spacc_aead_ctx *ctx = crypto_aead_ctx(tfm); - struct aead_alg *alg = crypto_aead_alg(tfm); - struct spacc_aead *spacc_alg = to_spacc_aead(alg); - struct spacc_engine *engine = spacc_alg->engine; - - ctx->generic.flags = spacc_alg->type; - ctx->generic.engine = engine; - ctx->sw_cipher = crypto_alloc_aead(alg->base.cra_name, 0, - crypto_alg_need_fallback); - if (is_err(ctx->sw_cipher)) - return ptr_err(ctx->sw_cipher); - ctx->generic.key_offs = spacc_alg->key_offs; - ctx->generic.iv_offs = spacc_alg->iv_offs; - - crypto_aead_set_reqsize( - tfm, - max(sizeof(struct spacc_req), - sizeof(struct aead_request) + - crypto_aead_reqsize(ctx->sw_cipher))); - - return 0; -} - -/* - * destructor for an aead context. this is called when the transform is freed - * and must free the fallback cipher. - */ -static void spacc_aead_cra_exit(struct crypto_aead *tfm) -{ - struct spacc_aead_ctx *ctx = crypto_aead_ctx(tfm); - - crypto_free_aead(ctx->sw_cipher); -} - -/* - * set the des key for a block cipher transform. this also performs weak key - * checking if the transform has requested it. - */ -static int spacc_des_setkey(struct crypto_skcipher *cipher, const u8 *key, - unsigned int len) -{ - struct spacc_ablk_ctx *ctx = crypto_skcipher_ctx(cipher); - int err; - - err = verify_skcipher_des_key(cipher, key); - if (err) - return err; - - memcpy(ctx->key, key, len); - ctx->key_len = len; - - return 0; -} - -/* - * set the 3des key for a block cipher transform. this also performs weak key - * checking if the transform has requested it. - */ -static int spacc_des3_setkey(struct crypto_skcipher *cipher, const u8 *key, - unsigned int len) -{ - struct spacc_ablk_ctx *ctx = crypto_skcipher_ctx(cipher); - int err; - - err = verify_skcipher_des3_key(cipher, key); - if (err) - return err; - - memcpy(ctx->key, key, len); - ctx->key_len = len; - - return 0; -} - -/* - * set the key for an aes block cipher. 
some key lengths are not supported in - * hardware so this must also check whether a fallback is needed. - */ -static int spacc_aes_setkey(struct crypto_skcipher *cipher, const u8 *key, - unsigned int len) -{ - struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher); - struct spacc_ablk_ctx *ctx = crypto_tfm_ctx(tfm); - int err = 0; - - if (len > aes_max_key_size) - return -einval; - - /* - * ipsec engine only supports 128 and 256 bit aes keys. if we get a - * request for any other size (192 bits) then we need to do a software - * fallback. - */ - if (len != aes_keysize_128 && len != aes_keysize_256) { - if (!ctx->sw_cipher) - return -einval; - - /* - * set the fallback transform to use the same request flags as - * the hardware transform. - */ - crypto_skcipher_clear_flags(ctx->sw_cipher, - crypto_tfm_req_mask); - crypto_skcipher_set_flags(ctx->sw_cipher, - cipher->base.crt_flags & - crypto_tfm_req_mask); - - err = crypto_skcipher_setkey(ctx->sw_cipher, key, len); - if (err) - goto sw_setkey_failed; - } - - memcpy(ctx->key, key, len); - ctx->key_len = len; - -sw_setkey_failed: - return err; -} - -static int spacc_kasumi_f8_setkey(struct crypto_skcipher *cipher, - const u8 *key, unsigned int len) -{ - struct crypto_tfm *tfm = crypto_skcipher_tfm(cipher); - struct spacc_ablk_ctx *ctx = crypto_tfm_ctx(tfm); - int err = 0; - - if (len > aes_max_key_size) { - err = -einval; - goto out; - } - - memcpy(ctx->key, key, len); - ctx->key_len = len; - -out: - return err; -} - -static int spacc_ablk_need_fallback(struct spacc_req *req) -{ - struct skcipher_request *ablk_req = skcipher_request_cast(req->req); - struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(ablk_req); - struct spacc_alg *spacc_alg = to_spacc_skcipher(crypto_skcipher_alg(tfm)); - struct spacc_ablk_ctx *ctx; - - ctx = crypto_skcipher_ctx(tfm); - - return (spacc_alg->ctrl_default & spacc_crypto_alg_mask) == - spa_ctrl_ciph_alg_aes && - ctx->key_len != aes_keysize_128 && - ctx->key_len != aes_keysize_256; -} - 
-static void spacc_ablk_complete(struct spacc_req *req) -{ - struct skcipher_request *ablk_req = skcipher_request_cast(req->req); - - if (ablk_req->src != ablk_req->dst) { - spacc_free_ddt(req, req->src_ddt, req->src_addr, ablk_req->src, - ablk_req->cryptlen, dma_to_device); - spacc_free_ddt(req, req->dst_ddt, req->dst_addr, ablk_req->dst, - ablk_req->cryptlen, dma_from_device); - } else - spacc_free_ddt(req, req->dst_ddt, req->dst_addr, ablk_req->dst, - ablk_req->cryptlen, dma_bidirectional); - - req->req->complete(req->req, req->result); -} - -static int spacc_ablk_submit(struct spacc_req *req) -{ - struct skcipher_request *ablk_req = skcipher_request_cast(req->req); - struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(ablk_req); - struct skcipher_alg *alg = crypto_skcipher_alg(tfm); - struct spacc_alg *spacc_alg = to_spacc_skcipher(alg); - struct spacc_ablk_ctx *ctx = crypto_skcipher_ctx(tfm); - struct spacc_engine *engine = ctx->generic.engine; - u32 ctrl; - - req->ctx_id = spacc_load_ctx(&ctx->generic, ctx->key, - ctx->key_len, ablk_req->iv, alg->ivsize, - null, 0); - - writel(req->src_addr, engine->regs + spa_src_ptr_reg_offset); - writel(req->dst_addr, engine->regs + spa_dst_ptr_reg_offset); - writel(0, engine->regs + spa_offset_reg_offset); - - writel(ablk_req->cryptlen, engine->regs + spa_proc_len_reg_offset); - writel(0, engine->regs + spa_icv_offset_reg_offset); - writel(0, engine->regs + spa_aux_info_reg_offset); - writel(0, engine->regs + spa_aad_len_reg_offset); - - ctrl = spacc_alg->ctrl_default | (req->ctx_id << spa_ctrl_ctx_idx) | - (req->is_encrypt ? 
(1 << spa_ctrl_encrypt_idx) : - (1 << spa_ctrl_key_exp)); - - mod_timer(&engine->packet_timeout, jiffies + packet_timeout); - - writel(ctrl, engine->regs + spa_ctrl_reg_offset); - - return -einprogress; -} - -static int spacc_ablk_do_fallback(struct skcipher_request *req, - unsigned alg_type, bool is_encrypt) -{ - struct crypto_tfm *old_tfm = - crypto_skcipher_tfm(crypto_skcipher_reqtfm(req)); - struct spacc_ablk_ctx *ctx = crypto_tfm_ctx(old_tfm); - struct spacc_req *dev_req = skcipher_request_ctx(req); - int err; - - /* - * change the request to use the software fallback transform, and once - * the ciphering has completed, put the old transform back into the - * request. - */ - skcipher_request_set_tfm(&dev_req->fallback_req, ctx->sw_cipher); - skcipher_request_set_callback(&dev_req->fallback_req, req->base.flags, - req->base.complete, req->base.data); - skcipher_request_set_crypt(&dev_req->fallback_req, req->src, req->dst, - req->cryptlen, req->iv); - err = is_encrypt ? crypto_skcipher_encrypt(&dev_req->fallback_req) : - crypto_skcipher_decrypt(&dev_req->fallback_req); - - return err; -} - -static int spacc_ablk_setup(struct skcipher_request *req, unsigned alg_type, - bool is_encrypt) -{ - struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); - struct skcipher_alg *alg = crypto_skcipher_alg(tfm); - struct spacc_engine *engine = to_spacc_skcipher(alg)->engine; - struct spacc_req *dev_req = skcipher_request_ctx(req); - unsigned long flags; - int err = -enomem; - - dev_req->req = &req->base; - dev_req->is_encrypt = is_encrypt; - dev_req->engine = engine; - dev_req->complete = spacc_ablk_complete; - dev_req->result = -einprogress; - - if (unlikely(spacc_ablk_need_fallback(dev_req))) - return spacc_ablk_do_fallback(req, alg_type, is_encrypt); - - /* - * create the ddt's for the engine. if we share the same source and - * destination then we can optimize by reusing the ddt's. 
- */ - if (req->src != req->dst) { - dev_req->src_ddt = spacc_sg_to_ddt(engine, req->src, - req->cryptlen, dma_to_device, &dev_req->src_addr); - if (!dev_req->src_ddt) - goto out; - - dev_req->dst_ddt = spacc_sg_to_ddt(engine, req->dst, - req->cryptlen, dma_from_device, &dev_req->dst_addr); - if (!dev_req->dst_ddt) - goto out_free_src; - } else { - dev_req->dst_ddt = spacc_sg_to_ddt(engine, req->dst, - req->cryptlen, dma_bidirectional, &dev_req->dst_addr); - if (!dev_req->dst_ddt) - goto out; - - dev_req->src_ddt = null; - dev_req->src_addr = dev_req->dst_addr; - } - - err = -einprogress; - spin_lock_irqsave(&engine->hw_lock, flags); - /* - * check if the engine will accept the operation now. if it won't then - * we either stick it on the end of a pending list if we can backlog, - * or bailout with an error if not. - */ - if (unlikely(spacc_fifo_cmd_full(engine)) || - engine->in_flight + 1 > engine->fifo_sz) { - if (!(req->base.flags & crypto_tfm_req_may_backlog)) { - err = -ebusy; - spin_unlock_irqrestore(&engine->hw_lock, flags); - goto out_free_ddts; - } - list_add_tail(&dev_req->list, &engine->pending); - } else { - list_add_tail(&dev_req->list, &engine->pending); - spacc_push(engine); - } - spin_unlock_irqrestore(&engine->hw_lock, flags); - - goto out; - -out_free_ddts: - spacc_free_ddt(dev_req, dev_req->dst_ddt, dev_req->dst_addr, req->dst, - req->cryptlen, req->src == req->dst ? 
- dma_bidirectional : dma_from_device); -out_free_src: - if (req->src != req->dst) - spacc_free_ddt(dev_req, dev_req->src_ddt, dev_req->src_addr, - req->src, req->cryptlen, dma_to_device); -out: - return err; -} - -static int spacc_ablk_init_tfm(struct crypto_skcipher *tfm) -{ - struct spacc_ablk_ctx *ctx = crypto_skcipher_ctx(tfm); - struct skcipher_alg *alg = crypto_skcipher_alg(tfm); - struct spacc_alg *spacc_alg = to_spacc_skcipher(alg); - struct spacc_engine *engine = spacc_alg->engine; - - ctx->generic.flags = spacc_alg->type; - ctx->generic.engine = engine; - if (alg->base.cra_flags & crypto_alg_need_fallback) { - ctx->sw_cipher = crypto_alloc_skcipher(alg->base.cra_name, 0, - crypto_alg_need_fallback); - if (is_err(ctx->sw_cipher)) { - dev_warn(engine->dev, "failed to allocate fallback for %s ", - alg->base.cra_name); - return ptr_err(ctx->sw_cipher); - } - crypto_skcipher_set_reqsize(tfm, sizeof(struct spacc_req) + - crypto_skcipher_reqsize(ctx->sw_cipher)); - } else { - /* take the size without the fallback skcipher_request at the end */ - crypto_skcipher_set_reqsize(tfm, offsetof(struct spacc_req, - fallback_req)); - } - - ctx->generic.key_offs = spacc_alg->key_offs; - ctx->generic.iv_offs = spacc_alg->iv_offs; - - return 0; -} - -static void spacc_ablk_exit_tfm(struct crypto_skcipher *tfm) -{ - struct spacc_ablk_ctx *ctx = crypto_skcipher_ctx(tfm); - - crypto_free_skcipher(ctx->sw_cipher); -} - -static int spacc_ablk_encrypt(struct skcipher_request *req) -{ - struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(req); - struct skcipher_alg *alg = crypto_skcipher_alg(cipher); - struct spacc_alg *spacc_alg = to_spacc_skcipher(alg); - - return spacc_ablk_setup(req, spacc_alg->type, 1); -} - -static int spacc_ablk_decrypt(struct skcipher_request *req) -{ - struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(req); - struct skcipher_alg *alg = crypto_skcipher_alg(cipher); - struct spacc_alg *spacc_alg = to_spacc_skcipher(alg); - - return 
spacc_ablk_setup(req, spacc_alg->type, 0); -} - -static inline int spacc_fifo_stat_empty(struct spacc_engine *engine) -{ - return readl(engine->regs + spa_fifo_stat_reg_offset) & - spa_fifo_stat_empty; -} - -static void spacc_process_done(struct spacc_engine *engine) -{ - struct spacc_req *req; - unsigned long flags; - - spin_lock_irqsave(&engine->hw_lock, flags); - - while (!spacc_fifo_stat_empty(engine)) { - req = list_first_entry(&engine->in_progress, struct spacc_req, - list); - list_move_tail(&req->list, &engine->completed); - --engine->in_flight; - - /* pop the status register. */ - writel(~0, engine->regs + spa_stat_pop_reg_offset); - req->result = (readl(engine->regs + spa_status_reg_offset) & - spa_status_res_code_mask) >> spa_status_res_code_offset; - - /* - * convert the spacc error status into the standard posix error - * codes. - */ - if (unlikely(req->result)) { - switch (req->result) { - case spa_status_icv_fail: - req->result = -ebadmsg; - break; - - case spa_status_memory_error: - dev_warn(engine->dev, - "memory error triggered "); - req->result = -efault; - break; - - case spa_status_block_error: - dev_warn(engine->dev, - "block error triggered "); - req->result = -eio; - break; - } - } - } - - tasklet_schedule(&engine->complete); - - spin_unlock_irqrestore(&engine->hw_lock, flags); -} - -static irqreturn_t spacc_spacc_irq(int irq, void *dev) -{ - struct spacc_engine *engine = (struct spacc_engine *)dev; - u32 spacc_irq_stat = readl(engine->regs + spa_irq_stat_reg_offset); - - writel(spacc_irq_stat, engine->regs + spa_irq_stat_reg_offset); - spacc_process_done(engine); - - return irq_handled; -} - -static void spacc_packet_timeout(struct timer_list *t) -{ - struct spacc_engine *engine = from_timer(engine, t, packet_timeout); - - spacc_process_done(engine); -} - -static int spacc_req_submit(struct spacc_req *req) -{ - struct crypto_alg *alg = req->req->tfm->__crt_alg; - - if (crypto_alg_type_aead == (crypto_alg_type_mask & alg->cra_flags)) - return 
spacc_aead_submit(req); - else - return spacc_ablk_submit(req); -} - -static void spacc_spacc_complete(unsigned long data) -{ - struct spacc_engine *engine = (struct spacc_engine *)data; - struct spacc_req *req, *tmp; - unsigned long flags; - list_head(completed); - - spin_lock_irqsave(&engine->hw_lock, flags); - - list_splice_init(&engine->completed, &completed); - spacc_push(engine); - if (engine->in_flight) - mod_timer(&engine->packet_timeout, jiffies + packet_timeout); - - spin_unlock_irqrestore(&engine->hw_lock, flags); - - list_for_each_entry_safe(req, tmp, &completed, list) { - list_del(&req->list); - req->complete(req); - } -} - -#ifdef config_pm -static int spacc_suspend(struct device *dev) -{ - struct spacc_engine *engine = dev_get_drvdata(dev); - - /* - * we only support standby mode. all we have to do is gate the clock to - * the spacc. the hardware will preserve state until we turn it back - * on again. - */ - clk_disable(engine->clk); - - return 0; -} - -static int spacc_resume(struct device *dev) -{ - struct spacc_engine *engine = dev_get_drvdata(dev); - - return clk_enable(engine->clk); -} - -static const struct dev_pm_ops spacc_pm_ops = { - .suspend = spacc_suspend, - .resume = spacc_resume, -}; -#endif /* config_pm */ - -static inline struct spacc_engine *spacc_dev_to_engine(struct device *dev) -{ - return dev ? 
dev_get_drvdata(dev) : null; -} - -static ssize_t spacc_stat_irq_thresh_show(struct device *dev, - struct device_attribute *attr, - char *buf) -{ - struct spacc_engine *engine = spacc_dev_to_engine(dev); - - return snprintf(buf, page_size, "%u ", engine->stat_irq_thresh); -} - -static ssize_t spacc_stat_irq_thresh_store(struct device *dev, - struct device_attribute *attr, - const char *buf, size_t len) -{ - struct spacc_engine *engine = spacc_dev_to_engine(dev); - unsigned long thresh; - - if (kstrtoul(buf, 0, &thresh)) - return -einval; - - thresh = clamp(thresh, 1ul, engine->fifo_sz - 1); - - engine->stat_irq_thresh = thresh; - writel(engine->stat_irq_thresh << spa_irq_ctrl_stat_cnt_offset, - engine->regs + spa_irq_ctrl_reg_offset); - - return len; -} -static device_attr(stat_irq_thresh, 0644, spacc_stat_irq_thresh_show, - spacc_stat_irq_thresh_store); - -static struct spacc_alg ipsec_engine_algs[] = { - { - .ctrl_default = spa_ctrl_ciph_alg_aes | spa_ctrl_ciph_mode_cbc, - .key_offs = 0, - .iv_offs = aes_max_key_size, - .alg = { - .base.cra_name = "cbc(aes)", - .base.cra_driver_name = "cbc-aes-picoxcell", - .base.cra_priority = spacc_crypto_alg_priority, - .base.cra_flags = crypto_alg_kern_driver_only | - crypto_alg_async | - crypto_alg_allocates_memory | - crypto_alg_need_fallback, - .base.cra_blocksize = aes_block_size, - .base.cra_ctxsize = sizeof(struct spacc_ablk_ctx), - .base.cra_module = this_module, - - .setkey = spacc_aes_setkey, - .encrypt = spacc_ablk_encrypt, - .decrypt = spacc_ablk_decrypt, - .min_keysize = aes_min_key_size, - .max_keysize = aes_max_key_size, - .ivsize = aes_block_size, - .init = spacc_ablk_init_tfm, - .exit = spacc_ablk_exit_tfm, - }, - }, - { - .key_offs = 0, - .iv_offs = aes_max_key_size, - .ctrl_default = spa_ctrl_ciph_alg_aes | spa_ctrl_ciph_mode_ecb, - .alg = { - .base.cra_name = "ecb(aes)", - .base.cra_driver_name = "ecb-aes-picoxcell", - .base.cra_priority = spacc_crypto_alg_priority, - .base.cra_flags = 
crypto_alg_kern_driver_only | - crypto_alg_async | - crypto_alg_allocates_memory | - crypto_alg_need_fallback, - .base.cra_blocksize = aes_block_size, - .base.cra_ctxsize = sizeof(struct spacc_ablk_ctx), - .base.cra_module = this_module, - - .setkey = spacc_aes_setkey, - .encrypt = spacc_ablk_encrypt, - .decrypt = spacc_ablk_decrypt, - .min_keysize = aes_min_key_size, - .max_keysize = aes_max_key_size, - .init = spacc_ablk_init_tfm, - .exit = spacc_ablk_exit_tfm, - }, - }, - { - .key_offs = des_block_size, - .iv_offs = 0, - .ctrl_default = spa_ctrl_ciph_alg_des | spa_ctrl_ciph_mode_cbc, - .alg = { - .base.cra_name = "cbc(des)", - .base.cra_driver_name = "cbc-des-picoxcell", - .base.cra_priority = spacc_crypto_alg_priority, - .base.cra_flags = crypto_alg_kern_driver_only | - crypto_alg_async | - crypto_alg_allocates_memory, - .base.cra_blocksize = des_block_size, - .base.cra_ctxsize = sizeof(struct spacc_ablk_ctx), - .base.cra_module = this_module, - - .setkey = spacc_des_setkey, - .encrypt = spacc_ablk_encrypt, - .decrypt = spacc_ablk_decrypt, - .min_keysize = des_key_size, - .max_keysize = des_key_size, - .ivsize = des_block_size, - .init = spacc_ablk_init_tfm, - .exit = spacc_ablk_exit_tfm, - }, - }, - { - .key_offs = des_block_size, - .iv_offs = 0, - .ctrl_default = spa_ctrl_ciph_alg_des | spa_ctrl_ciph_mode_ecb, - .alg = { - .base.cra_name = "ecb(des)", - .base.cra_driver_name = "ecb-des-picoxcell", - .base.cra_priority = spacc_crypto_alg_priority, - .base.cra_flags = crypto_alg_kern_driver_only | - crypto_alg_async | - crypto_alg_allocates_memory, - .base.cra_blocksize = des_block_size, - .base.cra_ctxsize = sizeof(struct spacc_ablk_ctx), - .base.cra_module = this_module, - - .setkey = spacc_des_setkey, - .encrypt = spacc_ablk_encrypt, - .decrypt = spacc_ablk_decrypt, - .min_keysize = des_key_size, - .max_keysize = des_key_size, - .init = spacc_ablk_init_tfm, - .exit = spacc_ablk_exit_tfm, - }, - }, - { - .key_offs = des_block_size, - .iv_offs = 0, - 
.ctrl_default = spa_ctrl_ciph_alg_des | spa_ctrl_ciph_mode_cbc, - .alg = { - .base.cra_name = "cbc(des3_ede)", - .base.cra_driver_name = "cbc-des3-ede-picoxcell", - .base.cra_priority = spacc_crypto_alg_priority, - .base.cra_flags = crypto_alg_async | - crypto_alg_allocates_memory | - crypto_alg_kern_driver_only, - .base.cra_blocksize = des3_ede_block_size, - .base.cra_ctxsize = sizeof(struct spacc_ablk_ctx), - .base.cra_module = this_module, - - .setkey = spacc_des3_setkey, - .encrypt = spacc_ablk_encrypt, - .decrypt = spacc_ablk_decrypt, - .min_keysize = des3_ede_key_size, - .max_keysize = des3_ede_key_size, - .ivsize = des3_ede_block_size, - .init = spacc_ablk_init_tfm, - .exit = spacc_ablk_exit_tfm, - }, - }, - { - .key_offs = des_block_size, - .iv_offs = 0, - .ctrl_default = spa_ctrl_ciph_alg_des | spa_ctrl_ciph_mode_ecb, - .alg = { - .base.cra_name = "ecb(des3_ede)", - .base.cra_driver_name = "ecb-des3-ede-picoxcell", - .base.cra_priority = spacc_crypto_alg_priority, - .base.cra_flags = crypto_alg_async | - crypto_alg_allocates_memory | - crypto_alg_kern_driver_only, - .base.cra_blocksize = des3_ede_block_size, - .base.cra_ctxsize = sizeof(struct spacc_ablk_ctx), - .base.cra_module = this_module, - - .setkey = spacc_des3_setkey, - .encrypt = spacc_ablk_encrypt, - .decrypt = spacc_ablk_decrypt, - .min_keysize = des3_ede_key_size, - .max_keysize = des3_ede_key_size, - .init = spacc_ablk_init_tfm, - .exit = spacc_ablk_exit_tfm, - }, - }, -}; - -static struct spacc_aead ipsec_engine_aeads[] = { - { - .ctrl_default = spa_ctrl_ciph_alg_aes | - spa_ctrl_ciph_mode_cbc | - spa_ctrl_hash_alg_sha | - spa_ctrl_hash_mode_hmac, - .key_offs = 0, - .iv_offs = aes_max_key_size, - .alg = { - .base = { - .cra_name = "authenc(hmac(sha1),cbc(aes))", - .cra_driver_name = "authenc-hmac-sha1-" - "cbc-aes-picoxcell", - .cra_priority = spacc_crypto_alg_priority, - .cra_flags = crypto_alg_async | - crypto_alg_allocates_memory | - crypto_alg_need_fallback | - 
crypto_alg_kern_driver_only, - .cra_blocksize = aes_block_size, - .cra_ctxsize = sizeof(struct spacc_aead_ctx), - .cra_module = this_module, - }, - .setkey = spacc_aead_setkey, - .setauthsize = spacc_aead_setauthsize, - .encrypt = spacc_aead_encrypt, - .decrypt = spacc_aead_decrypt, - .ivsize = aes_block_size, - .maxauthsize = sha1_digest_size, - .init = spacc_aead_cra_init, - .exit = spacc_aead_cra_exit, - }, - }, - { - .ctrl_default = spa_ctrl_ciph_alg_aes | - spa_ctrl_ciph_mode_cbc | - spa_ctrl_hash_alg_sha256 | - spa_ctrl_hash_mode_hmac, - .key_offs = 0, - .iv_offs = aes_max_key_size, - .alg = { - .base = { - .cra_name = "authenc(hmac(sha256),cbc(aes))", - .cra_driver_name = "authenc-hmac-sha256-" - "cbc-aes-picoxcell", - .cra_priority = spacc_crypto_alg_priority, - .cra_flags = crypto_alg_async | - crypto_alg_allocates_memory | - crypto_alg_need_fallback | - crypto_alg_kern_driver_only, - .cra_blocksize = aes_block_size, - .cra_ctxsize = sizeof(struct spacc_aead_ctx), - .cra_module = this_module, - }, - .setkey = spacc_aead_setkey, - .setauthsize = spacc_aead_setauthsize, - .encrypt = spacc_aead_encrypt, - .decrypt = spacc_aead_decrypt, - .ivsize = aes_block_size, - .maxauthsize = sha256_digest_size, - .init = spacc_aead_cra_init, - .exit = spacc_aead_cra_exit, - }, - }, - { - .key_offs = 0, - .iv_offs = aes_max_key_size, - .ctrl_default = spa_ctrl_ciph_alg_aes | - spa_ctrl_ciph_mode_cbc | - spa_ctrl_hash_alg_md5 | - spa_ctrl_hash_mode_hmac, - .alg = { - .base = { - .cra_name = "authenc(hmac(md5),cbc(aes))", - .cra_driver_name = "authenc-hmac-md5-" - "cbc-aes-picoxcell", - .cra_priority = spacc_crypto_alg_priority, - .cra_flags = crypto_alg_async | - crypto_alg_allocates_memory | - crypto_alg_need_fallback | - crypto_alg_kern_driver_only, - .cra_blocksize = aes_block_size, - .cra_ctxsize = sizeof(struct spacc_aead_ctx), - .cra_module = this_module, - }, - .setkey = spacc_aead_setkey, - .setauthsize = spacc_aead_setauthsize, - .encrypt = spacc_aead_encrypt, - 
.decrypt = spacc_aead_decrypt, - .ivsize = aes_block_size, - .maxauthsize = md5_digest_size, - .init = spacc_aead_cra_init, - .exit = spacc_aead_cra_exit, - }, - }, - { - .key_offs = des_block_size, - .iv_offs = 0, - .ctrl_default = spa_ctrl_ciph_alg_des | - spa_ctrl_ciph_mode_cbc | - spa_ctrl_hash_alg_sha | - spa_ctrl_hash_mode_hmac, - .alg = { - .base = { - .cra_name = "authenc(hmac(sha1),cbc(des3_ede))", - .cra_driver_name = "authenc-hmac-sha1-" - "cbc-3des-picoxcell", - .cra_priority = spacc_crypto_alg_priority, - .cra_flags = crypto_alg_async | - crypto_alg_allocates_memory | - crypto_alg_need_fallback | - crypto_alg_kern_driver_only, - .cra_blocksize = des3_ede_block_size, - .cra_ctxsize = sizeof(struct spacc_aead_ctx), - .cra_module = this_module, - }, - .setkey = spacc_aead_setkey, - .setauthsize = spacc_aead_setauthsize, - .encrypt = spacc_aead_encrypt, - .decrypt = spacc_aead_decrypt, - .ivsize = des3_ede_block_size, - .maxauthsize = sha1_digest_size, - .init = spacc_aead_cra_init, - .exit = spacc_aead_cra_exit, - }, - }, - { - .key_offs = des_block_size, - .iv_offs = 0, - .ctrl_default = spa_ctrl_ciph_alg_aes | - spa_ctrl_ciph_mode_cbc | - spa_ctrl_hash_alg_sha256 | - spa_ctrl_hash_mode_hmac, - .alg = { - .base = { - .cra_name = "authenc(hmac(sha256)," - "cbc(des3_ede))", - .cra_driver_name = "authenc-hmac-sha256-" - "cbc-3des-picoxcell", - .cra_priority = spacc_crypto_alg_priority, - .cra_flags = crypto_alg_async | - crypto_alg_allocates_memory | - crypto_alg_need_fallback | - crypto_alg_kern_driver_only, - .cra_blocksize = des3_ede_block_size, - .cra_ctxsize = sizeof(struct spacc_aead_ctx), - .cra_module = this_module, - }, - .setkey = spacc_aead_setkey, - .setauthsize = spacc_aead_setauthsize, - .encrypt = spacc_aead_encrypt, - .decrypt = spacc_aead_decrypt, - .ivsize = des3_ede_block_size, - .maxauthsize = sha256_digest_size, - .init = spacc_aead_cra_init, - .exit = spacc_aead_cra_exit, - }, - }, - { - .key_offs = des_block_size, - .iv_offs = 0, - 
.ctrl_default = spa_ctrl_ciph_alg_des | - spa_ctrl_ciph_mode_cbc | - spa_ctrl_hash_alg_md5 | - spa_ctrl_hash_mode_hmac, - .alg = { - .base = { - .cra_name = "authenc(hmac(md5),cbc(des3_ede))", - .cra_driver_name = "authenc-hmac-md5-" - "cbc-3des-picoxcell", - .cra_priority = spacc_crypto_alg_priority, - .cra_flags = crypto_alg_async | - crypto_alg_allocates_memory | - crypto_alg_need_fallback | - crypto_alg_kern_driver_only, - .cra_blocksize = des3_ede_block_size, - .cra_ctxsize = sizeof(struct spacc_aead_ctx), - .cra_module = this_module, - }, - .setkey = spacc_aead_setkey, - .setauthsize = spacc_aead_setauthsize, - .encrypt = spacc_aead_encrypt, - .decrypt = spacc_aead_decrypt, - .ivsize = des3_ede_block_size, - .maxauthsize = md5_digest_size, - .init = spacc_aead_cra_init, - .exit = spacc_aead_cra_exit, - }, - }, -}; - -static struct spacc_alg l2_engine_algs[] = { - { - .key_offs = 0, - .iv_offs = spacc_crypto_kasumi_f8_key_len, - .ctrl_default = spa_ctrl_ciph_alg_kasumi | - spa_ctrl_ciph_mode_f8, - .alg = { - .base.cra_name = "f8(kasumi)", - .base.cra_driver_name = "f8-kasumi-picoxcell", - .base.cra_priority = spacc_crypto_alg_priority, - .base.cra_flags = crypto_alg_async | - crypto_alg_allocates_memory | - crypto_alg_kern_driver_only, - .base.cra_blocksize = 8, - .base.cra_ctxsize = sizeof(struct spacc_ablk_ctx), - .base.cra_module = this_module, - - .setkey = spacc_kasumi_f8_setkey, - .encrypt = spacc_ablk_encrypt, - .decrypt = spacc_ablk_decrypt, - .min_keysize = 16, - .max_keysize = 16, - .ivsize = 8, - .init = spacc_ablk_init_tfm, - .exit = spacc_ablk_exit_tfm, - }, - }, -}; - -#ifdef config_of -static const struct of_device_id spacc_of_id_table[] = { - { .compatible = "picochip,spacc-ipsec" }, - { .compatible = "picochip,spacc-l2" }, - {} -}; -module_device_table(of, spacc_of_id_table); -#endif /* config_of */ - -static void spacc_tasklet_kill(void *data) -{ - tasklet_kill(data); -} - -static int spacc_probe(struct platform_device *pdev) -{ - int i, err, 
ret; - struct resource *irq; - struct device_node *np = pdev->dev.of_node; - struct spacc_engine *engine = devm_kzalloc(&pdev->dev, sizeof(*engine), - gfp_kernel); - if (!engine) - return -enomem; - - if (of_device_is_compatible(np, "picochip,spacc-ipsec")) { - engine->max_ctxs = spacc_crypto_ipsec_max_ctxs; - engine->cipher_pg_sz = spacc_crypto_ipsec_cipher_pg_sz; - engine->hash_pg_sz = spacc_crypto_ipsec_hash_pg_sz; - engine->fifo_sz = spacc_crypto_ipsec_fifo_sz; - engine->algs = ipsec_engine_algs; - engine->num_algs = array_size(ipsec_engine_algs); - engine->aeads = ipsec_engine_aeads; - engine->num_aeads = array_size(ipsec_engine_aeads); - } else if (of_device_is_compatible(np, "picochip,spacc-l2")) { - engine->max_ctxs = spacc_crypto_l2_max_ctxs; - engine->cipher_pg_sz = spacc_crypto_l2_cipher_pg_sz; - engine->hash_pg_sz = spacc_crypto_l2_hash_pg_sz; - engine->fifo_sz = spacc_crypto_l2_fifo_sz; - engine->algs = l2_engine_algs; - engine->num_algs = array_size(l2_engine_algs); - } else { - return -einval; - } - - engine->name = dev_name(&pdev->dev); - - engine->regs = devm_platform_ioremap_resource(pdev, 0); - if (is_err(engine->regs)) - return ptr_err(engine->regs); - - irq = platform_get_resource(pdev, ioresource_irq, 0); - if (!irq) { - dev_err(&pdev->dev, "no memory/irq resource for engine "); - return -enxio; - } - - tasklet_init(&engine->complete, spacc_spacc_complete, - (unsigned long)engine); - - ret = devm_add_action(&pdev->dev, spacc_tasklet_kill, - &engine->complete); - if (ret) - return ret; - - if (devm_request_irq(&pdev->dev, irq->start, spacc_spacc_irq, 0, - engine->name, engine)) { - dev_err(engine->dev, "failed to request irq "); - return -ebusy; - } - - engine->dev = &pdev->dev; - engine->cipher_ctx_base = engine->regs + spa_ciph_key_base_reg_offset; - engine->hash_key_base = engine->regs + spa_hash_key_base_reg_offset; - - engine->req_pool = dmam_pool_create(engine->name, engine->dev, - max_ddt_len * sizeof(struct spacc_ddt), 8, sz_64k); - if 
(!engine->req_pool) - return -enomem; - - spin_lock_init(&engine->hw_lock); - - engine->clk = clk_get(&pdev->dev, "ref"); - if (is_err(engine->clk)) { - dev_info(&pdev->dev, "clk unavailable "); - return ptr_err(engine->clk); - } - - if (clk_prepare_enable(engine->clk)) { - dev_info(&pdev->dev, "unable to prepare/enable clk "); - ret = -eio; - goto err_clk_put; - } - - /* - * use an irq threshold of 50% as a default. this seems to be a - * reasonable trade off of latency against throughput but can be - * changed at runtime. - */ - engine->stat_irq_thresh = (engine->fifo_sz / 2); - - ret = device_create_file(&pdev->dev, &dev_attr_stat_irq_thresh); - if (ret) - goto err_clk_disable; - - /* - * configure the interrupts. we only use the stat_cnt interrupt as we - * only submit a new packet for processing when we complete another in - * the queue. this minimizes time spent in the interrupt handler. - */ - writel(engine->stat_irq_thresh << spa_irq_ctrl_stat_cnt_offset, - engine->regs + spa_irq_ctrl_reg_offset); - writel(spa_irq_en_stat_en | spa_irq_en_glbl_en, - engine->regs + spa_irq_en_reg_offset); - - timer_setup(&engine->packet_timeout, spacc_packet_timeout, 0); - - init_list_head(&engine->pending); - init_list_head(&engine->completed); - init_list_head(&engine->in_progress); - engine->in_flight = 0; - - platform_set_drvdata(pdev, engine); - - ret = -einval; - init_list_head(&engine->registered_algs); - for (i = 0; i < engine->num_algs; ++i) { - engine->algs[i].engine = engine; - err = crypto_register_skcipher(&engine->algs[i].alg); - if (!err) { - list_add_tail(&engine->algs[i].entry, - &engine->registered_algs); - ret = 0; - } - if (err) - dev_err(engine->dev, "failed to register alg "%s" ", - engine->algs[i].alg.base.cra_name); - else - dev_dbg(engine->dev, "registered alg "%s" ", - engine->algs[i].alg.base.cra_name); - } - - init_list_head(&engine->registered_aeads); - for (i = 0; i < engine->num_aeads; ++i) { - engine->aeads[i].engine = engine; - err = 
crypto_register_aead(&engine->aeads[i].alg); - if (!err) { - list_add_tail(&engine->aeads[i].entry, - &engine->registered_aeads); - ret = 0; - } - if (err) - dev_err(engine->dev, "failed to register alg "%s" ", - engine->aeads[i].alg.base.cra_name); - else - dev_dbg(engine->dev, "registered alg "%s" ", - engine->aeads[i].alg.base.cra_name); - } - - if (!ret) - return 0; - - del_timer_sync(&engine->packet_timeout); - device_remove_file(&pdev->dev, &dev_attr_stat_irq_thresh); -err_clk_disable: - clk_disable_unprepare(engine->clk); -err_clk_put: - clk_put(engine->clk); - - return ret; -} - -static int spacc_remove(struct platform_device *pdev) -{ - struct spacc_aead *aead, *an; - struct spacc_alg *alg, *next; - struct spacc_engine *engine = platform_get_drvdata(pdev); - - del_timer_sync(&engine->packet_timeout); - device_remove_file(&pdev->dev, &dev_attr_stat_irq_thresh); - - list_for_each_entry_safe(aead, an, &engine->registered_aeads, entry) { - list_del(&aead->entry); - crypto_unregister_aead(&aead->alg); - } - - list_for_each_entry_safe(alg, next, &engine->registered_algs, entry) { - list_del(&alg->entry); - crypto_unregister_skcipher(&alg->alg); - } - - clk_disable_unprepare(engine->clk); - clk_put(engine->clk); - - return 0; -} - -static struct platform_driver spacc_driver = { - .probe = spacc_probe, - .remove = spacc_remove, - .driver = { - .name = "picochip,spacc", -#ifdef config_pm - .pm = &spacc_pm_ops, -#endif /* config_pm */ - .of_match_table = of_match_ptr(spacc_of_id_table), - }, -}; - -module_platform_driver(spacc_driver); - -module_license("gpl"); -module_author("jamie iles"); diff --git a/drivers/crypto/picoxcell_crypto_regs.h b/drivers/crypto/picoxcell_crypto_regs.h --- a/drivers/crypto/picoxcell_crypto_regs.h +++ /dev/null -/* spdx-license-identifier: gpl-2.0-or-later */ -/* - * copyright (c) 2010 picochip ltd., jamie iles - */ -#ifndef __picoxcell_crypto_regs_h__ -#define __picoxcell_crypto_regs_h__ - -#define spa_status_ok 0 -#define 
spa_status_icv_fail 1 -#define spa_status_memory_error 2 -#define spa_status_block_error 3 - -#define spa_irq_ctrl_stat_cnt_offset 16 -#define spa_irq_stat_stat_mask (1 << 4) -#define spa_fifo_stat_stat_offset 16 -#define spa_fifo_stat_stat_cnt_mask (0x3f << spa_fifo_stat_stat_offset) -#define spa_status_res_code_offset 24 -#define spa_status_res_code_mask (0x3 << spa_status_res_code_offset) -#define spa_key_sz_ctx_index_offset 8 -#define spa_key_sz_cipher_offset 31 - -#define spa_irq_en_reg_offset 0x00000000 -#define spa_irq_stat_reg_offset 0x00000004 -#define spa_irq_ctrl_reg_offset 0x00000008 -#define spa_fifo_stat_reg_offset 0x0000000c -#define spa_sdma_brst_sz_reg_offset 0x00000010 -#define spa_src_ptr_reg_offset 0x00000020 -#define spa_dst_ptr_reg_offset 0x00000024 -#define spa_offset_reg_offset 0x00000028 -#define spa_aad_len_reg_offset 0x0000002c -#define spa_proc_len_reg_offset 0x00000030 -#define spa_icv_len_reg_offset 0x00000034 -#define spa_icv_offset_reg_offset 0x00000038 -#define spa_sw_ctrl_reg_offset 0x0000003c -#define spa_ctrl_reg_offset 0x00000040 -#define spa_aux_info_reg_offset 0x0000004c -#define spa_stat_pop_reg_offset 0x00000050 -#define spa_status_reg_offset 0x00000054 -#define spa_key_sz_reg_offset 0x00000100 -#define spa_ciph_key_base_reg_offset 0x00004000 -#define spa_hash_key_base_reg_offset 0x00008000 -#define spa_rc4_ctx_base_reg_offset 0x00020000 - -#define spa_irq_en_reg_reset 0x00000000 -#define spa_irq_ctrl_reg_reset 0x00000000 -#define spa_fifo_stat_reg_reset 0x00000000 -#define spa_sdma_brst_sz_reg_reset 0x00000000 -#define spa_src_ptr_reg_reset 0x00000000 -#define spa_dst_ptr_reg_reset 0x00000000 -#define spa_offset_reg_reset 0x00000000 -#define spa_aad_len_reg_reset 0x00000000 -#define spa_proc_len_reg_reset 0x00000000 -#define spa_icv_len_reg_reset 0x00000000 -#define spa_icv_offset_reg_reset 0x00000000 -#define spa_sw_ctrl_reg_reset 0x00000000 -#define spa_ctrl_reg_reset 0x00000000 -#define spa_aux_info_reg_reset 0x00000000 
-#define spa_stat_pop_reg_reset 0x00000000 -#define spa_status_reg_reset 0x00000000 -#define spa_key_sz_reg_reset 0x00000000 - -#define spa_ctrl_hash_alg_idx 4 -#define spa_ctrl_ciph_mode_idx 8 -#define spa_ctrl_hash_mode_idx 12 -#define spa_ctrl_ctx_idx 16 -#define spa_ctrl_encrypt_idx 24 -#define spa_ctrl_aad_copy 25 -#define spa_ctrl_icv_pt 26 -#define spa_ctrl_icv_enc 27 -#define spa_ctrl_icv_append 28 -#define spa_ctrl_key_exp 29 - -#define spa_key_sz_cxt_idx 8 -#define spa_key_sz_cipher_idx 31 - -#define spa_irq_en_cmd0_en (1 << 0) -#define spa_irq_en_stat_en (1 << 4) -#define spa_irq_en_glbl_en (1 << 31) - -#define spa_ctrl_ciph_alg_null 0x00 -#define spa_ctrl_ciph_alg_des 0x01 -#define spa_ctrl_ciph_alg_aes 0x02 -#define spa_ctrl_ciph_alg_rc4 0x03 -#define spa_ctrl_ciph_alg_multi2 0x04 -#define spa_ctrl_ciph_alg_kasumi 0x05 - -#define spa_ctrl_hash_alg_null (0x00 << spa_ctrl_hash_alg_idx) -#define spa_ctrl_hash_alg_md5 (0x01 << spa_ctrl_hash_alg_idx) -#define spa_ctrl_hash_alg_sha (0x02 << spa_ctrl_hash_alg_idx) -#define spa_ctrl_hash_alg_sha224 (0x03 << spa_ctrl_hash_alg_idx) -#define spa_ctrl_hash_alg_sha256 (0x04 << spa_ctrl_hash_alg_idx) -#define spa_ctrl_hash_alg_sha384 (0x05 << spa_ctrl_hash_alg_idx) -#define spa_ctrl_hash_alg_sha512 (0x06 << spa_ctrl_hash_alg_idx) -#define spa_ctrl_hash_alg_aesmac (0x07 << spa_ctrl_hash_alg_idx) -#define spa_ctrl_hash_alg_aescmac (0x08 << spa_ctrl_hash_alg_idx) -#define spa_ctrl_hash_alg_kasf9 (0x09 << spa_ctrl_hash_alg_idx) - -#define spa_ctrl_ciph_mode_null (0x00 << spa_ctrl_ciph_mode_idx) -#define spa_ctrl_ciph_mode_ecb (0x00 << spa_ctrl_ciph_mode_idx) -#define spa_ctrl_ciph_mode_cbc (0x01 << spa_ctrl_ciph_mode_idx) -#define spa_ctrl_ciph_mode_ctr (0x02 << spa_ctrl_ciph_mode_idx) -#define spa_ctrl_ciph_mode_ccm (0x03 << spa_ctrl_ciph_mode_idx) -#define spa_ctrl_ciph_mode_gcm (0x05 << spa_ctrl_ciph_mode_idx) -#define spa_ctrl_ciph_mode_ofb (0x07 << spa_ctrl_ciph_mode_idx) -#define spa_ctrl_ciph_mode_cfb (0x08 << 
spa_ctrl_ciph_mode_idx) -#define spa_ctrl_ciph_mode_f8 (0x09 << spa_ctrl_ciph_mode_idx) - -#define spa_ctrl_hash_mode_raw (0x00 << spa_ctrl_hash_mode_idx) -#define spa_ctrl_hash_mode_sslmac (0x01 << spa_ctrl_hash_mode_idx) -#define spa_ctrl_hash_mode_hmac (0x02 << spa_ctrl_hash_mode_idx) - -#define spa_fifo_stat_empty (1 << 31) -#define spa_fifo_cmd_full (1 << 7) - -#endif /* __picoxcell_crypto_regs_h__ */
|
Cryptography hardware acceleration
|
fecff3b931a52c8d5263fb1537161f0214acb44a
|
rob herring, ard biesheuvel <ardb@kernel.org>
|
drivers
|
crypto
| |
crypto: mediatek - remove obsolete driver
|
the crypto mediatek driver has been replaced by the inside-secure driver now. remove this driver to avoid having duplicate drivers.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
remove obsolete driver
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['mediatek']
|
['h', 'kconfig', 'c', 'makefile']
| 8
| 0
| 3,650
|
--- diff --git a/drivers/crypto/kconfig b/drivers/crypto/kconfig --- a/drivers/crypto/kconfig +++ b/drivers/crypto/kconfig -config crypto_dev_mediatek - tristate "mediatek's eip97 cryptographic engine driver" - depends on (arm && arch_mediatek) || compile_test - select crypto_lib_aes - select crypto_aead - select crypto_skcipher - select crypto_sha1 - select crypto_sha256 - select crypto_sha512 - select crypto_hmac - help - this driver allows you to utilize the hardware crypto accelerator - eip97 which can be found on the mt7623 mt2701, mt8521p, etc .... - select this if you want to use it for aes/sha1/sha2 algorithms. - diff --git a/drivers/crypto/makefile b/drivers/crypto/makefile --- a/drivers/crypto/makefile +++ b/drivers/crypto/makefile -obj-$(config_crypto_dev_mediatek) += mediatek/ diff --git a/drivers/crypto/mediatek/makefile b/drivers/crypto/mediatek/makefile --- a/drivers/crypto/mediatek/makefile +++ /dev/null -# spdx-license-identifier: gpl-2.0-only -obj-$(config_crypto_dev_mediatek) += mtk-crypto.o -mtk-crypto-objs:= mtk-platform.o mtk-aes.o mtk-sha.o diff --git a/drivers/crypto/mediatek/mtk-aes.c b/drivers/crypto/mediatek/mtk-aes.c --- a/drivers/crypto/mediatek/mtk-aes.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-only -/* - * cryptographic api. - * - * driver for eip97 aes acceleration. - * - * copyright (c) 2016 ryder lee <ryder.lee@mediatek.com> - * - * some ideas are from atmel-aes.c drivers. 
- */ - -#include <crypto/aes.h> -#include <crypto/gcm.h> -#include <crypto/internal/skcipher.h> -#include "mtk-platform.h" - -#define aes_queue_size 512 -#define aes_buf_order 2 -#define aes_buf_size ((page_size << aes_buf_order) \ - & ~(aes_block_size - 1)) -#define aes_max_state_buf_size size_in_words(aes_keysize_256 + \ - aes_block_size * 2) -#define aes_max_ct_size 6 - -#define aes_ct_ctrl_hdr cpu_to_le32(0x00220000) - -/* aes-cbc/ecb/ctr/ofb/cfb command token */ -#define aes_cmd0 cpu_to_le32(0x05000000) -#define aes_cmd1 cpu_to_le32(0x2d060000) -#define aes_cmd2 cpu_to_le32(0xe4a63806) -/* aes-gcm command token */ -#define aes_gcm_cmd0 cpu_to_le32(0x0b000000) -#define aes_gcm_cmd1 cpu_to_le32(0xa0800000) -#define aes_gcm_cmd2 cpu_to_le32(0x25000010) -#define aes_gcm_cmd3 cpu_to_le32(0x0f020000) -#define aes_gcm_cmd4 cpu_to_le32(0x21e60000) -#define aes_gcm_cmd5 cpu_to_le32(0x40e60000) -#define aes_gcm_cmd6 cpu_to_le32(0xd0070000) - -/* aes transform information word 0 fields */ -#define aes_tfm_basic_out cpu_to_le32(0x4 << 0) -#define aes_tfm_basic_in cpu_to_le32(0x5 << 0) -#define aes_tfm_gcm_out cpu_to_le32(0x6 << 0) -#define aes_tfm_gcm_in cpu_to_le32(0xf << 0) -#define aes_tfm_size(x) cpu_to_le32((x) << 8) -#define aes_tfm_128bits cpu_to_le32(0xb << 16) -#define aes_tfm_192bits cpu_to_le32(0xd << 16) -#define aes_tfm_256bits cpu_to_le32(0xf << 16) -#define aes_tfm_ghash_digest cpu_to_le32(0x2 << 21) -#define aes_tfm_ghash cpu_to_le32(0x4 << 23) -/* aes transform information word 1 fields */ -#define aes_tfm_ecb cpu_to_le32(0x0 << 0) -#define aes_tfm_cbc cpu_to_le32(0x1 << 0) -#define aes_tfm_ofb cpu_to_le32(0x4 << 0) -#define aes_tfm_cfb128 cpu_to_le32(0x5 << 0) -#define aes_tfm_ctr_init cpu_to_le32(0x2 << 0) /* init counter to 1 */ -#define aes_tfm_ctr_load cpu_to_le32(0x6 << 0) /* load/reuse counter */ -#define aes_tfm_3iv cpu_to_le32(0x7 << 5) /* using iv 0-2 */ -#define aes_tfm_full_iv cpu_to_le32(0xf << 5) /* using iv 0-3 */ -#define 
aes_tfm_iv_ctr_mode cpu_to_le32(0x1 << 10) -#define aes_tfm_enc_hash cpu_to_le32(0x1 << 17) - -/* aes flags */ -#define aes_flags_cipher_msk genmask(4, 0) -#define aes_flags_ecb bit(0) -#define aes_flags_cbc bit(1) -#define aes_flags_ctr bit(2) -#define aes_flags_ofb bit(3) -#define aes_flags_cfb128 bit(4) -#define aes_flags_gcm bit(5) -#define aes_flags_encrypt bit(6) -#define aes_flags_busy bit(7) - -#define aes_auth_tag_err cpu_to_le32(bit(26)) - -/** - * mtk_aes_info - hardware information of aes - * @cmd: command token, hardware instruction - * @tfm: transform state of cipher algorithm. - * @state: contains keys and initial vectors. - * - * memory layout of gcm buffer: - * /-----------\ - * | aes key | 128/196/256 bits - * |-----------| - * | hash key | a string 128 zero bits encrypted using the block cipher - * |-----------| - * | ivs | 4 * 4 bytes - * \-----------/ - * - * the engine requires all these info to do: - * - commands decoding and control of the engine's data path. - * - coordinating hardware data fetch and store operations. - * - result token construction and output. 
- */ -struct mtk_aes_info { - __le32 cmd[aes_max_ct_size]; - __le32 tfm[2]; - __le32 state[aes_max_state_buf_size]; -}; - -struct mtk_aes_reqctx { - u64 mode; -}; - -struct mtk_aes_base_ctx { - struct mtk_cryp *cryp; - u32 keylen; - __le32 key[12]; - __le32 keymode; - - mtk_aes_fn start; - - struct mtk_aes_info info; - dma_addr_t ct_dma; - dma_addr_t tfm_dma; - - __le32 ct_hdr; - u32 ct_size; -}; - -struct mtk_aes_ctx { - struct mtk_aes_base_ctx base; -}; - -struct mtk_aes_ctr_ctx { - struct mtk_aes_base_ctx base; - - __be32 iv[aes_block_size / sizeof(u32)]; - size_t offset; - struct scatterlist src[2]; - struct scatterlist dst[2]; -}; - -struct mtk_aes_gcm_ctx { - struct mtk_aes_base_ctx base; - - u32 authsize; - size_t textlen; -}; - -struct mtk_aes_drv { - struct list_head dev_list; - /* device list lock */ - spinlock_t lock; -}; - -static struct mtk_aes_drv mtk_aes = { - .dev_list = list_head_init(mtk_aes.dev_list), - .lock = __spin_lock_unlocked(mtk_aes.lock), -}; - -static inline u32 mtk_aes_read(struct mtk_cryp *cryp, u32 offset) -{ - return readl_relaxed(cryp->base + offset); -} - -static inline void mtk_aes_write(struct mtk_cryp *cryp, - u32 offset, u32 value) -{ - writel_relaxed(value, cryp->base + offset); -} - -static struct mtk_cryp *mtk_aes_find_dev(struct mtk_aes_base_ctx *ctx) -{ - struct mtk_cryp *cryp = null; - struct mtk_cryp *tmp; - - spin_lock_bh(&mtk_aes.lock); - if (!ctx->cryp) { - list_for_each_entry(tmp, &mtk_aes.dev_list, aes_list) { - cryp = tmp; - break; - } - ctx->cryp = cryp; - } else { - cryp = ctx->cryp; - } - spin_unlock_bh(&mtk_aes.lock); - - return cryp; -} - -static inline size_t mtk_aes_padlen(size_t len) -{ - len &= aes_block_size - 1; - return len ? 
aes_block_size - len : 0; -} - -static bool mtk_aes_check_aligned(struct scatterlist *sg, size_t len, - struct mtk_aes_dma *dma) -{ - int nents; - - if (!is_aligned(len, aes_block_size)) - return false; - - for (nents = 0; sg; sg = sg_next(sg), ++nents) { - if (!is_aligned(sg->offset, sizeof(u32))) - return false; - - if (len <= sg->length) { - if (!is_aligned(len, aes_block_size)) - return false; - - dma->nents = nents + 1; - dma->remainder = sg->length - len; - sg->length = len; - return true; - } - - if (!is_aligned(sg->length, aes_block_size)) - return false; - - len -= sg->length; - } - - return false; -} - -static inline void mtk_aes_set_mode(struct mtk_aes_rec *aes, - const struct mtk_aes_reqctx *rctx) -{ - /* clear all but persistent flags and set request flags. */ - aes->flags = (aes->flags & aes_flags_busy) | rctx->mode; -} - -static inline void mtk_aes_restore_sg(const struct mtk_aes_dma *dma) -{ - struct scatterlist *sg = dma->sg; - int nents = dma->nents; - - if (!dma->remainder) - return; - - while (--nents > 0 && sg) - sg = sg_next(sg); - - if (!sg) - return; - - sg->length += dma->remainder; -} - -static inline int mtk_aes_complete(struct mtk_cryp *cryp, - struct mtk_aes_rec *aes, - int err) -{ - aes->flags &= ~aes_flags_busy; - aes->areq->complete(aes->areq, err); - /* handle new request */ - tasklet_schedule(&aes->queue_task); - return err; -} - -/* - * write descriptors for processing. this will configure the engine, load - * the transform information and then start the packet processing. 
- */ -static int mtk_aes_xmit(struct mtk_cryp *cryp, struct mtk_aes_rec *aes) -{ - struct mtk_ring *ring = cryp->ring[aes->id]; - struct mtk_desc *cmd = null, *res = null; - struct scatterlist *ssg = aes->src.sg, *dsg = aes->dst.sg; - u32 slen = aes->src.sg_len, dlen = aes->dst.sg_len; - int nents; - - /* write command descriptors */ - for (nents = 0; nents < slen; ++nents, ssg = sg_next(ssg)) { - cmd = ring->cmd_next; - cmd->hdr = mtk_desc_buf_len(ssg->length); - cmd->buf = cpu_to_le32(sg_dma_address(ssg)); - - if (nents == 0) { - cmd->hdr |= mtk_desc_first | - mtk_desc_ct_len(aes->ctx->ct_size); - cmd->ct = cpu_to_le32(aes->ctx->ct_dma); - cmd->ct_hdr = aes->ctx->ct_hdr; - cmd->tfm = cpu_to_le32(aes->ctx->tfm_dma); - } - - /* shift ring buffer and check boundary */ - if (++ring->cmd_next == ring->cmd_base + mtk_desc_num) - ring->cmd_next = ring->cmd_base; - } - cmd->hdr |= mtk_desc_last; - - /* prepare result descriptors */ - for (nents = 0; nents < dlen; ++nents, dsg = sg_next(dsg)) { - res = ring->res_next; - res->hdr = mtk_desc_buf_len(dsg->length); - res->buf = cpu_to_le32(sg_dma_address(dsg)); - - if (nents == 0) - res->hdr |= mtk_desc_first; - - /* shift ring buffer and check boundary */ - if (++ring->res_next == ring->res_base + mtk_desc_num) - ring->res_next = ring->res_base; - } - res->hdr |= mtk_desc_last; - - /* pointer to current result descriptor */ - ring->res_prev = res; - - /* prepare enough space for authenticated tag */ - if (aes->flags & aes_flags_gcm) - le32_add_cpu(&res->hdr, aes_block_size); - - /* - * make sure that all changes to the dma ring are done before we - * start engine. 
- */ - wmb(); - /* start dma transfer */ - mtk_aes_write(cryp, rdr_prep_count(aes->id), mtk_desc_cnt(dlen)); - mtk_aes_write(cryp, cdr_prep_count(aes->id), mtk_desc_cnt(slen)); - - return -einprogress; -} - -static void mtk_aes_unmap(struct mtk_cryp *cryp, struct mtk_aes_rec *aes) -{ - struct mtk_aes_base_ctx *ctx = aes->ctx; - - dma_unmap_single(cryp->dev, ctx->ct_dma, sizeof(ctx->info), - dma_to_device); - - if (aes->src.sg == aes->dst.sg) { - dma_unmap_sg(cryp->dev, aes->src.sg, aes->src.nents, - dma_bidirectional); - - if (aes->src.sg != &aes->aligned_sg) - mtk_aes_restore_sg(&aes->src); - } else { - dma_unmap_sg(cryp->dev, aes->dst.sg, aes->dst.nents, - dma_from_device); - - if (aes->dst.sg != &aes->aligned_sg) - mtk_aes_restore_sg(&aes->dst); - - dma_unmap_sg(cryp->dev, aes->src.sg, aes->src.nents, - dma_to_device); - - if (aes->src.sg != &aes->aligned_sg) - mtk_aes_restore_sg(&aes->src); - } - - if (aes->dst.sg == &aes->aligned_sg) - sg_copy_from_buffer(aes->real_dst, sg_nents(aes->real_dst), - aes->buf, aes->total); -} - -static int mtk_aes_map(struct mtk_cryp *cryp, struct mtk_aes_rec *aes) -{ - struct mtk_aes_base_ctx *ctx = aes->ctx; - struct mtk_aes_info *info = &ctx->info; - - ctx->ct_dma = dma_map_single(cryp->dev, info, sizeof(*info), - dma_to_device); - if (unlikely(dma_mapping_error(cryp->dev, ctx->ct_dma))) - goto exit; - - ctx->tfm_dma = ctx->ct_dma + sizeof(info->cmd); - - if (aes->src.sg == aes->dst.sg) { - aes->src.sg_len = dma_map_sg(cryp->dev, aes->src.sg, - aes->src.nents, - dma_bidirectional); - aes->dst.sg_len = aes->src.sg_len; - if (unlikely(!aes->src.sg_len)) - goto sg_map_err; - } else { - aes->src.sg_len = dma_map_sg(cryp->dev, aes->src.sg, - aes->src.nents, dma_to_device); - if (unlikely(!aes->src.sg_len)) - goto sg_map_err; - - aes->dst.sg_len = dma_map_sg(cryp->dev, aes->dst.sg, - aes->dst.nents, dma_from_device); - if (unlikely(!aes->dst.sg_len)) { - dma_unmap_sg(cryp->dev, aes->src.sg, aes->src.nents, - dma_to_device); - goto 
sg_map_err; - } - } - - return mtk_aes_xmit(cryp, aes); - -sg_map_err: - dma_unmap_single(cryp->dev, ctx->ct_dma, sizeof(*info), dma_to_device); -exit: - return mtk_aes_complete(cryp, aes, -einval); -} - -/* initialize transform information of cbc/ecb/ctr/ofb/cfb mode */ -static void mtk_aes_info_init(struct mtk_cryp *cryp, struct mtk_aes_rec *aes, - size_t len) -{ - struct skcipher_request *req = skcipher_request_cast(aes->areq); - struct mtk_aes_base_ctx *ctx = aes->ctx; - struct mtk_aes_info *info = &ctx->info; - u32 cnt = 0; - - ctx->ct_hdr = aes_ct_ctrl_hdr | cpu_to_le32(len); - info->cmd[cnt++] = aes_cmd0 | cpu_to_le32(len); - info->cmd[cnt++] = aes_cmd1; - - info->tfm[0] = aes_tfm_size(ctx->keylen) | ctx->keymode; - if (aes->flags & aes_flags_encrypt) - info->tfm[0] |= aes_tfm_basic_out; - else - info->tfm[0] |= aes_tfm_basic_in; - - switch (aes->flags & aes_flags_cipher_msk) { - case aes_flags_cbc: - info->tfm[1] = aes_tfm_cbc; - break; - case aes_flags_ecb: - info->tfm[1] = aes_tfm_ecb; - goto ecb; - case aes_flags_ctr: - info->tfm[1] = aes_tfm_ctr_load; - goto ctr; - case aes_flags_ofb: - info->tfm[1] = aes_tfm_ofb; - break; - case aes_flags_cfb128: - info->tfm[1] = aes_tfm_cfb128; - break; - default: - /* should not happen... 
*/ - return; - } - - memcpy(info->state + ctx->keylen, req->iv, aes_block_size); -ctr: - le32_add_cpu(&info->tfm[0], - le32_to_cpu(aes_tfm_size(size_in_words(aes_block_size)))); - info->tfm[1] |= aes_tfm_full_iv; - info->cmd[cnt++] = aes_cmd2; -ecb: - ctx->ct_size = cnt; -} - -static int mtk_aes_dma(struct mtk_cryp *cryp, struct mtk_aes_rec *aes, - struct scatterlist *src, struct scatterlist *dst, - size_t len) -{ - size_t padlen = 0; - bool src_aligned, dst_aligned; - - aes->total = len; - aes->src.sg = src; - aes->dst.sg = dst; - aes->real_dst = dst; - - src_aligned = mtk_aes_check_aligned(src, len, &aes->src); - if (src == dst) - dst_aligned = src_aligned; - else - dst_aligned = mtk_aes_check_aligned(dst, len, &aes->dst); - - if (!src_aligned || !dst_aligned) { - padlen = mtk_aes_padlen(len); - - if (len + padlen > aes_buf_size) - return mtk_aes_complete(cryp, aes, -enomem); - - if (!src_aligned) { - sg_copy_to_buffer(src, sg_nents(src), aes->buf, len); - aes->src.sg = &aes->aligned_sg; - aes->src.nents = 1; - aes->src.remainder = 0; - } - - if (!dst_aligned) { - aes->dst.sg = &aes->aligned_sg; - aes->dst.nents = 1; - aes->dst.remainder = 0; - } - - sg_init_table(&aes->aligned_sg, 1); - sg_set_buf(&aes->aligned_sg, aes->buf, len + padlen); - } - - mtk_aes_info_init(cryp, aes, len + padlen); - - return mtk_aes_map(cryp, aes); -} - -static int mtk_aes_handle_queue(struct mtk_cryp *cryp, u8 id, - struct crypto_async_request *new_areq) -{ - struct mtk_aes_rec *aes = cryp->aes[id]; - struct crypto_async_request *areq, *backlog; - struct mtk_aes_base_ctx *ctx; - unsigned long flags; - int ret = 0; - - spin_lock_irqsave(&aes->lock, flags); - if (new_areq) - ret = crypto_enqueue_request(&aes->queue, new_areq); - if (aes->flags & aes_flags_busy) { - spin_unlock_irqrestore(&aes->lock, flags); - return ret; - } - backlog = crypto_get_backlog(&aes->queue); - areq = crypto_dequeue_request(&aes->queue); - if (areq) - aes->flags |= aes_flags_busy; - 
spin_unlock_irqrestore(&aes->lock, flags); - - if (!areq) - return ret; - - if (backlog) - backlog->complete(backlog, -einprogress); - - ctx = crypto_tfm_ctx(areq->tfm); - /* write key into state buffer */ - memcpy(ctx->info.state, ctx->key, sizeof(ctx->key)); - - aes->areq = areq; - aes->ctx = ctx; - - return ctx->start(cryp, aes); -} - -static int mtk_aes_transfer_complete(struct mtk_cryp *cryp, - struct mtk_aes_rec *aes) -{ - return mtk_aes_complete(cryp, aes, 0); -} - -static int mtk_aes_start(struct mtk_cryp *cryp, struct mtk_aes_rec *aes) -{ - struct skcipher_request *req = skcipher_request_cast(aes->areq); - struct mtk_aes_reqctx *rctx = skcipher_request_ctx(req); - - mtk_aes_set_mode(aes, rctx); - aes->resume = mtk_aes_transfer_complete; - - return mtk_aes_dma(cryp, aes, req->src, req->dst, req->cryptlen); -} - -static inline struct mtk_aes_ctr_ctx * -mtk_aes_ctr_ctx_cast(struct mtk_aes_base_ctx *ctx) -{ - return container_of(ctx, struct mtk_aes_ctr_ctx, base); -} - -static int mtk_aes_ctr_transfer(struct mtk_cryp *cryp, struct mtk_aes_rec *aes) -{ - struct mtk_aes_base_ctx *ctx = aes->ctx; - struct mtk_aes_ctr_ctx *cctx = mtk_aes_ctr_ctx_cast(ctx); - struct skcipher_request *req = skcipher_request_cast(aes->areq); - struct scatterlist *src, *dst; - u32 start, end, ctr, blocks; - size_t datalen; - bool fragmented = false; - - /* check for transfer completion. */ - cctx->offset += aes->total; - if (cctx->offset >= req->cryptlen) - return mtk_aes_transfer_complete(cryp, aes); - - /* compute data length. */ - datalen = req->cryptlen - cctx->offset; - blocks = div_round_up(datalen, aes_block_size); - ctr = be32_to_cpu(cctx->iv[3]); - - /* check 32bit counter overflow. */ - start = ctr; - end = start + blocks - 1; - if (end < start) { - ctr = 0xffffffff; - datalen = aes_block_size * -start; - fragmented = true; - } - - /* jump to offset. */ - src = scatterwalk_ffwd(cctx->src, req->src, cctx->offset); - dst = ((req->src == req->dst) ? 
src : - scatterwalk_ffwd(cctx->dst, req->dst, cctx->offset)); - - /* write ivs into transform state buffer. */ - memcpy(ctx->info.state + ctx->keylen, cctx->iv, aes_block_size); - - if (unlikely(fragmented)) { - /* - * increment the counter manually to cope with the hardware - * counter overflow. - */ - cctx->iv[3] = cpu_to_be32(ctr); - crypto_inc((u8 *)cctx->iv, aes_block_size); - } - - return mtk_aes_dma(cryp, aes, src, dst, datalen); -} - -static int mtk_aes_ctr_start(struct mtk_cryp *cryp, struct mtk_aes_rec *aes) -{ - struct mtk_aes_ctr_ctx *cctx = mtk_aes_ctr_ctx_cast(aes->ctx); - struct skcipher_request *req = skcipher_request_cast(aes->areq); - struct mtk_aes_reqctx *rctx = skcipher_request_ctx(req); - - mtk_aes_set_mode(aes, rctx); - - memcpy(cctx->iv, req->iv, aes_block_size); - cctx->offset = 0; - aes->total = 0; - aes->resume = mtk_aes_ctr_transfer; - - return mtk_aes_ctr_transfer(cryp, aes); -} - -/* check and set the aes key to transform state buffer */ -static int mtk_aes_setkey(struct crypto_skcipher *tfm, - const u8 *key, u32 keylen) -{ - struct mtk_aes_base_ctx *ctx = crypto_skcipher_ctx(tfm); - - switch (keylen) { - case aes_keysize_128: - ctx->keymode = aes_tfm_128bits; - break; - case aes_keysize_192: - ctx->keymode = aes_tfm_192bits; - break; - case aes_keysize_256: - ctx->keymode = aes_tfm_256bits; - break; - - default: - return -einval; - } - - ctx->keylen = size_in_words(keylen); - memcpy(ctx->key, key, keylen); - - return 0; -} - -static int mtk_aes_crypt(struct skcipher_request *req, u64 mode) -{ - struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req); - struct mtk_aes_base_ctx *ctx = crypto_skcipher_ctx(skcipher); - struct mtk_aes_reqctx *rctx; - struct mtk_cryp *cryp; - - cryp = mtk_aes_find_dev(ctx); - if (!cryp) - return -enodev; - - rctx = skcipher_request_ctx(req); - rctx->mode = mode; - - return mtk_aes_handle_queue(cryp, !(mode & aes_flags_encrypt), - &req->base); -} - -static int mtk_aes_ecb_encrypt(struct 
skcipher_request *req)
-{
-	return mtk_aes_crypt(req, AES_FLAGS_ENCRYPT | AES_FLAGS_ECB);
-}
-
-static int mtk_aes_ecb_decrypt(struct skcipher_request *req)
-{
-	return mtk_aes_crypt(req, AES_FLAGS_ECB);
-}
-
-static int mtk_aes_cbc_encrypt(struct skcipher_request *req)
-{
-	return mtk_aes_crypt(req, AES_FLAGS_ENCRYPT | AES_FLAGS_CBC);
-}
-
-static int mtk_aes_cbc_decrypt(struct skcipher_request *req)
-{
-	return mtk_aes_crypt(req, AES_FLAGS_CBC);
-}
-
-static int mtk_aes_ctr_encrypt(struct skcipher_request *req)
-{
-	return mtk_aes_crypt(req, AES_FLAGS_ENCRYPT | AES_FLAGS_CTR);
-}
-
-static int mtk_aes_ctr_decrypt(struct skcipher_request *req)
-{
-	return mtk_aes_crypt(req, AES_FLAGS_CTR);
-}
-
-static int mtk_aes_ofb_encrypt(struct skcipher_request *req)
-{
-	return mtk_aes_crypt(req, AES_FLAGS_ENCRYPT | AES_FLAGS_OFB);
-}
-
-static int mtk_aes_ofb_decrypt(struct skcipher_request *req)
-{
-	return mtk_aes_crypt(req, AES_FLAGS_OFB);
-}
-
-static int mtk_aes_cfb_encrypt(struct skcipher_request *req)
-{
-	return mtk_aes_crypt(req, AES_FLAGS_ENCRYPT | AES_FLAGS_CFB128);
-}
-
-static int mtk_aes_cfb_decrypt(struct skcipher_request *req)
-{
-	return mtk_aes_crypt(req, AES_FLAGS_CFB128);
-}
-
-static int mtk_aes_init_tfm(struct crypto_skcipher *tfm)
-{
-	struct mtk_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
-
-	crypto_skcipher_set_reqsize(tfm, sizeof(struct mtk_aes_reqctx));
-	ctx->base.start = mtk_aes_start;
-	return 0;
-}
-
-static int mtk_aes_ctr_init_tfm(struct crypto_skcipher *tfm)
-{
-	struct mtk_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
-
-	crypto_skcipher_set_reqsize(tfm, sizeof(struct mtk_aes_reqctx));
-	ctx->base.start = mtk_aes_ctr_start;
-	return 0;
-}
-
-static struct skcipher_alg aes_algs[] = {
-{
-	.base.cra_name		= "cbc(aes)",
-	.base.cra_driver_name	= "cbc-aes-mtk",
-	.base.cra_priority	= 400,
-	.base.cra_flags		= CRYPTO_ALG_ASYNC,
-	.base.cra_blocksize	= AES_BLOCK_SIZE,
-	.base.cra_ctxsize	= sizeof(struct mtk_aes_ctx),
-	.base.cra_alignmask	= 0xf,
-
.base.cra_module = this_module, - - .min_keysize = aes_min_key_size, - .max_keysize = aes_max_key_size, - .setkey = mtk_aes_setkey, - .encrypt = mtk_aes_cbc_encrypt, - .decrypt = mtk_aes_cbc_decrypt, - .ivsize = aes_block_size, - .init = mtk_aes_init_tfm, -}, -{ - .base.cra_name = "ecb(aes)", - .base.cra_driver_name = "ecb-aes-mtk", - .base.cra_priority = 400, - .base.cra_flags = crypto_alg_async, - .base.cra_blocksize = aes_block_size, - .base.cra_ctxsize = sizeof(struct mtk_aes_ctx), - .base.cra_alignmask = 0xf, - .base.cra_module = this_module, - - .min_keysize = aes_min_key_size, - .max_keysize = aes_max_key_size, - .setkey = mtk_aes_setkey, - .encrypt = mtk_aes_ecb_encrypt, - .decrypt = mtk_aes_ecb_decrypt, - .init = mtk_aes_init_tfm, -}, -{ - .base.cra_name = "ctr(aes)", - .base.cra_driver_name = "ctr-aes-mtk", - .base.cra_priority = 400, - .base.cra_flags = crypto_alg_async, - .base.cra_blocksize = 1, - .base.cra_ctxsize = sizeof(struct mtk_aes_ctx), - .base.cra_alignmask = 0xf, - .base.cra_module = this_module, - - .min_keysize = aes_min_key_size, - .max_keysize = aes_max_key_size, - .ivsize = aes_block_size, - .setkey = mtk_aes_setkey, - .encrypt = mtk_aes_ctr_encrypt, - .decrypt = mtk_aes_ctr_decrypt, - .init = mtk_aes_ctr_init_tfm, -}, -{ - .base.cra_name = "ofb(aes)", - .base.cra_driver_name = "ofb-aes-mtk", - .base.cra_priority = 400, - .base.cra_flags = crypto_alg_async, - .base.cra_blocksize = aes_block_size, - .base.cra_ctxsize = sizeof(struct mtk_aes_ctx), - .base.cra_alignmask = 0xf, - .base.cra_module = this_module, - - .min_keysize = aes_min_key_size, - .max_keysize = aes_max_key_size, - .ivsize = aes_block_size, - .setkey = mtk_aes_setkey, - .encrypt = mtk_aes_ofb_encrypt, - .decrypt = mtk_aes_ofb_decrypt, -}, -{ - .base.cra_name = "cfb(aes)", - .base.cra_driver_name = "cfb-aes-mtk", - .base.cra_priority = 400, - .base.cra_flags = crypto_alg_async, - .base.cra_blocksize = 1, - .base.cra_ctxsize = sizeof(struct mtk_aes_ctx), - 
.base.cra_alignmask = 0xf, - .base.cra_module = this_module, - - .min_keysize = aes_min_key_size, - .max_keysize = aes_max_key_size, - .ivsize = aes_block_size, - .setkey = mtk_aes_setkey, - .encrypt = mtk_aes_cfb_encrypt, - .decrypt = mtk_aes_cfb_decrypt, -}, -}; - -static inline struct mtk_aes_gcm_ctx * -mtk_aes_gcm_ctx_cast(struct mtk_aes_base_ctx *ctx) -{ - return container_of(ctx, struct mtk_aes_gcm_ctx, base); -} - -/* - * engine will verify and compare tag automatically, so we just need - * to check returned status which stored in the result descriptor. - */ -static int mtk_aes_gcm_tag_verify(struct mtk_cryp *cryp, - struct mtk_aes_rec *aes) -{ - __le32 status = cryp->ring[aes->id]->res_prev->ct; - - return mtk_aes_complete(cryp, aes, (status & aes_auth_tag_err) ? - -ebadmsg : 0); -} - -/* initialize transform information of gcm mode */ -static void mtk_aes_gcm_info_init(struct mtk_cryp *cryp, - struct mtk_aes_rec *aes, - size_t len) -{ - struct aead_request *req = aead_request_cast(aes->areq); - struct mtk_aes_base_ctx *ctx = aes->ctx; - struct mtk_aes_gcm_ctx *gctx = mtk_aes_gcm_ctx_cast(ctx); - struct mtk_aes_info *info = &ctx->info; - u32 ivsize = crypto_aead_ivsize(crypto_aead_reqtfm(req)); - u32 cnt = 0; - - ctx->ct_hdr = aes_ct_ctrl_hdr | cpu_to_le32(len); - - info->cmd[cnt++] = aes_gcm_cmd0 | cpu_to_le32(req->assoclen); - info->cmd[cnt++] = aes_gcm_cmd1 | cpu_to_le32(req->assoclen); - info->cmd[cnt++] = aes_gcm_cmd2; - info->cmd[cnt++] = aes_gcm_cmd3 | cpu_to_le32(gctx->textlen); - - if (aes->flags & aes_flags_encrypt) { - info->cmd[cnt++] = aes_gcm_cmd4 | cpu_to_le32(gctx->authsize); - info->tfm[0] = aes_tfm_gcm_out; - } else { - info->cmd[cnt++] = aes_gcm_cmd5 | cpu_to_le32(gctx->authsize); - info->cmd[cnt++] = aes_gcm_cmd6 | cpu_to_le32(gctx->authsize); - info->tfm[0] = aes_tfm_gcm_in; - } - ctx->ct_size = cnt; - - info->tfm[0] |= aes_tfm_ghash_digest | aes_tfm_ghash | aes_tfm_size( - ctx->keylen + size_in_words(aes_block_size + ivsize)) | - 
ctx->keymode; - info->tfm[1] = aes_tfm_ctr_init | aes_tfm_iv_ctr_mode | aes_tfm_3iv | - aes_tfm_enc_hash; - - memcpy(info->state + ctx->keylen + size_in_words(aes_block_size), - req->iv, ivsize); -} - -static int mtk_aes_gcm_dma(struct mtk_cryp *cryp, struct mtk_aes_rec *aes, - struct scatterlist *src, struct scatterlist *dst, - size_t len) -{ - bool src_aligned, dst_aligned; - - aes->src.sg = src; - aes->dst.sg = dst; - aes->real_dst = dst; - - src_aligned = mtk_aes_check_aligned(src, len, &aes->src); - if (src == dst) - dst_aligned = src_aligned; - else - dst_aligned = mtk_aes_check_aligned(dst, len, &aes->dst); - - if (!src_aligned || !dst_aligned) { - if (aes->total > aes_buf_size) - return mtk_aes_complete(cryp, aes, -enomem); - - if (!src_aligned) { - sg_copy_to_buffer(src, sg_nents(src), aes->buf, len); - aes->src.sg = &aes->aligned_sg; - aes->src.nents = 1; - aes->src.remainder = 0; - } - - if (!dst_aligned) { - aes->dst.sg = &aes->aligned_sg; - aes->dst.nents = 1; - aes->dst.remainder = 0; - } - - sg_init_table(&aes->aligned_sg, 1); - sg_set_buf(&aes->aligned_sg, aes->buf, aes->total); - } - - mtk_aes_gcm_info_init(cryp, aes, len); - - return mtk_aes_map(cryp, aes); -} - -/* todo: gmac */ -static int mtk_aes_gcm_start(struct mtk_cryp *cryp, struct mtk_aes_rec *aes) -{ - struct mtk_aes_gcm_ctx *gctx = mtk_aes_gcm_ctx_cast(aes->ctx); - struct aead_request *req = aead_request_cast(aes->areq); - struct mtk_aes_reqctx *rctx = aead_request_ctx(req); - u32 len = req->assoclen + req->cryptlen; - - mtk_aes_set_mode(aes, rctx); - - if (aes->flags & aes_flags_encrypt) { - u32 tag[4]; - - aes->resume = mtk_aes_transfer_complete; - /* compute total process length. 
*/ - aes->total = len + gctx->authsize; - /* hardware will append authenticated tag to output buffer */ - scatterwalk_map_and_copy(tag, req->dst, len, gctx->authsize, 1); - } else { - aes->resume = mtk_aes_gcm_tag_verify; - aes->total = len; - } - - return mtk_aes_gcm_dma(cryp, aes, req->src, req->dst, len); -} - -static int mtk_aes_gcm_crypt(struct aead_request *req, u64 mode) -{ - struct mtk_aes_base_ctx *ctx = crypto_aead_ctx(crypto_aead_reqtfm(req)); - struct mtk_aes_gcm_ctx *gctx = mtk_aes_gcm_ctx_cast(ctx); - struct mtk_aes_reqctx *rctx = aead_request_ctx(req); - struct mtk_cryp *cryp; - bool enc = !!(mode & aes_flags_encrypt); - - cryp = mtk_aes_find_dev(ctx); - if (!cryp) - return -enodev; - - /* compute text length. */ - gctx->textlen = req->cryptlen - (enc ? 0 : gctx->authsize); - - /* empty messages are not supported yet */ - if (!gctx->textlen && !req->assoclen) - return -einval; - - rctx->mode = aes_flags_gcm | mode; - - return mtk_aes_handle_queue(cryp, enc, &req->base); -} - -/* - * because of the hardware limitation, we need to pre-calculate key(h) - * for the ghash operation. the result of the encryption operation - * need to be stored in the transform state buffer. 
- */ -static int mtk_aes_gcm_setkey(struct crypto_aead *aead, const u8 *key, - u32 keylen) -{ - struct mtk_aes_base_ctx *ctx = crypto_aead_ctx(aead); - union { - u32 x32[size_in_words(aes_block_size)]; - u8 x8[aes_block_size]; - } hash = {}; - struct crypto_aes_ctx aes_ctx; - int err; - int i; - - switch (keylen) { - case aes_keysize_128: - ctx->keymode = aes_tfm_128bits; - break; - case aes_keysize_192: - ctx->keymode = aes_tfm_192bits; - break; - case aes_keysize_256: - ctx->keymode = aes_tfm_256bits; - break; - - default: - return -einval; - } - - ctx->keylen = size_in_words(keylen); - - err = aes_expandkey(&aes_ctx, key, keylen); - if (err) - return err; - - aes_encrypt(&aes_ctx, hash.x8, hash.x8); - memzero_explicit(&aes_ctx, sizeof(aes_ctx)); - - memcpy(ctx->key, key, keylen); - - /* why do we need to do this? */ - for (i = 0; i < size_in_words(aes_block_size); i++) - hash.x32[i] = swab32(hash.x32[i]); - - memcpy(ctx->key + ctx->keylen, &hash, aes_block_size); - - return 0; -} - -static int mtk_aes_gcm_setauthsize(struct crypto_aead *aead, - u32 authsize) -{ - struct mtk_aes_base_ctx *ctx = crypto_aead_ctx(aead); - struct mtk_aes_gcm_ctx *gctx = mtk_aes_gcm_ctx_cast(ctx); - - /* same as crypto_gcm_authsize() from crypto/gcm.c */ - switch (authsize) { - case 8: - case 12: - case 16: - break; - default: - return -einval; - } - - gctx->authsize = authsize; - return 0; -} - -static int mtk_aes_gcm_encrypt(struct aead_request *req) -{ - return mtk_aes_gcm_crypt(req, aes_flags_encrypt); -} - -static int mtk_aes_gcm_decrypt(struct aead_request *req) -{ - return mtk_aes_gcm_crypt(req, 0); -} - -static int mtk_aes_gcm_init(struct crypto_aead *aead) -{ - struct mtk_aes_gcm_ctx *ctx = crypto_aead_ctx(aead); - - crypto_aead_set_reqsize(aead, sizeof(struct mtk_aes_reqctx)); - ctx->base.start = mtk_aes_gcm_start; - return 0; -} - -static struct aead_alg aes_gcm_alg = { - .setkey = mtk_aes_gcm_setkey, - .setauthsize = mtk_aes_gcm_setauthsize, - .encrypt = 
mtk_aes_gcm_encrypt,
-	.decrypt	= mtk_aes_gcm_decrypt,
-	.init		= mtk_aes_gcm_init,
-	.ivsize		= GCM_AES_IV_SIZE,
-	.maxauthsize	= AES_BLOCK_SIZE,
-
-	.base = {
-		.cra_name		= "gcm(aes)",
-		.cra_driver_name	= "gcm-aes-mtk",
-		.cra_priority		= 400,
-		.cra_flags		= CRYPTO_ALG_ASYNC,
-		.cra_blocksize		= 1,
-		.cra_ctxsize		= sizeof(struct mtk_aes_gcm_ctx),
-		.cra_alignmask		= 0xf,
-		.cra_module		= THIS_MODULE,
-	},
-};
-
-static void mtk_aes_queue_task(unsigned long data)
-{
-	struct mtk_aes_rec *aes = (struct mtk_aes_rec *)data;
-
-	mtk_aes_handle_queue(aes->cryp, aes->id, NULL);
-}
-
-static void mtk_aes_done_task(unsigned long data)
-{
-	struct mtk_aes_rec *aes = (struct mtk_aes_rec *)data;
-	struct mtk_cryp *cryp = aes->cryp;
-
-	mtk_aes_unmap(cryp, aes);
-	aes->resume(cryp, aes);
-}
-
-static irqreturn_t mtk_aes_irq(int irq, void *dev_id)
-{
-	struct mtk_aes_rec *aes = (struct mtk_aes_rec *)dev_id;
-	struct mtk_cryp *cryp = aes->cryp;
-	u32 val = mtk_aes_read(cryp, RDR_STAT(aes->id));
-
-	mtk_aes_write(cryp, RDR_STAT(aes->id), val);
-
-	if (likely(AES_FLAGS_BUSY & aes->flags)) {
-		mtk_aes_write(cryp, RDR_PROC_COUNT(aes->id), MTK_CNT_RST);
-		mtk_aes_write(cryp, RDR_THRESH(aes->id),
-			      MTK_RDR_PROC_THRESH | MTK_RDR_PROC_MODE);
-
-		tasklet_schedule(&aes->done_task);
-	} else {
-		dev_warn(cryp->dev, "AES interrupt when no active requests.\n");
-	}
-	return IRQ_HANDLED;
-}
-
-/*
- * The purpose of creating encryption and decryption records is
- * to process outbound/inbound data in parallel, it can improve
- * performance in most use cases, such as IPSec VPN, especially
- * under heavy network traffic.
- */
-static int mtk_aes_record_init(struct mtk_cryp *cryp)
-{
-	struct mtk_aes_rec **aes = cryp->aes;
-	int i, err = -ENOMEM;
-
-	for (i = 0; i < MTK_REC_NUM; i++) {
-		aes[i] = kzalloc(sizeof(**aes), GFP_KERNEL);
-		if (!aes[i])
-			goto err_cleanup;
-
-		aes[i]->buf = (void *)__get_free_pages(GFP_KERNEL,
-						       AES_BUF_ORDER);
-		if (!aes[i]->buf)
-			goto err_cleanup;
-
-		aes[i]->cryp = cryp;
-
-		spin_lock_init(&aes[i]->lock);
-		crypto_init_queue(&aes[i]->queue, AES_QUEUE_SIZE);
-
-		tasklet_init(&aes[i]->queue_task, mtk_aes_queue_task,
-			     (unsigned long)aes[i]);
-		tasklet_init(&aes[i]->done_task, mtk_aes_done_task,
-			     (unsigned long)aes[i]);
-	}
-
-	/* Link to ring0 and ring1 respectively */
-	aes[0]->id = MTK_RING0;
-	aes[1]->id = MTK_RING1;
-
-	return 0;
-
-err_cleanup:
-	for (; i--; ) {
-		free_page((unsigned long)aes[i]->buf);
-		kfree(aes[i]);
-	}
-
-	return err;
-}
-
-static void mtk_aes_record_free(struct mtk_cryp *cryp)
-{
-	int i;
-
-	for (i = 0; i < MTK_REC_NUM; i++) {
-		tasklet_kill(&cryp->aes[i]->done_task);
-		tasklet_kill(&cryp->aes[i]->queue_task);
-
-		free_page((unsigned long)cryp->aes[i]->buf);
-		kfree(cryp->aes[i]);
-	}
-}
-
-static void mtk_aes_unregister_algs(void)
-{
-	int i;
-
-	crypto_unregister_aead(&aes_gcm_alg);
-
-	for (i = 0; i < ARRAY_SIZE(aes_algs); i++)
-		crypto_unregister_skcipher(&aes_algs[i]);
-}
-
-static int mtk_aes_register_algs(void)
-{
-	int err, i;
-
-	for (i = 0; i < ARRAY_SIZE(aes_algs); i++) {
-		err = crypto_register_skcipher(&aes_algs[i]);
-		if (err)
-			goto err_aes_algs;
-	}
-
-	err = crypto_register_aead(&aes_gcm_alg);
-	if (err)
-		goto err_aes_algs;
-
-	return 0;
-
-err_aes_algs:
-	for (; i--; )
-		crypto_unregister_skcipher(&aes_algs[i]);
-
-	return err;
-}
-
-int mtk_cipher_alg_register(struct mtk_cryp *cryp)
-{
-	int ret;
-
-	INIT_LIST_HEAD(&cryp->aes_list);
-
-	/* Initialize two cipher records */
-	ret = mtk_aes_record_init(cryp);
-	if (ret)
-		goto err_record;
-
-	ret = devm_request_irq(cryp->dev, cryp->irq[MTK_RING0], mtk_aes_irq,
-			       0, "mtk-aes", cryp->aes[0]);
-	if (ret) {
-		dev_err(cryp->dev, "Unable to request aes irq.\n");
-		goto err_res;
-	}
-
-	ret = devm_request_irq(cryp->dev, cryp->irq[MTK_RING1], mtk_aes_irq,
-			       0, "mtk-aes", cryp->aes[1]);
-	if (ret) {
-		dev_err(cryp->dev, "Unable to request aes irq.\n");
-		goto err_res;
-	}
-
-	/* Enable ring0 and ring1 interrupt */
-	mtk_aes_write(cryp, AIC_ENABLE_SET(MTK_RING0), MTK_IRQ_RDR0);
-	mtk_aes_write(cryp, AIC_ENABLE_SET(MTK_RING1), MTK_IRQ_RDR1);
-
-	spin_lock(&mtk_aes.lock);
-	list_add_tail(&cryp->aes_list, &mtk_aes.dev_list);
-	spin_unlock(&mtk_aes.lock);
-
-	ret = mtk_aes_register_algs();
-	if (ret)
-		goto err_algs;
-
-	return 0;
-
-err_algs:
-	spin_lock(&mtk_aes.lock);
-	list_del(&cryp->aes_list);
-	spin_unlock(&mtk_aes.lock);
-err_res:
-	mtk_aes_record_free(cryp);
-err_record:
-
-	dev_err(cryp->dev, "mtk-aes initialization failed.\n");
-	return ret;
-}
-
-void mtk_cipher_alg_release(struct mtk_cryp *cryp)
-{
-	spin_lock(&mtk_aes.lock);
-	list_del(&cryp->aes_list);
-	spin_unlock(&mtk_aes.lock);
-
-	mtk_aes_unregister_algs();
-	mtk_aes_record_free(cryp);
-}
diff --git a/drivers/crypto/mediatek/mtk-platform.c b/drivers/crypto/mediatek/mtk-platform.c
--- a/drivers/crypto/mediatek/mtk-platform.c
+++ /dev/null
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Driver for EIP97 cryptographic accelerator.
- * - * copyright (c) 2016 ryder lee <ryder.lee@mediatek.com> - */ - -#include <linux/clk.h> -#include <linux/init.h> -#include <linux/kernel.h> -#include <linux/module.h> -#include <linux/mod_devicetable.h> -#include <linux/platform_device.h> -#include <linux/pm_runtime.h> -#include "mtk-platform.h" - -#define mtk_burst_size_msk genmask(7, 4) -#define mtk_burst_size(x) ((x) << 4) -#define mtk_desc_size(x) ((x) << 0) -#define mtk_desc_offset(x) ((x) << 16) -#define mtk_desc_fetch_size(x) ((x) << 0) -#define mtk_desc_fetch_thresh(x) ((x) << 16) -#define mtk_desc_ovl_irq_en bit(25) -#define mtk_desc_atp_present bit(30) - -#define mtk_dfse_idle genmask(3, 0) -#define mtk_dfse_thr_ctrl_en bit(30) -#define mtk_dfse_thr_ctrl_reset bit(31) -#define mtk_dfse_ring_id(x) (((x) >> 12) & genmask(3, 0)) -#define mtk_dfse_min_data(x) ((x) << 0) -#define mtk_dfse_max_data(x) ((x) << 8) -#define mtk_dfe_min_ctrl(x) ((x) << 16) -#define mtk_dfe_max_ctrl(x) ((x) << 24) - -#define mtk_in_buf_min_thresh(x) ((x) << 8) -#define mtk_in_buf_max_thresh(x) ((x) << 12) -#define mtk_out_buf_min_thresh(x) ((x) << 0) -#define mtk_out_buf_max_thresh(x) ((x) << 4) -#define mtk_in_tbuf_size(x) (((x) >> 4) & genmask(3, 0)) -#define mtk_in_dbuf_size(x) (((x) >> 8) & genmask(3, 0)) -#define mtk_out_dbuf_size(x) (((x) >> 16) & genmask(3, 0)) -#define mtk_cmd_fifo_size(x) (((x) >> 8) & genmask(3, 0)) -#define mtk_res_fifo_size(x) (((x) >> 12) & genmask(3, 0)) - -#define mtk_pe_tk_loc_avl bit(2) -#define mtk_pe_proc_held bit(14) -#define mtk_pe_tk_timeout_en bit(22) -#define mtk_pe_input_dma_err bit(0) -#define mtk_pe_output_dma_err bit(1) -#define mtk_pe_pkt_porc_err bit(2) -#define mtk_pe_pkt_timeout bit(3) -#define mtk_pe_fatal_err bit(14) -#define mtk_pe_input_dma_err_en bit(16) -#define mtk_pe_output_dma_err_en bit(17) -#define mtk_pe_pkt_porc_err_en bit(18) -#define mtk_pe_pkt_timeout_en bit(19) -#define mtk_pe_fatal_err_en bit(30) -#define mtk_pe_int_out_en bit(31) - -#define mtk_hia_signature 
((u16)0x35ca) -#define mtk_hia_data_width(x) (((x) >> 25) & genmask(1, 0)) -#define mtk_hia_dma_length(x) (((x) >> 20) & genmask(4, 0)) -#define mtk_cdr_stat_clr genmask(4, 0) -#define mtk_rdr_stat_clr genmask(7, 0) - -#define mtk_aic_int_msk genmask(5, 0) -#define mtk_aic_ver_msk (genmask(15, 0) | genmask(27, 20)) -#define mtk_aic_ver11 0x011036c9 -#define mtk_aic_ver12 0x012036c9 -#define mtk_aic_g_clr genmask(30, 20) - -/** - * eip97 is an integrated security subsystem to accelerate cryptographic - * functions and protocols to offload the host processor. - * some important hardware modules are briefly introduced below: - * - * host interface adapter(hia) - the main interface between the host - * system and the hardware subsystem. it is responsible for attaching - * processing engine to the specific host bus interface and provides a - * standardized software view for off loading tasks to the engine. - * - * command descriptor ring manager(cdr manager) - keeps track of how many - * cd the host has prepared in the cdr. it monitors the fill level of its - * cd-fifo and if there's sufficient space for the next block of descriptors, - * then it fires off a dma request to fetch a block of cds. - * - * data fetch engine(dfe) - it is responsible for parsing the cd and - * setting up the required control and packet data dma transfers from - * system memory to the processing engine. - * - * result descriptor ring manager(rdr manager) - same as cdr manager, - * but target is result descriptors, moreover, it also handles the rd - * updates under control of the dse. for each packet data segment - * processed, the dse triggers the rdr manager to write the updated rd. - * if triggered to update, the rdr manager sets up a dma operation to - * copy the rd from the dse to the correct location in the rdr. 
- * - * data store engine(dse) - it is responsible for parsing the prepared rd - * and setting up the required control and packet data dma transfers from - * the processing engine to system memory. - * - * advanced interrupt controllers(aics) - receive interrupt request signals - * from various sources and combine them into one interrupt output. - * the aics are used by: - * - one for the hia global and processing engine interrupts. - * - the others for the descriptor ring interrupts. - */ - -/* cryptographic engine capabilities */ -struct mtk_sys_cap { - /* host interface adapter */ - u32 hia_ver; - u32 hia_opt; - /* packet engine */ - u32 pkt_eng_opt; - /* global hardware */ - u32 hw_opt; -}; - -static void mtk_desc_ring_link(struct mtk_cryp *cryp, u32 mask) -{ - /* assign rings to dfe/dse thread and enable it */ - writel(mtk_dfse_thr_ctrl_en | mask, cryp->base + dfe_thr_ctrl); - writel(mtk_dfse_thr_ctrl_en | mask, cryp->base + dse_thr_ctrl); -} - -static void mtk_dfe_dse_buf_setup(struct mtk_cryp *cryp, - struct mtk_sys_cap *cap) -{ - u32 width = mtk_hia_data_width(cap->hia_opt) + 2; - u32 len = mtk_hia_dma_length(cap->hia_opt) - 1; - u32 ipbuf = min((u32)mtk_in_dbuf_size(cap->hw_opt) + width, len); - u32 opbuf = min((u32)mtk_out_dbuf_size(cap->hw_opt) + width, len); - u32 itbuf = min((u32)mtk_in_tbuf_size(cap->hw_opt) + width, len); - - writel(mtk_dfse_min_data(ipbuf - 1) | - mtk_dfse_max_data(ipbuf) | - mtk_dfe_min_ctrl(itbuf - 1) | - mtk_dfe_max_ctrl(itbuf), - cryp->base + dfe_cfg); - - writel(mtk_dfse_min_data(opbuf - 1) | - mtk_dfse_max_data(opbuf), - cryp->base + dse_cfg); - - writel(mtk_in_buf_min_thresh(ipbuf - 1) | - mtk_in_buf_max_thresh(ipbuf), - cryp->base + pe_in_dbuf_thresh); - - writel(mtk_in_buf_min_thresh(itbuf - 1) | - mtk_in_buf_max_thresh(itbuf), - cryp->base + pe_in_tbuf_thresh); - - writel(mtk_out_buf_min_thresh(opbuf - 1) | - mtk_out_buf_max_thresh(opbuf), - cryp->base + pe_out_dbuf_thresh); - - writel(0, cryp->base + pe_out_tbuf_thresh); 
-	writel(0, cryp->base + PE_OUT_BUF_CTRL);
-}
-
-static int mtk_dfe_dse_state_check(struct mtk_cryp *cryp)
-{
-	int ret = -EINVAL;
-	u32 val;
-
-	/* Check for completion of all DMA transfers */
-	val = readl(cryp->base + DFE_THR_STAT);
-	if (MTK_DFSE_RING_ID(val) == MTK_DFSE_IDLE) {
-		val = readl(cryp->base + DSE_THR_STAT);
-		if (MTK_DFSE_RING_ID(val) == MTK_DFSE_IDLE)
-			ret = 0;
-	}
-
-	if (!ret) {
-		/* Take DFE/DSE thread out of reset */
-		writel(0, cryp->base + DFE_THR_CTRL);
-		writel(0, cryp->base + DSE_THR_CTRL);
-	} else {
-		return -EBUSY;
-	}
-
-	return 0;
-}
-
-static int mtk_dfe_dse_reset(struct mtk_cryp *cryp)
-{
-	/* Reset DSE/DFE and correct system priorities for all rings. */
-	writel(MTK_DFSE_THR_CTRL_RESET, cryp->base + DFE_THR_CTRL);
-	writel(0, cryp->base + DFE_PRIO_0);
-	writel(0, cryp->base + DFE_PRIO_1);
-	writel(0, cryp->base + DFE_PRIO_2);
-	writel(0, cryp->base + DFE_PRIO_3);
-
-	writel(MTK_DFSE_THR_CTRL_RESET, cryp->base + DSE_THR_CTRL);
-	writel(0, cryp->base + DSE_PRIO_0);
-	writel(0, cryp->base + DSE_PRIO_1);
-	writel(0, cryp->base + DSE_PRIO_2);
-	writel(0, cryp->base + DSE_PRIO_3);
-
-	return mtk_dfe_dse_state_check(cryp);
-}
-
-static void mtk_cmd_desc_ring_setup(struct mtk_cryp *cryp,
-				    int i, struct mtk_sys_cap *cap)
-{
-	/* Full descriptor that fits FIFO minus one */
-	u32 count =
-		((1 << MTK_CMD_FIFO_SIZE(cap->hia_opt)) / MTK_DESC_SZ) - 1;
-
-	/* Temporarily disable external triggering */
-	writel(0, cryp->base + CDR_CFG(i));
-
-	/* Clear CDR count */
-	writel(MTK_CNT_RST, cryp->base + CDR_PREP_COUNT(i));
-	writel(MTK_CNT_RST, cryp->base + CDR_PROC_COUNT(i));
-
-	writel(0, cryp->base + CDR_PREP_PNTR(i));
-	writel(0, cryp->base + CDR_PROC_PNTR(i));
-	writel(0, cryp->base + CDR_DMA_CFG(i));
-
-	/* Configure CDR host address space */
-	writel(0, cryp->base + CDR_BASE_ADDR_HI(i));
-	writel(cryp->ring[i]->cmd_dma, cryp->base + CDR_BASE_ADDR_LO(i));
-
-	writel(MTK_DESC_RING_SZ, cryp->base + CDR_RING_SIZE(i));
-
-	/* Clear and disable
all cdr interrupts */ - writel(mtk_cdr_stat_clr, cryp->base + cdr_stat(i)); - - /* - * set command descriptor offset and enable additional - * token present in descriptor. - */ - writel(mtk_desc_size(mtk_desc_sz) | - mtk_desc_offset(mtk_desc_off) | - mtk_desc_atp_present, - cryp->base + cdr_desc_size(i)); - - writel(mtk_desc_fetch_size(count * mtk_desc_off) | - mtk_desc_fetch_thresh(count * mtk_desc_sz), - cryp->base + cdr_cfg(i)); -} - -static void mtk_res_desc_ring_setup(struct mtk_cryp *cryp, - int i, struct mtk_sys_cap *cap) -{ - u32 rndup = 2; - u32 count = ((1 << mtk_res_fifo_size(cap->hia_opt)) / rndup) - 1; - - /* temporarily disable external triggering */ - writel(0, cryp->base + rdr_cfg(i)); - - /* clear rdr count */ - writel(mtk_cnt_rst, cryp->base + rdr_prep_count(i)); - writel(mtk_cnt_rst, cryp->base + rdr_proc_count(i)); - - writel(0, cryp->base + rdr_prep_pntr(i)); - writel(0, cryp->base + rdr_proc_pntr(i)); - writel(0, cryp->base + rdr_dma_cfg(i)); - - /* configure rdr host address space */ - writel(0, cryp->base + rdr_base_addr_hi(i)); - writel(cryp->ring[i]->res_dma, cryp->base + rdr_base_addr_lo(i)); - - writel(mtk_desc_ring_sz, cryp->base + rdr_ring_size(i)); - writel(mtk_rdr_stat_clr, cryp->base + rdr_stat(i)); - - /* - * rdr manager generates update interrupts on a per-completed-packet, - * and the rd_proc_thresh_irq interrupt is fired when proc_pkt_count - * for the rdr exceeds the number of packets. - */ - writel(mtk_rdr_proc_thresh | mtk_rdr_proc_mode, - cryp->base + rdr_thresh(i)); - - /* - * configure a threshold and time-out value for the processed - * result descriptors (or complete packets) that are written to - * the rdr. - */ - writel(mtk_desc_size(mtk_desc_sz) | mtk_desc_offset(mtk_desc_off), - cryp->base + rdr_desc_size(i)); - - /* - * configure hia fetch size and fetch threshold that are used to - * fetch blocks of multiple descriptors. 
- */ - writel(mtk_desc_fetch_size(count * mtk_desc_off) | - mtk_desc_fetch_thresh(count * rndup) | - mtk_desc_ovl_irq_en, - cryp->base + rdr_cfg(i)); -} - -static int mtk_packet_engine_setup(struct mtk_cryp *cryp) -{ - struct mtk_sys_cap cap; - int i, err; - u32 val; - - cap.hia_ver = readl(cryp->base + hia_version); - cap.hia_opt = readl(cryp->base + hia_options); - cap.hw_opt = readl(cryp->base + eip97_options); - - if (!(((u16)cap.hia_ver) == mtk_hia_signature)) - return -einval; - - /* configure endianness conversion method for master (dma) interface */ - writel(0, cryp->base + eip97_mst_ctrl); - - /* set hia burst size */ - val = readl(cryp->base + hia_mst_ctrl); - val &= ~mtk_burst_size_msk; - val |= mtk_burst_size(5); - writel(val, cryp->base + hia_mst_ctrl); - - err = mtk_dfe_dse_reset(cryp); - if (err) { - dev_err(cryp->dev, "failed to reset dfe and dse. "); - return err; - } - - mtk_dfe_dse_buf_setup(cryp, &cap); - - /* enable the 4 rings for the packet engines. */ - mtk_desc_ring_link(cryp, 0xf); - - for (i = 0; i < mtk_ring_max; i++) { - mtk_cmd_desc_ring_setup(cryp, i, &cap); - mtk_res_desc_ring_setup(cryp, i, &cap); - } - - writel(mtk_pe_tk_loc_avl | mtk_pe_proc_held | mtk_pe_tk_timeout_en, - cryp->base + pe_token_ctrl_stat); - - /* clear all pending interrupts */ - writel(mtk_aic_g_clr, cryp->base + aic_g_ack); - writel(mtk_pe_input_dma_err | mtk_pe_output_dma_err | - mtk_pe_pkt_porc_err | mtk_pe_pkt_timeout | - mtk_pe_fatal_err | mtk_pe_input_dma_err_en | - mtk_pe_output_dma_err_en | mtk_pe_pkt_porc_err_en | - mtk_pe_pkt_timeout_en | mtk_pe_fatal_err_en | - mtk_pe_int_out_en, - cryp->base + pe_interrupt_ctrl_stat); - - return 0; -} - -static int mtk_aic_cap_check(struct mtk_cryp *cryp, int hw) -{ - u32 val; - - if (hw == mtk_ring_max) - val = readl(cryp->base + aic_g_version); - else - val = readl(cryp->base + aic_version(hw)); - - val &= mtk_aic_ver_msk; - if (val != mtk_aic_ver11 && val != mtk_aic_ver12) - return -enxio; - - if (hw == 
mtk_ring_max) - val = readl(cryp->base + aic_g_options); - else - val = readl(cryp->base + aic_options(hw)); - - val &= mtk_aic_int_msk; - if (!val || val > 32) - return -enxio; - - return 0; -} - -static int mtk_aic_init(struct mtk_cryp *cryp, int hw) -{ - int err; - - err = mtk_aic_cap_check(cryp, hw); - if (err) - return err; - - /* disable all interrupts and set initial configuration */ - if (hw == mtk_ring_max) { - writel(0, cryp->base + aic_g_enable_ctrl); - writel(0, cryp->base + aic_g_pol_ctrl); - writel(0, cryp->base + aic_g_type_ctrl); - writel(0, cryp->base + aic_g_enable_set); - } else { - writel(0, cryp->base + aic_enable_ctrl(hw)); - writel(0, cryp->base + aic_pol_ctrl(hw)); - writel(0, cryp->base + aic_type_ctrl(hw)); - writel(0, cryp->base + aic_enable_set(hw)); - } - - return 0; -} - -static int mtk_accelerator_init(struct mtk_cryp *cryp) -{ - int i, err; - - /* initialize advanced interrupt controller(aic) */ - for (i = 0; i < mtk_irq_num; i++) { - err = mtk_aic_init(cryp, i); - if (err) { - dev_err(cryp->dev, "failed to initialize aic. "); - return err; - } - } - - /* initialize packet engine */ - err = mtk_packet_engine_setup(cryp); - if (err) { - dev_err(cryp->dev, "failed to configure packet engine. 
"); - return err; - } - - return 0; -} - -static void mtk_desc_dma_free(struct mtk_cryp *cryp) -{ - int i; - - for (i = 0; i < mtk_ring_max; i++) { - dma_free_coherent(cryp->dev, mtk_desc_ring_sz, - cryp->ring[i]->res_base, - cryp->ring[i]->res_dma); - dma_free_coherent(cryp->dev, mtk_desc_ring_sz, - cryp->ring[i]->cmd_base, - cryp->ring[i]->cmd_dma); - kfree(cryp->ring[i]); - } -} - -static int mtk_desc_ring_alloc(struct mtk_cryp *cryp) -{ - struct mtk_ring **ring = cryp->ring; - int i; - - for (i = 0; i < mtk_ring_max; i++) { - ring[i] = kzalloc(sizeof(**ring), gfp_kernel); - if (!ring[i]) - goto err_cleanup; - - ring[i]->cmd_base = dma_alloc_coherent(cryp->dev, - mtk_desc_ring_sz, - &ring[i]->cmd_dma, - gfp_kernel); - if (!ring[i]->cmd_base) - goto err_cleanup; - - ring[i]->res_base = dma_alloc_coherent(cryp->dev, - mtk_desc_ring_sz, - &ring[i]->res_dma, - gfp_kernel); - if (!ring[i]->res_base) - goto err_cleanup; - - ring[i]->cmd_next = ring[i]->cmd_base; - ring[i]->res_next = ring[i]->res_base; - } - return 0; - -err_cleanup: - do { - dma_free_coherent(cryp->dev, mtk_desc_ring_sz, - ring[i]->res_base, ring[i]->res_dma); - dma_free_coherent(cryp->dev, mtk_desc_ring_sz, - ring[i]->cmd_base, ring[i]->cmd_dma); - kfree(ring[i]); - } while (i--); - return -enomem; -} - -static int mtk_crypto_probe(struct platform_device *pdev) -{ - struct mtk_cryp *cryp; - int i, err; - - cryp = devm_kzalloc(&pdev->dev, sizeof(*cryp), gfp_kernel); - if (!cryp) - return -enomem; - - cryp->base = devm_platform_ioremap_resource(pdev, 0); - if (is_err(cryp->base)) - return ptr_err(cryp->base); - - for (i = 0; i < mtk_irq_num; i++) { - cryp->irq[i] = platform_get_irq(pdev, i); - if (cryp->irq[i] < 0) - return cryp->irq[i]; - } - - cryp->clk_cryp = devm_clk_get(&pdev->dev, "cryp"); - if (is_err(cryp->clk_cryp)) - return -eprobe_defer; - - cryp->dev = &pdev->dev; - pm_runtime_enable(cryp->dev); - pm_runtime_get_sync(cryp->dev); - - err = clk_prepare_enable(cryp->clk_cryp); - if (err) - 
goto err_clk_cryp; - - /* allocate four command/result descriptor rings */ - err = mtk_desc_ring_alloc(cryp); - if (err) { - dev_err(cryp->dev, "unable to allocate descriptor rings. "); - goto err_resource; - } - - /* initialize hardware modules */ - err = mtk_accelerator_init(cryp); - if (err) { - dev_err(cryp->dev, "failed to initialize cryptographic engine. "); - goto err_engine; - } - - err = mtk_cipher_alg_register(cryp); - if (err) { - dev_err(cryp->dev, "unable to register cipher algorithm. "); - goto err_cipher; - } - - err = mtk_hash_alg_register(cryp); - if (err) { - dev_err(cryp->dev, "unable to register hash algorithm. "); - goto err_hash; - } - - platform_set_drvdata(pdev, cryp); - return 0; - -err_hash: - mtk_cipher_alg_release(cryp); -err_cipher: - mtk_dfe_dse_reset(cryp); -err_engine: - mtk_desc_dma_free(cryp); -err_resource: - clk_disable_unprepare(cryp->clk_cryp); -err_clk_cryp: - pm_runtime_put_sync(cryp->dev); - pm_runtime_disable(cryp->dev); - - return err; -} - -static int mtk_crypto_remove(struct platform_device *pdev) -{ - struct mtk_cryp *cryp = platform_get_drvdata(pdev); - - mtk_hash_alg_release(cryp); - mtk_cipher_alg_release(cryp); - mtk_desc_dma_free(cryp); - - clk_disable_unprepare(cryp->clk_cryp); - - pm_runtime_put_sync(cryp->dev); - pm_runtime_disable(cryp->dev); - platform_set_drvdata(pdev, null); - - return 0; -} - -static const struct of_device_id of_crypto_id[] = { - { .compatible = "mediatek,eip97-crypto" }, - {}, -}; -module_device_table(of, of_crypto_id); - -static struct platform_driver mtk_crypto_driver = { - .probe = mtk_crypto_probe, - .remove = mtk_crypto_remove, - .driver = { - .name = "mtk-crypto", - .of_match_table = of_crypto_id, - }, -}; -module_platform_driver(mtk_crypto_driver); - -module_license("gpl"); -module_author("ryder lee <ryder.lee@mediatek.com>"); -module_description("cryptographic accelerator driver for eip97"); diff --git a/drivers/crypto/mediatek/mtk-platform.h 
b/drivers/crypto/mediatek/mtk-platform.h --- a/drivers/crypto/mediatek/mtk-platform.h +++ /dev/null -/* spdx-license-identifier: gpl-2.0-only */ -/* - * driver for eip97 cryptographic accelerator. - * - * copyright (c) 2016 ryder lee <ryder.lee@mediatek.com> - */ - -#ifndef __mtk_platform_h_ -#define __mtk_platform_h_ - -#include <crypto/algapi.h> -#include <crypto/internal/aead.h> -#include <crypto/internal/hash.h> -#include <crypto/scatterwalk.h> -#include <crypto/skcipher.h> -#include <linux/crypto.h> -#include <linux/dma-mapping.h> -#include <linux/interrupt.h> -#include <linux/scatterlist.h> -#include "mtk-regs.h" - -#define mtk_rdr_proc_thresh bit(0) -#define mtk_rdr_proc_mode bit(23) -#define mtk_cnt_rst bit(31) -#define mtk_irq_rdr0 bit(1) -#define mtk_irq_rdr1 bit(3) -#define mtk_irq_rdr2 bit(5) -#define mtk_irq_rdr3 bit(7) - -#define size_in_words(x) ((x) >> 2) - -/** - * ring 0/1 are used by aes encrypt and decrypt. - * ring 2/3 are used by sha. - */ -enum { - mtk_ring0, - mtk_ring1, - mtk_ring2, - mtk_ring3, - mtk_ring_max -}; - -#define mtk_rec_num (mtk_ring_max / 2) -#define mtk_irq_num 5 - -/** - * struct mtk_desc - dma descriptor - * @hdr: the descriptor control header - * @buf: dma address of input buffer segment - * @ct: dma address of command token that control operation flow - * @ct_hdr: the command token control header - * @tag: the user-defined field - * @tfm: dma address of transform state - * @bound: align descriptors offset boundary - * - * structure passed to the crypto engine to describe where source - * data needs to be fetched and how it needs to be processed. 
- */ -struct mtk_desc { - __le32 hdr; - __le32 buf; - __le32 ct; - __le32 ct_hdr; - __le32 tag; - __le32 tfm; - __le32 bound[2]; -}; - -#define mtk_desc_num 512 -#define mtk_desc_off size_in_words(sizeof(struct mtk_desc)) -#define mtk_desc_sz (mtk_desc_off - 2) -#define mtk_desc_ring_sz ((sizeof(struct mtk_desc) * mtk_desc_num)) -#define mtk_desc_cnt(x) ((mtk_desc_off * (x)) << 2) -#define mtk_desc_last cpu_to_le32(bit(22)) -#define mtk_desc_first cpu_to_le32(bit(23)) -#define mtk_desc_buf_len(x) cpu_to_le32(x) -#define mtk_desc_ct_len(x) cpu_to_le32((x) << 24) - -/** - * struct mtk_ring - descriptor ring - * @cmd_base: pointer to command descriptor ring base - * @cmd_next: pointer to the next command descriptor - * @cmd_dma: dma address of command descriptor ring - * @res_base: pointer to result descriptor ring base - * @res_next: pointer to the next result descriptor - * @res_prev: pointer to the previous result descriptor - * @res_dma: dma address of result descriptor ring - * - * a descriptor ring is a circular buffer that is used to manage - * one or more descriptors. there are two type of descriptor rings; - * the command descriptor ring and result descriptor ring. 
- */
-struct mtk_ring {
-        struct mtk_desc *cmd_base;
-        struct mtk_desc *cmd_next;
-        dma_addr_t cmd_dma;
-        struct mtk_desc *res_base;
-        struct mtk_desc *res_next;
-        struct mtk_desc *res_prev;
-        dma_addr_t res_dma;
-};
-
-/**
- * struct mtk_aes_dma - structure that holds sg list info
- * @sg: pointer to scatter-gather list
- * @nents: number of entries in the sg list
- * @remainder: remainder of sg list
- * @sg_len: number of entries in the mapped sg list
- */
-struct mtk_aes_dma {
-        struct scatterlist *sg;
-        int nents;
-        u32 remainder;
-        u32 sg_len;
-};
-
-struct mtk_aes_base_ctx;
-struct mtk_aes_rec;
-struct mtk_cryp;
-
-typedef int (*mtk_aes_fn)(struct mtk_cryp *cryp, struct mtk_aes_rec *aes);
-
-/**
- * struct mtk_aes_rec - AES operation record
- * @cryp: pointer to cryptographic device
- * @queue: crypto request queue
- * @areq: pointer to async request
- * @done_task: tasklet used in the AES interrupt handler
- * @queue_task: tasklet used to dequeue requests
- * @ctx: pointer to current context
- * @src: structure that holds source sg list info
- * @dst: structure that holds destination sg list info
- * @aligned_sg: scatter list used for alignment
- * @real_dst: pointer to the destination sg list
- * @resume: pointer to resume function
- * @total: request buffer length
- * @buf: pointer to page buffer
- * @id: the descriptor ring currently in use
- * @flags: describes the AES operation state
- * @lock: the async queue lock
- *
- * Structure used to record AES execution state.
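As the ring documentation in this header notes, each descriptor ring is a circular buffer. A minimal standalone sketch of the wrap-around advance, the same arithmetic `mtk_sha_ring_shift()` performs on `cmd_next`/`res_next` later in this diff (`RING_SLOTS` is my stand-in for the driver's `MTK_DESC_NUM` of 512):

```c
#include <stddef.h>

#define RING_SLOTS 512 /* the driver's MTK_DESC_NUM */

/* Advance a descriptor index by one slot, wrapping back to the start
 * of the ring once the last slot has been consumed. */
static size_t ring_advance(size_t idx)
{
        return (idx + 1) % RING_SLOTS;
}
```

The driver implements the same wrap with a pointer comparison against `cmd_base + MTK_DESC_NUM` instead of a modulo, which avoids the division on the hot path.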
- */ -struct mtk_aes_rec { - struct mtk_cryp *cryp; - struct crypto_queue queue; - struct crypto_async_request *areq; - struct tasklet_struct done_task; - struct tasklet_struct queue_task; - struct mtk_aes_base_ctx *ctx; - struct mtk_aes_dma src; - struct mtk_aes_dma dst; - - struct scatterlist aligned_sg; - struct scatterlist *real_dst; - - mtk_aes_fn resume; - - size_t total; - void *buf; - - u8 id; - unsigned long flags; - /* queue lock */ - spinlock_t lock; -}; - -/** - * struct mtk_sha_rec - sha operation record - * @cryp: pointer to cryptographic device - * @queue: crypto request queue - * @req: pointer to ahash request - * @done_task: the tasklet is use in sha interrupt - * @queue_task: the tasklet is used to dequeue request - * @id: the current use of ring - * @flags: it's describing sha operation state - * @lock: the async queue lock - * - * structure used to record sha execution state. - */ -struct mtk_sha_rec { - struct mtk_cryp *cryp; - struct crypto_queue queue; - struct ahash_request *req; - struct tasklet_struct done_task; - struct tasklet_struct queue_task; - - u8 id; - unsigned long flags; - /* queue lock */ - spinlock_t lock; -}; - -/** - * struct mtk_cryp - cryptographic device - * @base: pointer to mapped register i/o base - * @dev: pointer to device - * @clk_cryp: pointer to crypto clock - * @irq: global system and rings irq - * @ring: pointer to descriptor rings - * @aes: pointer to operation record of aes - * @sha: pointer to operation record of sha - * @aes_list: device list of aes - * @sha_list: device list of sha - * @rec: it's used to select sha record for tfm - * - * structure storing cryptographic device information. 
- */ -struct mtk_cryp { - void __iomem *base; - struct device *dev; - struct clk *clk_cryp; - int irq[mtk_irq_num]; - - struct mtk_ring *ring[mtk_ring_max]; - struct mtk_aes_rec *aes[mtk_rec_num]; - struct mtk_sha_rec *sha[mtk_rec_num]; - - struct list_head aes_list; - struct list_head sha_list; - - bool rec; -}; - -int mtk_cipher_alg_register(struct mtk_cryp *cryp); -void mtk_cipher_alg_release(struct mtk_cryp *cryp); -int mtk_hash_alg_register(struct mtk_cryp *cryp); -void mtk_hash_alg_release(struct mtk_cryp *cryp); - -#endif /* __mtk_platform_h_ */ diff --git a/drivers/crypto/mediatek/mtk-regs.h b/drivers/crypto/mediatek/mtk-regs.h --- a/drivers/crypto/mediatek/mtk-regs.h +++ /dev/null -/* spdx-license-identifier: gpl-2.0-only */ -/* - * support for mediatek cryptographic accelerator. - * - * copyright (c) 2016 mediatek inc. - * author: ryder lee <ryder.lee@mediatek.com> - */ - -#ifndef __mtk_regs_h__ -#define __mtk_regs_h__ - -/* hia, command descriptor ring manager */ -#define cdr_base_addr_lo(x) (0x0 + ((x) << 12)) -#define cdr_base_addr_hi(x) (0x4 + ((x) << 12)) -#define cdr_data_base_addr_lo(x) (0x8 + ((x) << 12)) -#define cdr_data_base_addr_hi(x) (0xc + ((x) << 12)) -#define cdr_acd_base_addr_lo(x) (0x10 + ((x) << 12)) -#define cdr_acd_base_addr_hi(x) (0x14 + ((x) << 12)) -#define cdr_ring_size(x) (0x18 + ((x) << 12)) -#define cdr_desc_size(x) (0x1c + ((x) << 12)) -#define cdr_cfg(x) (0x20 + ((x) << 12)) -#define cdr_dma_cfg(x) (0x24 + ((x) << 12)) -#define cdr_thresh(x) (0x28 + ((x) << 12)) -#define cdr_prep_count(x) (0x2c + ((x) << 12)) -#define cdr_proc_count(x) (0x30 + ((x) << 12)) -#define cdr_prep_pntr(x) (0x34 + ((x) << 12)) -#define cdr_proc_pntr(x) (0x38 + ((x) << 12)) -#define cdr_stat(x) (0x3c + ((x) << 12)) - -/* hia, result descriptor ring manager */ -#define rdr_base_addr_lo(x) (0x800 + ((x) << 12)) -#define rdr_base_addr_hi(x) (0x804 + ((x) << 12)) -#define rdr_data_base_addr_lo(x) (0x808 + ((x) << 12)) -#define rdr_data_base_addr_hi(x) 
(0x80c + ((x) << 12)) -#define rdr_acd_base_addr_lo(x) (0x810 + ((x) << 12)) -#define rdr_acd_base_addr_hi(x) (0x814 + ((x) << 12)) -#define rdr_ring_size(x) (0x818 + ((x) << 12)) -#define rdr_desc_size(x) (0x81c + ((x) << 12)) -#define rdr_cfg(x) (0x820 + ((x) << 12)) -#define rdr_dma_cfg(x) (0x824 + ((x) << 12)) -#define rdr_thresh(x) (0x828 + ((x) << 12)) -#define rdr_prep_count(x) (0x82c + ((x) << 12)) -#define rdr_proc_count(x) (0x830 + ((x) << 12)) -#define rdr_prep_pntr(x) (0x834 + ((x) << 12)) -#define rdr_proc_pntr(x) (0x838 + ((x) << 12)) -#define rdr_stat(x) (0x83c + ((x) << 12)) - -/* hia, ring aic */ -#define aic_pol_ctrl(x) (0xe000 - ((x) << 12)) -#define aic_type_ctrl(x) (0xe004 - ((x) << 12)) -#define aic_enable_ctrl(x) (0xe008 - ((x) << 12)) -#define aic_raw_stal(x) (0xe00c - ((x) << 12)) -#define aic_enable_set(x) (0xe00c - ((x) << 12)) -#define aic_enabled_stat(x) (0xe010 - ((x) << 12)) -#define aic_ack(x) (0xe010 - ((x) << 12)) -#define aic_enable_clr(x) (0xe014 - ((x) << 12)) -#define aic_options(x) (0xe018 - ((x) << 12)) -#define aic_version(x) (0xe01c - ((x) << 12)) - -/* hia, global aic */ -#define aic_g_pol_ctrl 0xf800 -#define aic_g_type_ctrl 0xf804 -#define aic_g_enable_ctrl 0xf808 -#define aic_g_raw_stat 0xf80c -#define aic_g_enable_set 0xf80c -#define aic_g_enabled_stat 0xf810 -#define aic_g_ack 0xf810 -#define aic_g_enable_clr 0xf814 -#define aic_g_options 0xf818 -#define aic_g_version 0xf81c - -/* hia, data fetch engine */ -#define dfe_cfg 0xf000 -#define dfe_prio_0 0xf010 -#define dfe_prio_1 0xf014 -#define dfe_prio_2 0xf018 -#define dfe_prio_3 0xf01c - -/* hia, data fetch engine access monitoring for cdr */ -#define dfe_ring_region_lo(x) (0xf080 + ((x) << 3)) -#define dfe_ring_region_hi(x) (0xf084 + ((x) << 3)) - -/* hia, data fetch engine thread control and status for thread */ -#define dfe_thr_ctrl 0xf200 -#define dfe_thr_stat 0xf204 -#define dfe_thr_desc_ctrl 0xf208 -#define dfe_thr_desc_dptr_lo 0xf210 -#define 
dfe_thr_desc_dptr_hi 0xf214 -#define dfe_thr_desc_acdptr_lo 0xf218 -#define dfe_thr_desc_acdptr_hi 0xf21c - -/* hia, data store engine */ -#define dse_cfg 0xf400 -#define dse_prio_0 0xf410 -#define dse_prio_1 0xf414 -#define dse_prio_2 0xf418 -#define dse_prio_3 0xf41c - -/* hia, data store engine access monitoring for rdr */ -#define dse_ring_region_lo(x) (0xf480 + ((x) << 3)) -#define dse_ring_region_hi(x) (0xf484 + ((x) << 3)) - -/* hia, data store engine thread control and status for thread */ -#define dse_thr_ctrl 0xf600 -#define dse_thr_stat 0xf604 -#define dse_thr_desc_ctrl 0xf608 -#define dse_thr_desc_dptr_lo 0xf610 -#define dse_thr_desc_dptr_hi 0xf614 -#define dse_thr_desc_s_dptr_lo 0xf618 -#define dse_thr_desc_s_dptr_hi 0xf61c -#define dse_thr_error_stat 0xf620 - -/* hia global */ -#define hia_mst_ctrl 0xfff4 -#define hia_options 0xfff8 -#define hia_version 0xfffc - -/* processing engine input side, processing engine */ -#define pe_in_dbuf_thresh 0x10000 -#define pe_in_tbuf_thresh 0x10100 - -/* packet engine configuration / status registers */ -#define pe_token_ctrl_stat 0x11000 -#define pe_function_en 0x11004 -#define pe_context_ctrl 0x11008 -#define pe_interrupt_ctrl_stat 0x11010 -#define pe_context_stat 0x1100c -#define pe_out_trans_ctrl_stat 0x11018 -#define pe_out_buf_ctrl 0x1101c - -/* packet engine prng registers */ -#define pe_prng_stat 0x11040 -#define pe_prng_ctrl 0x11044 -#define pe_prng_seed_l 0x11048 -#define pe_prng_seed_h 0x1104c -#define pe_prng_key_0_l 0x11050 -#define pe_prng_key_0_h 0x11054 -#define pe_prng_key_1_l 0x11058 -#define pe_prng_key_1_h 0x1105c -#define pe_prng_res_0 0x11060 -#define pe_prng_res_1 0x11064 -#define pe_prng_res_2 0x11068 -#define pe_prng_res_3 0x1106c -#define pe_prng_lfsr_l 0x11070 -#define pe_prng_lfsr_h 0x11074 - -/* packet engine aic */ -#define pe_eip96_aic_pol_ctrl 0x113c0 -#define pe_eip96_aic_type_ctrl 0x113c4 -#define pe_eip96_aic_enable_ctrl 0x113c8 -#define pe_eip96_aic_raw_stat 0x113cc -#define 
pe_eip96_aic_enable_set 0x113cc -#define pe_eip96_aic_enabled_stat 0x113d0 -#define pe_eip96_aic_ack 0x113d0 -#define pe_eip96_aic_enable_clr 0x113d4 -#define pe_eip96_aic_options 0x113d8 -#define pe_eip96_aic_version 0x113dc - -/* packet engine options & version registers */ -#define pe_eip96_options 0x113f8 -#define pe_eip96_version 0x113fc - -/* processing engine output side */ -#define pe_out_dbuf_thresh 0x11c00 -#define pe_out_tbuf_thresh 0x11d00 - -/* processing engine local aic */ -#define pe_aic_pol_ctrl 0x11f00 -#define pe_aic_type_ctrl 0x11f04 -#define pe_aic_enable_ctrl 0x11f08 -#define pe_aic_raw_stat 0x11f0c -#define pe_aic_enable_set 0x11f0c -#define pe_aic_enabled_stat 0x11f10 -#define pe_aic_enable_clr 0x11f14 -#define pe_aic_options 0x11f18 -#define pe_aic_version 0x11f1c - -/* processing engine general configuration and version */ -#define pe_in_flight 0x11ff0 -#define pe_options 0x11ff8 -#define pe_version 0x11ffc - -/* eip-97 - global */ -#define eip97_clock_state 0x1ffe4 -#define eip97_force_clock_on 0x1ffe8 -#define eip97_force_clock_off 0x1ffec -#define eip97_mst_ctrl 0x1fff4 -#define eip97_options 0x1fff8 -#define eip97_version 0x1fffc -#endif /* __mtk_regs_h__ */ diff --git a/drivers/crypto/mediatek/mtk-sha.c b/drivers/crypto/mediatek/mtk-sha.c --- a/drivers/crypto/mediatek/mtk-sha.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-only -/* - * cryptographic api. - * - * driver for eip97 sha1/sha2(hmac) acceleration. - * - * copyright (c) 2016 ryder lee <ryder.lee@mediatek.com> - * - * some ideas are from atmel-sha.c and omap-sham.c drivers. 
- */ - -#include <crypto/hmac.h> -#include <crypto/sha1.h> -#include <crypto/sha2.h> -#include "mtk-platform.h" - -#define sha_align_msk (sizeof(u32) - 1) -#define sha_queue_size 512 -#define sha_buf_size ((u32)page_size) - -#define sha_op_update 1 -#define sha_op_final 2 - -#define sha_data_len_msk cpu_to_le32(genmask(16, 0)) -#define sha_max_digest_buf_size 32 - -/* sha command token */ -#define sha_ct_size 5 -#define sha_ct_ctrl_hdr cpu_to_le32(0x02220000) -#define sha_cmd0 cpu_to_le32(0x03020000) -#define sha_cmd1 cpu_to_le32(0x21060000) -#define sha_cmd2 cpu_to_le32(0xe0e63802) - -/* sha transform information */ -#define sha_tfm_hash cpu_to_le32(0x2 << 0) -#define sha_tfm_size(x) cpu_to_le32((x) << 8) -#define sha_tfm_start cpu_to_le32(0x1 << 4) -#define sha_tfm_continue cpu_to_le32(0x1 << 5) -#define sha_tfm_hash_store cpu_to_le32(0x1 << 19) -#define sha_tfm_sha1 cpu_to_le32(0x2 << 23) -#define sha_tfm_sha256 cpu_to_le32(0x3 << 23) -#define sha_tfm_sha224 cpu_to_le32(0x4 << 23) -#define sha_tfm_sha512 cpu_to_le32(0x5 << 23) -#define sha_tfm_sha384 cpu_to_le32(0x6 << 23) -#define sha_tfm_digest(x) cpu_to_le32(((x) & genmask(3, 0)) << 24) - -/* sha flags */ -#define sha_flags_busy bit(0) -#define sha_flags_final bit(1) -#define sha_flags_finup bit(2) -#define sha_flags_sg bit(3) -#define sha_flags_algo_msk genmask(8, 4) -#define sha_flags_sha1 bit(4) -#define sha_flags_sha224 bit(5) -#define sha_flags_sha256 bit(6) -#define sha_flags_sha384 bit(7) -#define sha_flags_sha512 bit(8) -#define sha_flags_hmac bit(9) -#define sha_flags_pad bit(10) - -/** - * mtk_sha_info - hardware information of aes - * @cmd: command token, hardware instruction - * @tfm: transform state of cipher algorithm. - * @state: contains keys and initial vectors. 
- * - */ -struct mtk_sha_info { - __le32 ctrl[2]; - __le32 cmd[3]; - __le32 tfm[2]; - __le32 digest[sha_max_digest_buf_size]; -}; - -struct mtk_sha_reqctx { - struct mtk_sha_info info; - unsigned long flags; - unsigned long op; - - u64 digcnt; - size_t bufcnt; - dma_addr_t dma_addr; - - __le32 ct_hdr; - u32 ct_size; - dma_addr_t ct_dma; - dma_addr_t tfm_dma; - - /* walk state */ - struct scatterlist *sg; - u32 offset; /* offset in current sg */ - u32 total; /* total request */ - size_t ds; - size_t bs; - - u8 *buffer; -}; - -struct mtk_sha_hmac_ctx { - struct crypto_shash *shash; - u8 ipad[sha512_block_size] __aligned(sizeof(u32)); - u8 opad[sha512_block_size] __aligned(sizeof(u32)); -}; - -struct mtk_sha_ctx { - struct mtk_cryp *cryp; - unsigned long flags; - u8 id; - u8 buf[sha_buf_size] __aligned(sizeof(u32)); - - struct mtk_sha_hmac_ctx base[]; -}; - -struct mtk_sha_drv { - struct list_head dev_list; - /* device list lock */ - spinlock_t lock; -}; - -static struct mtk_sha_drv mtk_sha = { - .dev_list = list_head_init(mtk_sha.dev_list), - .lock = __spin_lock_unlocked(mtk_sha.lock), -}; - -static int mtk_sha_handle_queue(struct mtk_cryp *cryp, u8 id, - struct ahash_request *req); - -static inline u32 mtk_sha_read(struct mtk_cryp *cryp, u32 offset) -{ - return readl_relaxed(cryp->base + offset); -} - -static inline void mtk_sha_write(struct mtk_cryp *cryp, - u32 offset, u32 value) -{ - writel_relaxed(value, cryp->base + offset); -} - -static inline void mtk_sha_ring_shift(struct mtk_ring *ring, - struct mtk_desc **cmd_curr, - struct mtk_desc **res_curr, - int *count) -{ - *cmd_curr = ring->cmd_next++; - *res_curr = ring->res_next++; - (*count)++; - - if (ring->cmd_next == ring->cmd_base + mtk_desc_num) { - ring->cmd_next = ring->cmd_base; - ring->res_next = ring->res_base; - } -} - -static struct mtk_cryp *mtk_sha_find_dev(struct mtk_sha_ctx *tctx) -{ - struct mtk_cryp *cryp = null; - struct mtk_cryp *tmp; - - spin_lock_bh(&mtk_sha.lock); - if (!tctx->cryp) { - 
list_for_each_entry(tmp, &mtk_sha.dev_list, sha_list) { - cryp = tmp; - break; - } - tctx->cryp = cryp; - } else { - cryp = tctx->cryp; - } - - /* - * assign record id to tfm in round-robin fashion, and this - * will help tfm to bind to corresponding descriptor rings. - */ - tctx->id = cryp->rec; - cryp->rec = !cryp->rec; - - spin_unlock_bh(&mtk_sha.lock); - - return cryp; -} - -static int mtk_sha_append_sg(struct mtk_sha_reqctx *ctx) -{ - size_t count; - - while ((ctx->bufcnt < sha_buf_size) && ctx->total) { - count = min(ctx->sg->length - ctx->offset, ctx->total); - count = min(count, sha_buf_size - ctx->bufcnt); - - if (count <= 0) { - /* - * check if count <= 0 because the buffer is full or - * because the sg length is 0. in the latest case, - * check if there is another sg in the list, a 0 length - * sg doesn't necessarily mean the end of the sg list. - */ - if ((ctx->sg->length == 0) && !sg_is_last(ctx->sg)) { - ctx->sg = sg_next(ctx->sg); - continue; - } else { - break; - } - } - - scatterwalk_map_and_copy(ctx->buffer + ctx->bufcnt, ctx->sg, - ctx->offset, count, 0); - - ctx->bufcnt += count; - ctx->offset += count; - ctx->total -= count; - - if (ctx->offset == ctx->sg->length) { - ctx->sg = sg_next(ctx->sg); - if (ctx->sg) - ctx->offset = 0; - else - ctx->total = 0; - } - } - - return 0; -} - -/* - * the purpose of this padding is to ensure that the padded message is a - * multiple of 512 bits (sha1/sha224/sha256) or 1024 bits (sha384/sha512). - * the bit "1" is appended at the end of the message followed by - * "padlen-1" zero bits. then a 64 bits block (sha1/sha224/sha256) or - * 128 bits block (sha384/sha512) equals to the message length in bits - * is appended. 
- *
- * For SHA1/SHA224/SHA256, padlen is calculated as follows:
- * - if message length < 56 bytes then padlen = 56 - message length
- * - else padlen = 64 + 56 - message length
- *
- * For SHA384/SHA512, padlen is calculated as follows:
- * - if message length < 112 bytes then padlen = 112 - message length
- * - else padlen = 128 + 112 - message length
- */
-static void mtk_sha_fill_padding(struct mtk_sha_reqctx *ctx, u32 len)
-{
-        u32 index, padlen;
-        __be64 bits[2];
-        u64 size = ctx->digcnt;
-
-        size += ctx->bufcnt;
-        size += len;
-
-        bits[1] = cpu_to_be64(size << 3);
-        bits[0] = cpu_to_be64(size >> 61);
-
-        switch (ctx->flags & SHA_FLAGS_ALGO_MSK) {
-        case SHA_FLAGS_SHA384:
-        case SHA_FLAGS_SHA512:
-                index = ctx->bufcnt & 0x7f;
-                padlen = (index < 112) ? (112 - index) : ((128 + 112) - index);
-                *(ctx->buffer + ctx->bufcnt) = 0x80;
-                memset(ctx->buffer + ctx->bufcnt + 1, 0, padlen - 1);
-                memcpy(ctx->buffer + ctx->bufcnt + padlen, bits, 16);
-                ctx->bufcnt += padlen + 16;
-                ctx->flags |= SHA_FLAGS_PAD;
-                break;
-
-        default:
-                index = ctx->bufcnt & 0x3f;
-                padlen = (index < 56) ? (56 - index) : ((64 + 56) - index);
-                *(ctx->buffer + ctx->bufcnt) = 0x80;
-                memset(ctx->buffer + ctx->bufcnt + 1, 0, padlen - 1);
-                memcpy(ctx->buffer + ctx->bufcnt + padlen, &bits[1], 8);
-                ctx->bufcnt += padlen + 8;
-                ctx->flags |= SHA_FLAGS_PAD;
-                break;
-        }
-}
-
-/* Initialize basic transform information of SHA */
-static void mtk_sha_info_init(struct mtk_sha_reqctx *ctx)
-{
-        struct mtk_sha_info *info = &ctx->info;
-
-        ctx->ct_hdr = SHA_CT_CTRL_HDR;
-        ctx->ct_size = SHA_CT_SIZE;
-
-        info->tfm[0] = SHA_TFM_HASH | SHA_TFM_SIZE(SIZE_IN_WORDS(ctx->ds));
-
-        switch (ctx->flags & SHA_FLAGS_ALGO_MSK) {
-        case SHA_FLAGS_SHA1:
-                info->tfm[0] |= SHA_TFM_SHA1;
-                break;
-        case SHA_FLAGS_SHA224:
-                info->tfm[0] |= SHA_TFM_SHA224;
-                break;
-        case SHA_FLAGS_SHA256:
-                info->tfm[0] |= SHA_TFM_SHA256;
-                break;
-        case SHA_FLAGS_SHA384:
-                info->tfm[0] |= SHA_TFM_SHA384;
-                break;
-        case SHA_FLAGS_SHA512:
-                info->tfm[0] |= SHA_TFM_SHA512;
-                break;
-
-        default:
-                /* Should not happen... */
-                return;
-        }
-
-        info->tfm[1] = SHA_TFM_HASH_STORE;
-        info->ctrl[0] = info->tfm[0] | SHA_TFM_CONTINUE | SHA_TFM_START;
-        info->ctrl[1] = info->tfm[1];
-
-        info->cmd[0] = SHA_CMD0;
-        info->cmd[1] = SHA_CMD1;
-        info->cmd[2] = SHA_CMD2 | SHA_TFM_DIGEST(SIZE_IN_WORDS(ctx->ds));
-}
-
-/*
- * Update the input data length field of the transform information and
- * map it to the DMA region.
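The two padlen formulas used by `mtk_sha_fill_padding()` can be checked in isolation; this sketch extracts just that arithmetic (the function name and parameters are mine, not from the driver):

```c
#include <stdint.h>

/* Padding length per the SHA rules above: pad to 56 mod 64 bytes for
 * SHA1/SHA224/SHA256, or to 112 mod 128 bytes for SHA384/SHA512,
 * leaving room for the 8- or 16-byte bit-length trailer that follows. */
static uint32_t sha_padlen(uint32_t bufcnt, int sha512_family)
{
        uint32_t index;

        if (sha512_family) {
                index = bufcnt & 0x7f;
                return (index < 112) ? (112 - index) : ((128 + 112) - index);
        }
        index = bufcnt & 0x3f;
        return (index < 56) ? (56 - index) : ((64 + 56) - index);
}
```

Note padlen is always at least 1, since the mandatory 0x80 byte must fit before the length trailer even when the buffer already sits exactly at the threshold.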
- */
-static int mtk_sha_info_update(struct mtk_cryp *cryp,
-                               struct mtk_sha_rec *sha,
-                               size_t len1, size_t len2)
-{
-        struct mtk_sha_reqctx *ctx = ahash_request_ctx(sha->req);
-        struct mtk_sha_info *info = &ctx->info;
-
-        ctx->ct_hdr &= ~SHA_DATA_LEN_MSK;
-        ctx->ct_hdr |= cpu_to_le32(len1 + len2);
-        info->cmd[0] &= ~SHA_DATA_LEN_MSK;
-        info->cmd[0] |= cpu_to_le32(len1 + len2);
-
-        /* Setting SHA_TFM_START only for the first iteration */
-        if (ctx->digcnt)
-                info->ctrl[0] &= ~SHA_TFM_START;
-
-        ctx->digcnt += len1;
-
-        ctx->ct_dma = dma_map_single(cryp->dev, info, sizeof(*info),
-                                     DMA_BIDIRECTIONAL);
-        if (unlikely(dma_mapping_error(cryp->dev, ctx->ct_dma))) {
-                dev_err(cryp->dev, "dma %zu bytes error\n", sizeof(*info));
-                return -EINVAL;
-        }
-
-        ctx->tfm_dma = ctx->ct_dma + sizeof(info->ctrl) + sizeof(info->cmd);
-
-        return 0;
-}
-
-/*
- * Because of a hardware limitation, we must pre-calculate the inner
- * and outer digests, which need to be processed first by the engine, then
- * apply the resulting digest to the input message. These extra hashing
- * steps limit HMAC performance, so we fall back to software hashing here.
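The inner/outer digests mentioned in that comment come from the standard HMAC construction, H(key^opad || H(key^ipad || msg)). The setkey path that fills `bctx->ipad`/`bctx->opad` is not part of this hunk, so the following is only an illustrative sketch of how the two pads are derived from a key already shortened to at most the block size:

```c
#include <string.h>
#include <stdint.h>
#include <stddef.h>

#define HMAC_IPAD_VALUE 0x36
#define HMAC_OPAD_VALUE 0x5c

/* Derive the HMAC inner and outer pads: zero-extend the key to the
 * block size bs, then XOR each byte with the ipad/opad constants. */
static void hmac_pads(const uint8_t *key, size_t keylen, size_t bs,
                      uint8_t *ipad, uint8_t *opad)
{
        size_t i;

        memset(ipad, 0, bs);
        memcpy(ipad, key, keylen);
        for (i = 0; i < bs; i++) {
                opad[i] = ipad[i] ^ HMAC_OPAD_VALUE;
                ipad[i] ^= HMAC_IPAD_VALUE;
        }
}
```

`HMAC_IPAD_VALUE`/`HMAC_OPAD_VALUE` match the constants the kernel defines in `crypto/hmac.h`, which this file includes.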
- */ -static int mtk_sha_finish_hmac(struct ahash_request *req) -{ - struct mtk_sha_ctx *tctx = crypto_tfm_ctx(req->base.tfm); - struct mtk_sha_hmac_ctx *bctx = tctx->base; - struct mtk_sha_reqctx *ctx = ahash_request_ctx(req); - - shash_desc_on_stack(shash, bctx->shash); - - shash->tfm = bctx->shash; - - return crypto_shash_init(shash) ?: - crypto_shash_update(shash, bctx->opad, ctx->bs) ?: - crypto_shash_finup(shash, req->result, ctx->ds, req->result); -} - -/* initialize request context */ -static int mtk_sha_init(struct ahash_request *req) -{ - struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); - struct mtk_sha_ctx *tctx = crypto_ahash_ctx(tfm); - struct mtk_sha_reqctx *ctx = ahash_request_ctx(req); - - ctx->flags = 0; - ctx->ds = crypto_ahash_digestsize(tfm); - - switch (ctx->ds) { - case sha1_digest_size: - ctx->flags |= sha_flags_sha1; - ctx->bs = sha1_block_size; - break; - case sha224_digest_size: - ctx->flags |= sha_flags_sha224; - ctx->bs = sha224_block_size; - break; - case sha256_digest_size: - ctx->flags |= sha_flags_sha256; - ctx->bs = sha256_block_size; - break; - case sha384_digest_size: - ctx->flags |= sha_flags_sha384; - ctx->bs = sha384_block_size; - break; - case sha512_digest_size: - ctx->flags |= sha_flags_sha512; - ctx->bs = sha512_block_size; - break; - default: - return -einval; - } - - ctx->bufcnt = 0; - ctx->digcnt = 0; - ctx->buffer = tctx->buf; - - if (tctx->flags & sha_flags_hmac) { - struct mtk_sha_hmac_ctx *bctx = tctx->base; - - memcpy(ctx->buffer, bctx->ipad, ctx->bs); - ctx->bufcnt = ctx->bs; - ctx->flags |= sha_flags_hmac; - } - - return 0; -} - -static int mtk_sha_xmit(struct mtk_cryp *cryp, struct mtk_sha_rec *sha, - dma_addr_t addr1, size_t len1, - dma_addr_t addr2, size_t len2) -{ - struct mtk_sha_reqctx *ctx = ahash_request_ctx(sha->req); - struct mtk_ring *ring = cryp->ring[sha->id]; - struct mtk_desc *cmd, *res; - int err, count = 0; - - err = mtk_sha_info_update(cryp, sha, len1, len2); - if (err) - return err; - - /* 
fill in the command/result descriptors */ - mtk_sha_ring_shift(ring, &cmd, &res, &count); - - res->hdr = mtk_desc_first | mtk_desc_buf_len(len1); - cmd->hdr = mtk_desc_first | mtk_desc_buf_len(len1) | - mtk_desc_ct_len(ctx->ct_size); - cmd->buf = cpu_to_le32(addr1); - cmd->ct = cpu_to_le32(ctx->ct_dma); - cmd->ct_hdr = ctx->ct_hdr; - cmd->tfm = cpu_to_le32(ctx->tfm_dma); - - if (len2) { - mtk_sha_ring_shift(ring, &cmd, &res, &count); - - res->hdr = mtk_desc_buf_len(len2); - cmd->hdr = mtk_desc_buf_len(len2); - cmd->buf = cpu_to_le32(addr2); - } - - cmd->hdr |= mtk_desc_last; - res->hdr |= mtk_desc_last; - - /* - * make sure that all changes to the dma ring are done before we - * start engine. - */ - wmb(); - /* start dma transfer */ - mtk_sha_write(cryp, rdr_prep_count(sha->id), mtk_desc_cnt(count)); - mtk_sha_write(cryp, cdr_prep_count(sha->id), mtk_desc_cnt(count)); - - return -einprogress; -} - -static int mtk_sha_dma_map(struct mtk_cryp *cryp, - struct mtk_sha_rec *sha, - struct mtk_sha_reqctx *ctx, - size_t count) -{ - ctx->dma_addr = dma_map_single(cryp->dev, ctx->buffer, - sha_buf_size, dma_to_device); - if (unlikely(dma_mapping_error(cryp->dev, ctx->dma_addr))) { - dev_err(cryp->dev, "dma map error "); - return -einval; - } - - ctx->flags &= ~sha_flags_sg; - - return mtk_sha_xmit(cryp, sha, ctx->dma_addr, count, 0, 0); -} - -static int mtk_sha_update_slow(struct mtk_cryp *cryp, - struct mtk_sha_rec *sha) -{ - struct mtk_sha_reqctx *ctx = ahash_request_ctx(sha->req); - size_t count; - u32 final; - - mtk_sha_append_sg(ctx); - - final = (ctx->flags & sha_flags_finup) && !ctx->total; - - dev_dbg(cryp->dev, "slow: bufcnt: %zu ", ctx->bufcnt); - - if (final) { - sha->flags |= sha_flags_final; - mtk_sha_fill_padding(ctx, 0); - } - - if (final || (ctx->bufcnt == sha_buf_size && ctx->total)) { - count = ctx->bufcnt; - ctx->bufcnt = 0; - - return mtk_sha_dma_map(cryp, sha, ctx, count); - } - return 0; -} - -static int mtk_sha_update_start(struct mtk_cryp *cryp, - 
struct mtk_sha_rec *sha) -{ - struct mtk_sha_reqctx *ctx = ahash_request_ctx(sha->req); - u32 len, final, tail; - struct scatterlist *sg; - - if (!ctx->total) - return 0; - - if (ctx->bufcnt || ctx->offset) - return mtk_sha_update_slow(cryp, sha); - - sg = ctx->sg; - - if (!is_aligned(sg->offset, sizeof(u32))) - return mtk_sha_update_slow(cryp, sha); - - if (!sg_is_last(sg) && !is_aligned(sg->length, ctx->bs)) - /* size is not ctx->bs aligned */ - return mtk_sha_update_slow(cryp, sha); - - len = min(ctx->total, sg->length); - - if (sg_is_last(sg)) { - if (!(ctx->flags & sha_flags_finup)) { - /* not last sg must be ctx->bs aligned */ - tail = len & (ctx->bs - 1); - len -= tail; - } - } - - ctx->total -= len; - ctx->offset = len; /* offset where to start slow */ - - final = (ctx->flags & sha_flags_finup) && !ctx->total; - - /* add padding */ - if (final) { - size_t count; - - tail = len & (ctx->bs - 1); - len -= tail; - ctx->total += tail; - ctx->offset = len; /* offset where to start slow */ - - sg = ctx->sg; - mtk_sha_append_sg(ctx); - mtk_sha_fill_padding(ctx, len); - - ctx->dma_addr = dma_map_single(cryp->dev, ctx->buffer, - sha_buf_size, dma_to_device); - if (unlikely(dma_mapping_error(cryp->dev, ctx->dma_addr))) { - dev_err(cryp->dev, "dma map bytes error "); - return -einval; - } - - sha->flags |= sha_flags_final; - count = ctx->bufcnt; - ctx->bufcnt = 0; - - if (len == 0) { - ctx->flags &= ~sha_flags_sg; - return mtk_sha_xmit(cryp, sha, ctx->dma_addr, - count, 0, 0); - - } else { - ctx->sg = sg; - if (!dma_map_sg(cryp->dev, ctx->sg, 1, dma_to_device)) { - dev_err(cryp->dev, "dma_map_sg error "); - return -einval; - } - - ctx->flags |= sha_flags_sg; - return mtk_sha_xmit(cryp, sha, sg_dma_address(ctx->sg), - len, ctx->dma_addr, count); - } - } - - if (!dma_map_sg(cryp->dev, ctx->sg, 1, dma_to_device)) { - dev_err(cryp->dev, "dma_map_sg error "); - return -einval; - } - - ctx->flags |= sha_flags_sg; - - return mtk_sha_xmit(cryp, sha, sg_dma_address(ctx->sg), - 
len, 0, 0); -} - -static int mtk_sha_final_req(struct mtk_cryp *cryp, - struct mtk_sha_rec *sha) -{ - struct mtk_sha_reqctx *ctx = ahash_request_ctx(sha->req); - size_t count; - - mtk_sha_fill_padding(ctx, 0); - - sha->flags |= sha_flags_final; - count = ctx->bufcnt; - ctx->bufcnt = 0; - - return mtk_sha_dma_map(cryp, sha, ctx, count); -} - -/* copy ready hash (+ finalize hmac) */ -static int mtk_sha_finish(struct ahash_request *req) -{ - struct mtk_sha_reqctx *ctx = ahash_request_ctx(req); - __le32 *digest = ctx->info.digest; - u32 *result = (u32 *)req->result; - int i; - - /* get the hash from the digest buffer */ - for (i = 0; i < size_in_words(ctx->ds); i++) - result[i] = le32_to_cpu(digest[i]); - - if (ctx->flags & sha_flags_hmac) - return mtk_sha_finish_hmac(req); - - return 0; -} - -static void mtk_sha_finish_req(struct mtk_cryp *cryp, - struct mtk_sha_rec *sha, - int err) -{ - if (likely(!err && (sha_flags_final & sha->flags))) - err = mtk_sha_finish(sha->req); - - sha->flags &= ~(sha_flags_busy | sha_flags_final); - - sha->req->base.complete(&sha->req->base, err); - - /* handle new request */ - tasklet_schedule(&sha->queue_task); -} - -static int mtk_sha_handle_queue(struct mtk_cryp *cryp, u8 id, - struct ahash_request *req) -{ - struct mtk_sha_rec *sha = cryp->sha[id]; - struct crypto_async_request *async_req, *backlog; - struct mtk_sha_reqctx *ctx; - unsigned long flags; - int err = 0, ret = 0; - - spin_lock_irqsave(&sha->lock, flags); - if (req) - ret = ahash_enqueue_request(&sha->queue, req); - - if (sha_flags_busy & sha->flags) { - spin_unlock_irqrestore(&sha->lock, flags); - return ret; - } - - backlog = crypto_get_backlog(&sha->queue); - async_req = crypto_dequeue_request(&sha->queue); - if (async_req) - sha->flags |= sha_flags_busy; - spin_unlock_irqrestore(&sha->lock, flags); - - if (!async_req) - return ret; - - if (backlog) - backlog->complete(backlog, -einprogress); - - req = ahash_request_cast(async_req); - ctx = ahash_request_ctx(req); - - 
sha->req = req; - - mtk_sha_info_init(ctx); - - if (ctx->op == sha_op_update) { - err = mtk_sha_update_start(cryp, sha); - if (err != -einprogress && (ctx->flags & sha_flags_finup)) - /* no final() after finup() */ - err = mtk_sha_final_req(cryp, sha); - } else if (ctx->op == sha_op_final) { - err = mtk_sha_final_req(cryp, sha); - } - - if (unlikely(err != -einprogress)) - /* task will not finish it, so do it here */ - mtk_sha_finish_req(cryp, sha, err); - - return ret; -} - -static int mtk_sha_enqueue(struct ahash_request *req, u32 op) -{ - struct mtk_sha_reqctx *ctx = ahash_request_ctx(req); - struct mtk_sha_ctx *tctx = crypto_tfm_ctx(req->base.tfm); - - ctx->op = op; - - return mtk_sha_handle_queue(tctx->cryp, tctx->id, req); -} - -static void mtk_sha_unmap(struct mtk_cryp *cryp, struct mtk_sha_rec *sha) -{ - struct mtk_sha_reqctx *ctx = ahash_request_ctx(sha->req); - - dma_unmap_single(cryp->dev, ctx->ct_dma, sizeof(ctx->info), - dma_bidirectional); - - if (ctx->flags & sha_flags_sg) { - dma_unmap_sg(cryp->dev, ctx->sg, 1, dma_to_device); - if (ctx->sg->length == ctx->offset) { - ctx->sg = sg_next(ctx->sg); - if (ctx->sg) - ctx->offset = 0; - } - if (ctx->flags & sha_flags_pad) { - dma_unmap_single(cryp->dev, ctx->dma_addr, - sha_buf_size, dma_to_device); - } - } else - dma_unmap_single(cryp->dev, ctx->dma_addr, - sha_buf_size, dma_to_device); -} - -static void mtk_sha_complete(struct mtk_cryp *cryp, - struct mtk_sha_rec *sha) -{ - int err = 0; - - err = mtk_sha_update_start(cryp, sha); - if (err != -einprogress) - mtk_sha_finish_req(cryp, sha, err); -} - -static int mtk_sha_update(struct ahash_request *req) -{ - struct mtk_sha_reqctx *ctx = ahash_request_ctx(req); - - ctx->total = req->nbytes; - ctx->sg = req->src; - ctx->offset = 0; - - if ((ctx->bufcnt + ctx->total < sha_buf_size) && - !(ctx->flags & sha_flags_finup)) - return mtk_sha_append_sg(ctx); - - return mtk_sha_enqueue(req, sha_op_update); -} - -static int mtk_sha_final(struct ahash_request *req) -{ 
- struct mtk_sha_reqctx *ctx = ahash_request_ctx(req); - - ctx->flags |= sha_flags_finup; - - if (ctx->flags & sha_flags_pad) - return mtk_sha_finish(req); - - return mtk_sha_enqueue(req, sha_op_final); -} - -static int mtk_sha_finup(struct ahash_request *req) -{ - struct mtk_sha_reqctx *ctx = ahash_request_ctx(req); - int err1, err2; - - ctx->flags |= sha_flags_finup; - - err1 = mtk_sha_update(req); - if (err1 == -einprogress || - (err1 == -ebusy && (ahash_request_flags(req) & - crypto_tfm_req_may_backlog))) - return err1; - /* - * final() has to be always called to cleanup resources - * even if update() failed - */ - err2 = mtk_sha_final(req); - - return err1 ?: err2; -} - -static int mtk_sha_digest(struct ahash_request *req) -{ - return mtk_sha_init(req) ?: mtk_sha_finup(req); -} - -static int mtk_sha_setkey(struct crypto_ahash *tfm, const u8 *key, - u32 keylen) -{ - struct mtk_sha_ctx *tctx = crypto_ahash_ctx(tfm); - struct mtk_sha_hmac_ctx *bctx = tctx->base; - size_t bs = crypto_shash_blocksize(bctx->shash); - size_t ds = crypto_shash_digestsize(bctx->shash); - int err, i; - - if (keylen > bs) { - err = crypto_shash_tfm_digest(bctx->shash, key, keylen, - bctx->ipad); - if (err) - return err; - keylen = ds; - } else { - memcpy(bctx->ipad, key, keylen); - } - - memset(bctx->ipad + keylen, 0, bs - keylen); - memcpy(bctx->opad, bctx->ipad, bs); - - for (i = 0; i < bs; i++) { - bctx->ipad[i] ^= hmac_ipad_value; - bctx->opad[i] ^= hmac_opad_value; - } - - return 0; -} - -static int mtk_sha_export(struct ahash_request *req, void *out) -{ - const struct mtk_sha_reqctx *ctx = ahash_request_ctx(req); - - memcpy(out, ctx, sizeof(*ctx)); - return 0; -} - -static int mtk_sha_import(struct ahash_request *req, const void *in) -{ - struct mtk_sha_reqctx *ctx = ahash_request_ctx(req); - - memcpy(ctx, in, sizeof(*ctx)); - return 0; -} - -static int mtk_sha_cra_init_alg(struct crypto_tfm *tfm, - const char *alg_base) -{ - struct mtk_sha_ctx *tctx = crypto_tfm_ctx(tfm); - struct 
mtk_cryp *cryp = null; - - cryp = mtk_sha_find_dev(tctx); - if (!cryp) - return -enodev; - - crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm), - sizeof(struct mtk_sha_reqctx)); - - if (alg_base) { - struct mtk_sha_hmac_ctx *bctx = tctx->base; - - tctx->flags |= sha_flags_hmac; - bctx->shash = crypto_alloc_shash(alg_base, 0, - crypto_alg_need_fallback); - if (is_err(bctx->shash)) { - pr_err("base driver %s could not be loaded. ", - alg_base); - - return ptr_err(bctx->shash); - } - } - return 0; -} - -static int mtk_sha_cra_init(struct crypto_tfm *tfm) -{ - return mtk_sha_cra_init_alg(tfm, null); -} - -static int mtk_sha_cra_sha1_init(struct crypto_tfm *tfm) -{ - return mtk_sha_cra_init_alg(tfm, "sha1"); -} - -static int mtk_sha_cra_sha224_init(struct crypto_tfm *tfm) -{ - return mtk_sha_cra_init_alg(tfm, "sha224"); -} - -static int mtk_sha_cra_sha256_init(struct crypto_tfm *tfm) -{ - return mtk_sha_cra_init_alg(tfm, "sha256"); -} - -static int mtk_sha_cra_sha384_init(struct crypto_tfm *tfm) -{ - return mtk_sha_cra_init_alg(tfm, "sha384"); -} - -static int mtk_sha_cra_sha512_init(struct crypto_tfm *tfm) -{ - return mtk_sha_cra_init_alg(tfm, "sha512"); -} - -static void mtk_sha_cra_exit(struct crypto_tfm *tfm) -{ - struct mtk_sha_ctx *tctx = crypto_tfm_ctx(tfm); - - if (tctx->flags & sha_flags_hmac) { - struct mtk_sha_hmac_ctx *bctx = tctx->base; - - crypto_free_shash(bctx->shash); - } -} - -static struct ahash_alg algs_sha1_sha224_sha256[] = { -{ - .init = mtk_sha_init, - .update = mtk_sha_update, - .final = mtk_sha_final, - .finup = mtk_sha_finup, - .digest = mtk_sha_digest, - .export = mtk_sha_export, - .import = mtk_sha_import, - .halg.digestsize = sha1_digest_size, - .halg.statesize = sizeof(struct mtk_sha_reqctx), - .halg.base = { - .cra_name = "sha1", - .cra_driver_name = "mtk-sha1", - .cra_priority = 400, - .cra_flags = crypto_alg_async, - .cra_blocksize = sha1_block_size, - .cra_ctxsize = sizeof(struct mtk_sha_ctx), - .cra_alignmask = sha_align_msk, - 
.cra_module = this_module, - .cra_init = mtk_sha_cra_init, - .cra_exit = mtk_sha_cra_exit, - } -}, -{ - .init = mtk_sha_init, - .update = mtk_sha_update, - .final = mtk_sha_final, - .finup = mtk_sha_finup, - .digest = mtk_sha_digest, - .export = mtk_sha_export, - .import = mtk_sha_import, - .halg.digestsize = sha224_digest_size, - .halg.statesize = sizeof(struct mtk_sha_reqctx), - .halg.base = { - .cra_name = "sha224", - .cra_driver_name = "mtk-sha224", - .cra_priority = 400, - .cra_flags = crypto_alg_async, - .cra_blocksize = sha224_block_size, - .cra_ctxsize = sizeof(struct mtk_sha_ctx), - .cra_alignmask = sha_align_msk, - .cra_module = this_module, - .cra_init = mtk_sha_cra_init, - .cra_exit = mtk_sha_cra_exit, - } -}, -{ - .init = mtk_sha_init, - .update = mtk_sha_update, - .final = mtk_sha_final, - .finup = mtk_sha_finup, - .digest = mtk_sha_digest, - .export = mtk_sha_export, - .import = mtk_sha_import, - .halg.digestsize = sha256_digest_size, - .halg.statesize = sizeof(struct mtk_sha_reqctx), - .halg.base = { - .cra_name = "sha256", - .cra_driver_name = "mtk-sha256", - .cra_priority = 400, - .cra_flags = crypto_alg_async, - .cra_blocksize = sha256_block_size, - .cra_ctxsize = sizeof(struct mtk_sha_ctx), - .cra_alignmask = sha_align_msk, - .cra_module = this_module, - .cra_init = mtk_sha_cra_init, - .cra_exit = mtk_sha_cra_exit, - } -}, -{ - .init = mtk_sha_init, - .update = mtk_sha_update, - .final = mtk_sha_final, - .finup = mtk_sha_finup, - .digest = mtk_sha_digest, - .export = mtk_sha_export, - .import = mtk_sha_import, - .setkey = mtk_sha_setkey, - .halg.digestsize = sha1_digest_size, - .halg.statesize = sizeof(struct mtk_sha_reqctx), - .halg.base = { - .cra_name = "hmac(sha1)", - .cra_driver_name = "mtk-hmac-sha1", - .cra_priority = 400, - .cra_flags = crypto_alg_async | - crypto_alg_need_fallback, - .cra_blocksize = sha1_block_size, - .cra_ctxsize = sizeof(struct mtk_sha_ctx) + - sizeof(struct mtk_sha_hmac_ctx), - .cra_alignmask = sha_align_msk, - 
.cra_module = this_module, - .cra_init = mtk_sha_cra_sha1_init, - .cra_exit = mtk_sha_cra_exit, - } -}, -{ - .init = mtk_sha_init, - .update = mtk_sha_update, - .final = mtk_sha_final, - .finup = mtk_sha_finup, - .digest = mtk_sha_digest, - .export = mtk_sha_export, - .import = mtk_sha_import, - .setkey = mtk_sha_setkey, - .halg.digestsize = sha224_digest_size, - .halg.statesize = sizeof(struct mtk_sha_reqctx), - .halg.base = { - .cra_name = "hmac(sha224)", - .cra_driver_name = "mtk-hmac-sha224", - .cra_priority = 400, - .cra_flags = crypto_alg_async | - crypto_alg_need_fallback, - .cra_blocksize = sha224_block_size, - .cra_ctxsize = sizeof(struct mtk_sha_ctx) + - sizeof(struct mtk_sha_hmac_ctx), - .cra_alignmask = sha_align_msk, - .cra_module = this_module, - .cra_init = mtk_sha_cra_sha224_init, - .cra_exit = mtk_sha_cra_exit, - } -}, -{ - .init = mtk_sha_init, - .update = mtk_sha_update, - .final = mtk_sha_final, - .finup = mtk_sha_finup, - .digest = mtk_sha_digest, - .export = mtk_sha_export, - .import = mtk_sha_import, - .setkey = mtk_sha_setkey, - .halg.digestsize = sha256_digest_size, - .halg.statesize = sizeof(struct mtk_sha_reqctx), - .halg.base = { - .cra_name = "hmac(sha256)", - .cra_driver_name = "mtk-hmac-sha256", - .cra_priority = 400, - .cra_flags = crypto_alg_async | - crypto_alg_need_fallback, - .cra_blocksize = sha256_block_size, - .cra_ctxsize = sizeof(struct mtk_sha_ctx) + - sizeof(struct mtk_sha_hmac_ctx), - .cra_alignmask = sha_align_msk, - .cra_module = this_module, - .cra_init = mtk_sha_cra_sha256_init, - .cra_exit = mtk_sha_cra_exit, - } -}, -}; - -static struct ahash_alg algs_sha384_sha512[] = { -{ - .init = mtk_sha_init, - .update = mtk_sha_update, - .final = mtk_sha_final, - .finup = mtk_sha_finup, - .digest = mtk_sha_digest, - .export = mtk_sha_export, - .import = mtk_sha_import, - .halg.digestsize = sha384_digest_size, - .halg.statesize = sizeof(struct mtk_sha_reqctx), - .halg.base = { - .cra_name = "sha384", - .cra_driver_name = 
"mtk-sha384", - .cra_priority = 400, - .cra_flags = crypto_alg_async, - .cra_blocksize = sha384_block_size, - .cra_ctxsize = sizeof(struct mtk_sha_ctx), - .cra_alignmask = sha_align_msk, - .cra_module = this_module, - .cra_init = mtk_sha_cra_init, - .cra_exit = mtk_sha_cra_exit, - } -}, -{ - .init = mtk_sha_init, - .update = mtk_sha_update, - .final = mtk_sha_final, - .finup = mtk_sha_finup, - .digest = mtk_sha_digest, - .export = mtk_sha_export, - .import = mtk_sha_import, - .halg.digestsize = sha512_digest_size, - .halg.statesize = sizeof(struct mtk_sha_reqctx), - .halg.base = { - .cra_name = "sha512", - .cra_driver_name = "mtk-sha512", - .cra_priority = 400, - .cra_flags = crypto_alg_async, - .cra_blocksize = sha512_block_size, - .cra_ctxsize = sizeof(struct mtk_sha_ctx), - .cra_alignmask = sha_align_msk, - .cra_module = this_module, - .cra_init = mtk_sha_cra_init, - .cra_exit = mtk_sha_cra_exit, - } -}, -{ - .init = mtk_sha_init, - .update = mtk_sha_update, - .final = mtk_sha_final, - .finup = mtk_sha_finup, - .digest = mtk_sha_digest, - .export = mtk_sha_export, - .import = mtk_sha_import, - .setkey = mtk_sha_setkey, - .halg.digestsize = sha384_digest_size, - .halg.statesize = sizeof(struct mtk_sha_reqctx), - .halg.base = { - .cra_name = "hmac(sha384)", - .cra_driver_name = "mtk-hmac-sha384", - .cra_priority = 400, - .cra_flags = crypto_alg_async | - crypto_alg_need_fallback, - .cra_blocksize = sha384_block_size, - .cra_ctxsize = sizeof(struct mtk_sha_ctx) + - sizeof(struct mtk_sha_hmac_ctx), - .cra_alignmask = sha_align_msk, - .cra_module = this_module, - .cra_init = mtk_sha_cra_sha384_init, - .cra_exit = mtk_sha_cra_exit, - } -}, -{ - .init = mtk_sha_init, - .update = mtk_sha_update, - .final = mtk_sha_final, - .finup = mtk_sha_finup, - .digest = mtk_sha_digest, - .export = mtk_sha_export, - .import = mtk_sha_import, - .setkey = mtk_sha_setkey, - .halg.digestsize = sha512_digest_size, - .halg.statesize = sizeof(struct mtk_sha_reqctx), - .halg.base = { - 
.cra_name = "hmac(sha512)", - .cra_driver_name = "mtk-hmac-sha512", - .cra_priority = 400, - .cra_flags = crypto_alg_async | - crypto_alg_need_fallback, - .cra_blocksize = sha512_block_size, - .cra_ctxsize = sizeof(struct mtk_sha_ctx) + - sizeof(struct mtk_sha_hmac_ctx), - .cra_alignmask = sha_align_msk, - .cra_module = this_module, - .cra_init = mtk_sha_cra_sha512_init, - .cra_exit = mtk_sha_cra_exit, - } -}, -}; - -static void mtk_sha_queue_task(unsigned long data) -{ - struct mtk_sha_rec *sha = (struct mtk_sha_rec *)data; - - mtk_sha_handle_queue(sha->cryp, sha->id - mtk_ring2, null); -} - -static void mtk_sha_done_task(unsigned long data) -{ - struct mtk_sha_rec *sha = (struct mtk_sha_rec *)data; - struct mtk_cryp *cryp = sha->cryp; - - mtk_sha_unmap(cryp, sha); - mtk_sha_complete(cryp, sha); -} - -static irqreturn_t mtk_sha_irq(int irq, void *dev_id) -{ - struct mtk_sha_rec *sha = (struct mtk_sha_rec *)dev_id; - struct mtk_cryp *cryp = sha->cryp; - u32 val = mtk_sha_read(cryp, rdr_stat(sha->id)); - - mtk_sha_write(cryp, rdr_stat(sha->id), val); - - if (likely((sha_flags_busy & sha->flags))) { - mtk_sha_write(cryp, rdr_proc_count(sha->id), mtk_cnt_rst); - mtk_sha_write(cryp, rdr_thresh(sha->id), - mtk_rdr_proc_thresh | mtk_rdr_proc_mode); - - tasklet_schedule(&sha->done_task); - } else { - dev_warn(cryp->dev, "sha interrupt when no active requests. "); - } - return irq_handled; -} - -/* - * the purpose of two sha records is used to get extra performance. - * it is similar to mtk_aes_record_init(). 
- */ -static int mtk_sha_record_init(struct mtk_cryp *cryp) -{ - struct mtk_sha_rec **sha = cryp->sha; - int i, err = -enomem; - - for (i = 0; i < mtk_rec_num; i++) { - sha[i] = kzalloc(sizeof(**sha), gfp_kernel); - if (!sha[i]) - goto err_cleanup; - - sha[i]->cryp = cryp; - - spin_lock_init(&sha[i]->lock); - crypto_init_queue(&sha[i]->queue, sha_queue_size); - - tasklet_init(&sha[i]->queue_task, mtk_sha_queue_task, - (unsigned long)sha[i]); - tasklet_init(&sha[i]->done_task, mtk_sha_done_task, - (unsigned long)sha[i]); - } - - /* link to ring2 and ring3 respectively */ - sha[0]->id = mtk_ring2; - sha[1]->id = mtk_ring3; - - cryp->rec = 1; - - return 0; - -err_cleanup: - for (; i--; ) - kfree(sha[i]); - return err; -} - -static void mtk_sha_record_free(struct mtk_cryp *cryp) -{ - int i; - - for (i = 0; i < mtk_rec_num; i++) { - tasklet_kill(&cryp->sha[i]->done_task); - tasklet_kill(&cryp->sha[i]->queue_task); - - kfree(cryp->sha[i]); - } -} - -static void mtk_sha_unregister_algs(void) -{ - int i; - - for (i = 0; i < array_size(algs_sha1_sha224_sha256); i++) - crypto_unregister_ahash(&algs_sha1_sha224_sha256[i]); - - for (i = 0; i < array_size(algs_sha384_sha512); i++) - crypto_unregister_ahash(&algs_sha384_sha512[i]); -} - -static int mtk_sha_register_algs(void) -{ - int err, i; - - for (i = 0; i < array_size(algs_sha1_sha224_sha256); i++) { - err = crypto_register_ahash(&algs_sha1_sha224_sha256[i]); - if (err) - goto err_sha_224_256_algs; - } - - for (i = 0; i < array_size(algs_sha384_sha512); i++) { - err = crypto_register_ahash(&algs_sha384_sha512[i]); - if (err) - goto err_sha_384_512_algs; - } - - return 0; - -err_sha_384_512_algs: - for (; i--; ) - crypto_unregister_ahash(&algs_sha384_sha512[i]); - i = array_size(algs_sha1_sha224_sha256); -err_sha_224_256_algs: - for (; i--; ) - crypto_unregister_ahash(&algs_sha1_sha224_sha256[i]); - - return err; -} - -int mtk_hash_alg_register(struct mtk_cryp *cryp) -{ - int err; - - init_list_head(&cryp->sha_list); - - /* 
initialize two hash records */ - err = mtk_sha_record_init(cryp); - if (err) - goto err_record; - - err = devm_request_irq(cryp->dev, cryp->irq[mtk_ring2], mtk_sha_irq, - 0, "mtk-sha", cryp->sha[0]); - if (err) { - dev_err(cryp->dev, "unable to request sha irq0. "); - goto err_res; - } - - err = devm_request_irq(cryp->dev, cryp->irq[mtk_ring3], mtk_sha_irq, - 0, "mtk-sha", cryp->sha[1]); - if (err) { - dev_err(cryp->dev, "unable to request sha irq1. "); - goto err_res; - } - - /* enable ring2 and ring3 interrupt for hash */ - mtk_sha_write(cryp, aic_enable_set(mtk_ring2), mtk_irq_rdr2); - mtk_sha_write(cryp, aic_enable_set(mtk_ring3), mtk_irq_rdr3); - - spin_lock(&mtk_sha.lock); - list_add_tail(&cryp->sha_list, &mtk_sha.dev_list); - spin_unlock(&mtk_sha.lock); - - err = mtk_sha_register_algs(); - if (err) - goto err_algs; - - return 0; - -err_algs: - spin_lock(&mtk_sha.lock); - list_del(&cryp->sha_list); - spin_unlock(&mtk_sha.lock); -err_res: - mtk_sha_record_free(cryp); -err_record: - - dev_err(cryp->dev, "mtk-sha initialization failed. "); - return err; -} - -void mtk_hash_alg_release(struct mtk_cryp *cryp) -{ - spin_lock(&mtk_sha.lock); - list_del(&cryp->sha_list); - spin_unlock(&mtk_sha.lock); - - mtk_sha_unregister_algs(); - mtk_sha_record_free(cryp); -}
|
Cryptography hardware acceleration
|
6a702fa5339597f2f2bb466043fbb20f3e55e0ad
|
vic wu, ryder lee <ryder.lee@mediatek.com>
|
drivers
|
crypto
|
mediatek
|
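The mtk_sha_setkey() routine in the diff above prepares HMAC inner/outer pads by zero-padding the key to the block size and XOR-ing with the standard ipad/opad constants (keys longer than the block size are first reduced with crypto_shash_tfm_digest()). A minimal user-space sketch of that pad derivation — hmac_prepare_pads() is a hypothetical helper for illustration, not part of the driver:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* HMAC pad constants from RFC 2104; these match the kernel's
 * HMAC_IPAD_VALUE / HMAC_OPAD_VALUE. */
#define HMAC_IPAD_VALUE 0x36
#define HMAC_OPAD_VALUE 0x5c

/* Derive the inner/outer pads for a key that already fits in one
 * block. Assumes keylen <= block_size (as in the driver, where a
 * longer key has already been hashed down to the digest size). */
static void hmac_prepare_pads(const uint8_t *key, size_t keylen,
                              size_t block_size,
                              uint8_t *ipad, uint8_t *opad)
{
    size_t i;

    /* Zero-pad the key to a full block, then copy to opad. */
    memcpy(ipad, key, keylen);
    memset(ipad + keylen, 0, block_size - keylen);
    memcpy(opad, ipad, block_size);

    /* XOR each byte with the per-direction constant. */
    for (i = 0; i < block_size; i++) {
        ipad[i] ^= HMAC_IPAD_VALUE;
        opad[i] ^= HMAC_OPAD_VALUE;
    }
}
```

The zero bytes of the padded key XOR to the bare constants, so the tail of ipad is all 0x36 and the tail of opad all 0x5c — which is what the driver relies on when it seeds the hash state from bctx->ipad.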
pci: brcmstb: support bcm4908 with external perst# signal controller
|
bcm4908 uses an external misc block to control the perst# signal. use it as a reset controller.
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
support bcm4908 with external perst# signal controller
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['brcmstb']
|
['kconfig', 'c']
| 2
| 33
| 1
|
--- diff --git a/drivers/pci/controller/kconfig b/drivers/pci/controller/kconfig --- a/drivers/pci/controller/kconfig +++ b/drivers/pci/controller/kconfig - depends on arch_brcmstb || arch_bcm2835 || compile_test + depends on arch_brcmstb || arch_bcm2835 || arch_bcm4908 || compile_test diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c --- a/drivers/pci/controller/pcie-brcmstb.c +++ b/drivers/pci/controller/pcie-brcmstb.c +#define brcm_pcie_hw_rev_3_20 0x0320 +static inline void brcm_pcie_perst_set_4908(struct brcm_pcie *pcie, u32 val); + bcm4908, +static const struct pcie_cfg_data bcm4908_cfg = { + .offsets = pcie_offsets, + .type = bcm4908, + .perst_set = brcm_pcie_perst_set_4908, + .bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_generic, +}; + + struct reset_control *perst_reset; +static inline void brcm_pcie_perst_set_4908(struct brcm_pcie *pcie, u32 val) +{ + if (warn_once(!pcie->perst_reset, "missing perst# reset controller ")) + return; + + if (val) + reset_control_assert(pcie->perst_reset); + else + reset_control_deassert(pcie->perst_reset); +} + + { .compatible = "brcm,bcm4908-pcie", .data = &bcm4908_cfg }, + pcie->perst_reset = devm_reset_control_get_optional_exclusive(&pdev->dev, "perst"); + if (is_err(pcie->perst_reset)) { + clk_disable_unprepare(pcie->clk); + return ptr_err(pcie->perst_reset); + } + if (pcie->type == bcm4908 && pcie->hw_rev >= brcm_pcie_hw_rev_3_20) { + dev_err(pcie->dev, "hardware revision with unsupported perst# setup "); + goto fail; + }
|
PCI
|
0cdfaceb9889b69d0230b82ae91c46ed0b33fc27
|
rafał miłecki, florian fainelli <f.fainelli@gmail.com>
|
drivers
|
pci
|
controller
|
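The brcmstb diff above wires the bcm4908 perst# signal through the kernel's reset-controller framework: brcm_pcie_perst_set_4908() asserts the reset when val is non-zero and deasserts it otherwise, and the handle is obtained with devm_reset_control_get_optional_exclusive() so it may legitimately be absent. A self-contained sketch of that control-flow, with struct reset_control and the assert/deassert calls stubbed out for user space (names mirror the kernel API but the bodies here are illustrative stand-ins):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for the kernel's opaque struct reset_control; in the
 * driver this handle comes from
 * devm_reset_control_get_optional_exclusive(). */
struct reset_control {
    int asserted;
};

static void reset_control_assert(struct reset_control *rc)   { rc->asserted = 1; }
static void reset_control_deassert(struct reset_control *rc) { rc->asserted = 0; }

/* Mirrors brcm_pcie_perst_set_4908(): a non-zero val drives perst#
 * asserted (endpoint held in reset), zero releases it. The driver
 * WARNs and bails out when the optional reset controller is missing;
 * here we just return. */
static void perst_set(struct reset_control *perst_reset, unsigned int val)
{
    if (!perst_reset)
        return;

    if (val)
        reset_control_assert(perst_reset);
    else
        reset_control_deassert(perst_reset);
}
```

Because the reset is *optional*, SoCs without the external misc block keep working: the get call returns NULL rather than an error, and perst_set() degrades to a no-op.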
pci: layerscape: add lx2160a rev2 ep mode support
|
the lx2160a rev2 uses the same pcie ip as the ls2088a, but the lx2160a rev2 pcie controller is integrated with a different stride between the pfs' register addresses.
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add lx2160a rev2 ep mode support
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['layerscape']
|
['c']
| 1
| 7
| 0
|
--- diff --git a/drivers/pci/controller/dwc/pci-layerscape-ep.c b/drivers/pci/controller/dwc/pci-layerscape-ep.c --- a/drivers/pci/controller/dwc/pci-layerscape-ep.c +++ b/drivers/pci/controller/dwc/pci-layerscape-ep.c +static const struct ls_pcie_ep_drvdata lx2_ep_drvdata = { + .func_offset = 0x8000, + .ops = &ls_pcie_ep_ops, + .dw_pcie_ops = &dw_ls_pcie_ep_ops, +}; + + { .compatible = "fsl,lx2160ar2-pcie-ep", .data = &lx2_ep_drvdata },
|
PCI
|
5bfb792f210ce6644bc2d72e047e0715ac4a1010
|
hou zhiqiang
|
drivers
|
pci
|
controller, dwc
|
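The layerscape diff above encodes the rev2 integration difference as a single per-SoC datum: ls_pcie_ep_drvdata.func_offset = 0x8000, the stride between consecutive physical functions' register windows. A small sketch of the resulting address computation — pf_reg_base() is a hypothetical helper, and the controller base address in the test is an arbitrary example value, not taken from the SoC memory map:

```c
#include <assert.h>
#include <stdint.h>

/* Stride between physical functions' register windows on
 * lx2160a rev2, per the driver's lx2_ep_drvdata.func_offset. */
#define LX2160A_REV2_FUNC_OFFSET 0x8000u

/* Register window base for physical function func_no: PF0 sits at
 * the controller base, each further PF one stride higher. */
static uint64_t pf_reg_base(uint64_t ctrl_base, unsigned int func_no,
                            uint32_t func_offset)
{
    return ctrl_base + (uint64_t)func_no * func_offset;
}
```

Keeping the stride in drvdata lets one endpoint driver serve both SoCs: the ls2088a entry simply carries a different (or zero) func_offset while sharing the same ops tables.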
pci: microchip: add microchip polarfire pcie controller driver
|
add support for the microchip polarfire pcie controller when configured in host (root complex) mode.
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add microchip polarfire pcie controller driver
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['microchip']
|
['kconfig', 'c', 'makefile']
| 3
| 1,149
| 0
|
--- diff --git a/drivers/pci/controller/kconfig b/drivers/pci/controller/kconfig --- a/drivers/pci/controller/kconfig +++ b/drivers/pci/controller/kconfig +config pcie_microchip_host + bool "microchip axi pcie host bridge support" + depends on pci_msi && of + select pci_msi_irq_domain + select generic_msi_irq_domain + select pci_host_common + help + say y here if you want kernel to support the microchip axi pcie + host bridge driver. + diff --git a/drivers/pci/controller/makefile b/drivers/pci/controller/makefile --- a/drivers/pci/controller/makefile +++ b/drivers/pci/controller/makefile +obj-$(config_pcie_microchip_host) += pcie-microchip-host.o diff --git a/drivers/pci/controller/pcie-microchip-host.c b/drivers/pci/controller/pcie-microchip-host.c --- /dev/null +++ b/drivers/pci/controller/pcie-microchip-host.c +// spdx-license-identifier: gpl-2.0 +/* + * microchip axi pcie bridge host controller driver + * + * copyright (c) 2018 - 2020 microchip corporation. all rights reserved. + * + * author: daire mcnamara <daire.mcnamara@microchip.com> + */ + +#include <linux/clk.h> +#include <linux/irqchip/chained_irq.h> +#include <linux/module.h> +#include <linux/msi.h> +#include <linux/of_address.h> +#include <linux/of_irq.h> +#include <linux/of_pci.h> +#include <linux/pci-ecam.h> +#include <linux/platform_device.h> + +#include "../pci.h" + +/* number of msi irqs */ +#define mc_num_msi_irqs 32 +#define mc_num_msi_irqs_coded 5 + +/* pcie bridge phy and controller phy offsets */ +#define mc_pcie1_bridge_addr 0x00008000u +#define mc_pcie1_ctrl_addr 0x0000a000u + +#define mc_pcie_bridge_addr (mc_pcie1_bridge_addr) +#define mc_pcie_ctrl_addr (mc_pcie1_ctrl_addr) + +/* pcie controller phy regs */ +#define sec_error_cnt 0x20 +#define ded_error_cnt 0x24 +#define sec_error_int 0x28 +#define sec_error_int_tx_ram_sec_err_int genmask(3, 0) +#define sec_error_int_rx_ram_sec_err_int genmask(7, 4) +#define sec_error_int_pcie2axi_ram_sec_err_int genmask(11, 8) +#define 
sec_error_int_axi2pcie_ram_sec_err_int genmask(15, 12) +#define num_sec_error_ints (4) +#define sec_error_int_mask 0x2c +#define ded_error_int 0x30 +#define ded_error_int_tx_ram_ded_err_int genmask(3, 0) +#define ded_error_int_rx_ram_ded_err_int genmask(7, 4) +#define ded_error_int_pcie2axi_ram_ded_err_int genmask(11, 8) +#define ded_error_int_axi2pcie_ram_ded_err_int genmask(15, 12) +#define num_ded_error_ints (4) +#define ded_error_int_mask 0x34 +#define ecc_control 0x38 +#define ecc_control_tx_ram_inj_error_0 bit(0) +#define ecc_control_tx_ram_inj_error_1 bit(1) +#define ecc_control_tx_ram_inj_error_2 bit(2) +#define ecc_control_tx_ram_inj_error_3 bit(3) +#define ecc_control_rx_ram_inj_error_0 bit(4) +#define ecc_control_rx_ram_inj_error_1 bit(5) +#define ecc_control_rx_ram_inj_error_2 bit(6) +#define ecc_control_rx_ram_inj_error_3 bit(7) +#define ecc_control_pcie2axi_ram_inj_error_0 bit(8) +#define ecc_control_pcie2axi_ram_inj_error_1 bit(9) +#define ecc_control_pcie2axi_ram_inj_error_2 bit(10) +#define ecc_control_pcie2axi_ram_inj_error_3 bit(11) +#define ecc_control_axi2pcie_ram_inj_error_0 bit(12) +#define ecc_control_axi2pcie_ram_inj_error_1 bit(13) +#define ecc_control_axi2pcie_ram_inj_error_2 bit(14) +#define ecc_control_axi2pcie_ram_inj_error_3 bit(15) +#define ecc_control_tx_ram_ecc_bypass bit(24) +#define ecc_control_rx_ram_ecc_bypass bit(25) +#define ecc_control_pcie2axi_ram_ecc_bypass bit(26) +#define ecc_control_axi2pcie_ram_ecc_bypass bit(27) +#define ltssm_state 0x5c +#define ltssm_l0_state 0x10 +#define pcie_event_int 0x14c +#define pcie_event_int_l2_exit_int bit(0) +#define pcie_event_int_hotrst_exit_int bit(1) +#define pcie_event_int_dlup_exit_int bit(2) +#define pcie_event_int_mask genmask(2, 0) +#define pcie_event_int_l2_exit_int_mask bit(16) +#define pcie_event_int_hotrst_exit_int_mask bit(17) +#define pcie_event_int_dlup_exit_int_mask bit(18) +#define pcie_event_int_enb_mask genmask(18, 16) +#define pcie_event_int_enb_shift 16 +#define 
num_pcie_events (3) + +/* pcie bridge phy regs */ +#define pcie_pci_ids_dw1 0x9c + +/* pcie config space msi capability structure */ +#define mc_msi_cap_ctrl_offset 0xe0u +#define mc_msi_max_q_avail (mc_num_msi_irqs_coded << 1) +#define mc_msi_q_size (mc_num_msi_irqs_coded << 4) + +#define imask_local 0x180 +#define dma_end_engine_0_mask 0x00000000u +#define dma_end_engine_0_shift 0 +#define dma_end_engine_1_mask 0x00000000u +#define dma_end_engine_1_shift 1 +#define dma_error_engine_0_mask 0x00000100u +#define dma_error_engine_0_shift 8 +#define dma_error_engine_1_mask 0x00000200u +#define dma_error_engine_1_shift 9 +#define a_atr_evt_post_err_mask 0x00010000u +#define a_atr_evt_post_err_shift 16 +#define a_atr_evt_fetch_err_mask 0x00020000u +#define a_atr_evt_fetch_err_shift 17 +#define a_atr_evt_discard_err_mask 0x00040000u +#define a_atr_evt_discard_err_shift 18 +#define a_atr_evt_doorbell_mask 0x00000000u +#define a_atr_evt_doorbell_shift 19 +#define p_atr_evt_post_err_mask 0x00100000u +#define p_atr_evt_post_err_shift 20 +#define p_atr_evt_fetch_err_mask 0x00200000u +#define p_atr_evt_fetch_err_shift 21 +#define p_atr_evt_discard_err_mask 0x00400000u +#define p_atr_evt_discard_err_shift 22 +#define p_atr_evt_doorbell_mask 0x00000000u +#define p_atr_evt_doorbell_shift 23 +#define pm_msi_int_inta_mask 0x01000000u +#define pm_msi_int_inta_shift 24 +#define pm_msi_int_intb_mask 0x02000000u +#define pm_msi_int_intb_shift 25 +#define pm_msi_int_intc_mask 0x04000000u +#define pm_msi_int_intc_shift 26 +#define pm_msi_int_intd_mask 0x08000000u +#define pm_msi_int_intd_shift 27 +#define pm_msi_int_intx_mask 0x0f000000u +#define pm_msi_int_intx_shift 24 +#define pm_msi_int_msi_mask 0x10000000u +#define pm_msi_int_msi_shift 28 +#define pm_msi_int_aer_evt_mask 0x20000000u +#define pm_msi_int_aer_evt_shift 29 +#define pm_msi_int_events_mask 0x40000000u +#define pm_msi_int_events_shift 30 +#define pm_msi_int_sys_err_mask 0x80000000u +#define pm_msi_int_sys_err_shift 31 
+#define num_local_events 15 +#define istatus_local 0x184 +#define imask_host 0x188 +#define istatus_host 0x18c +#define msi_addr 0x190 +#define istatus_msi 0x194 + +/* pcie master table init defines */ +#define atr0_pcie_win0_srcaddr_param 0x600u +#define atr0_pcie_atr_size 0x25 +#define atr0_pcie_atr_size_shift 1 +#define atr0_pcie_win0_src_addr 0x604u +#define atr0_pcie_win0_trsl_addr_lsb 0x608u +#define atr0_pcie_win0_trsl_addr_udw 0x60cu +#define atr0_pcie_win0_trsl_param 0x610u + +/* pcie axi slave table init defines */ +#define atr0_axi4_slv0_srcaddr_param 0x800u +#define atr_size_shift 1 +#define atr_impl_enable 1 +#define atr0_axi4_slv0_src_addr 0x804u +#define atr0_axi4_slv0_trsl_addr_lsb 0x808u +#define atr0_axi4_slv0_trsl_addr_udw 0x80cu +#define atr0_axi4_slv0_trsl_param 0x810u +#define pcie_tx_rx_interface 0x00000000u +#define pcie_config_interface 0x00000001u + +#define atr_entry_size 32 + +#define event_pcie_l2_exit 0 +#define event_pcie_hotrst_exit 1 +#define event_pcie_dlup_exit 2 +#define event_sec_tx_ram_sec_err 3 +#define event_sec_rx_ram_sec_err 4 +#define event_sec_axi2pcie_ram_sec_err 5 +#define event_sec_pcie2axi_ram_sec_err 6 +#define event_ded_tx_ram_ded_err 7 +#define event_ded_rx_ram_ded_err 8 +#define event_ded_axi2pcie_ram_ded_err 9 +#define event_ded_pcie2axi_ram_ded_err 10 +#define event_local_dma_end_engine_0 11 +#define event_local_dma_end_engine_1 12 +#define event_local_dma_error_engine_0 13 +#define event_local_dma_error_engine_1 14 +#define event_local_a_atr_evt_post_err 15 +#define event_local_a_atr_evt_fetch_err 16 +#define event_local_a_atr_evt_discard_err 17 +#define event_local_a_atr_evt_doorbell 18 +#define event_local_p_atr_evt_post_err 19 +#define event_local_p_atr_evt_fetch_err 20 +#define event_local_p_atr_evt_discard_err 21 +#define event_local_p_atr_evt_doorbell 22 +#define event_local_pm_msi_int_intx 23 +#define event_local_pm_msi_int_msi 24 +#define event_local_pm_msi_int_aer_evt 25 +#define 
event_local_pm_msi_int_events 26 +#define event_local_pm_msi_int_sys_err 27 +#define num_events 28 + +#define pcie_event_cause(x, s) \ + [event_pcie_ ## x] = { __stringify(x), s } + +#define sec_error_cause(x, s) \ + [event_sec_ ## x] = { __stringify(x), s } + +#define ded_error_cause(x, s) \ + [event_ded_ ## x] = { __stringify(x), s } + +#define local_event_cause(x, s) \ + [event_local_ ## x] = { __stringify(x), s } + +#define pcie_event(x) \ + .base = mc_pcie_ctrl_addr, \ + .offset = pcie_event_int, \ + .mask_offset = pcie_event_int, \ + .mask_high = 1, \ + .mask = pcie_event_int_ ## x ## _int, \ + .enb_mask = pcie_event_int_enb_mask + +#define sec_event(x) \ + .base = mc_pcie_ctrl_addr, \ + .offset = sec_error_int, \ + .mask_offset = sec_error_int_mask, \ + .mask = sec_error_int_ ## x ## _int, \ + .mask_high = 1, \ + .enb_mask = 0 + +#define ded_event(x) \ + .base = mc_pcie_ctrl_addr, \ + .offset = ded_error_int, \ + .mask_offset = ded_error_int_mask, \ + .mask_high = 1, \ + .mask = ded_error_int_ ## x ## _int, \ + .enb_mask = 0 + +#define local_event(x) \ + .base = mc_pcie_bridge_addr, \ + .offset = istatus_local, \ + .mask_offset = imask_local, \ + .mask_high = 0, \ + .mask = x ## _mask, \ + .enb_mask = 0 + +#define pcie_event_to_event_map(x) \ + { pcie_event_int_ ## x ## _int, event_pcie_ ## x } + +#define sec_error_to_event_map(x) \ + { sec_error_int_ ## x ## _int, event_sec_ ## x } + +#define ded_error_to_event_map(x) \ + { ded_error_int_ ## x ## _int, event_ded_ ## x } + +#define local_status_to_event_map(x) \ + { x ## _mask, event_local_ ## x } + +struct event_map { + u32 reg_mask; + u32 event_bit; +}; + +struct mc_msi { + struct mutex lock; /* protect used bitmap */ + struct irq_domain *msi_domain; + struct irq_domain *dev_domain; + u32 num_vectors; + u64 vector_phy; + declare_bitmap(used, mc_num_msi_irqs); +}; + +struct mc_port { + void __iomem *axi_base_addr; + struct device *dev; + struct irq_domain *intx_domain; + struct irq_domain *event_domain; + 
raw_spinlock_t lock; + struct mc_msi msi; +}; + +struct cause { + const char *sym; + const char *str; +}; + +static const struct cause event_cause[num_events] = { + pcie_event_cause(l2_exit, "l2 exit event"), + pcie_event_cause(hotrst_exit, "hot reset exit event"), + pcie_event_cause(dlup_exit, "dlup exit event"), + sec_error_cause(tx_ram_sec_err, "sec error in tx buffer"), + sec_error_cause(rx_ram_sec_err, "sec error in rx buffer"), + sec_error_cause(pcie2axi_ram_sec_err, "sec error in pcie2axi buffer"), + sec_error_cause(axi2pcie_ram_sec_err, "sec error in axi2pcie buffer"), + ded_error_cause(tx_ram_ded_err, "ded error in tx buffer"), + ded_error_cause(rx_ram_ded_err, "ded error in rx buffer"), + ded_error_cause(pcie2axi_ram_ded_err, "ded error in pcie2axi buffer"), + ded_error_cause(axi2pcie_ram_ded_err, "ded error in axi2pcie buffer"), + local_event_cause(dma_error_engine_0, "dma engine 0 error"), + local_event_cause(dma_error_engine_1, "dma engine 1 error"), + local_event_cause(a_atr_evt_post_err, "axi write request error"), + local_event_cause(a_atr_evt_fetch_err, "axi read request error"), + local_event_cause(a_atr_evt_discard_err, "axi read timeout"), + local_event_cause(p_atr_evt_post_err, "pcie write request error"), + local_event_cause(p_atr_evt_fetch_err, "pcie read request error"), + local_event_cause(p_atr_evt_discard_err, "pcie read timeout"), + local_event_cause(pm_msi_int_aer_evt, "aer event"), + local_event_cause(pm_msi_int_events, "pm/ltr/hotplug event"), + local_event_cause(pm_msi_int_sys_err, "system error"), +}; + +struct event_map pcie_event_to_event[] = { + pcie_event_to_event_map(l2_exit), + pcie_event_to_event_map(hotrst_exit), + pcie_event_to_event_map(dlup_exit), +}; + +struct event_map sec_error_to_event[] = { + sec_error_to_event_map(tx_ram_sec_err), + sec_error_to_event_map(rx_ram_sec_err), + sec_error_to_event_map(pcie2axi_ram_sec_err), + sec_error_to_event_map(axi2pcie_ram_sec_err), +}; + +struct event_map ded_error_to_event[] = { + 
ded_error_to_event_map(tx_ram_ded_err), + ded_error_to_event_map(rx_ram_ded_err), + ded_error_to_event_map(pcie2axi_ram_ded_err), + ded_error_to_event_map(axi2pcie_ram_ded_err), +}; + +struct event_map local_status_to_event[] = { + local_status_to_event_map(dma_end_engine_0), + local_status_to_event_map(dma_end_engine_1), + local_status_to_event_map(dma_error_engine_0), + local_status_to_event_map(dma_error_engine_1), + local_status_to_event_map(a_atr_evt_post_err), + local_status_to_event_map(a_atr_evt_fetch_err), + local_status_to_event_map(a_atr_evt_discard_err), + local_status_to_event_map(a_atr_evt_doorbell), + local_status_to_event_map(p_atr_evt_post_err), + local_status_to_event_map(p_atr_evt_fetch_err), + local_status_to_event_map(p_atr_evt_discard_err), + local_status_to_event_map(p_atr_evt_doorbell), + local_status_to_event_map(pm_msi_int_intx), + local_status_to_event_map(pm_msi_int_msi), + local_status_to_event_map(pm_msi_int_aer_evt), + local_status_to_event_map(pm_msi_int_events), + local_status_to_event_map(pm_msi_int_sys_err), +}; + +struct { + u32 base; + u32 offset; + u32 mask; + u32 shift; + u32 enb_mask; + u32 mask_high; + u32 mask_offset; +} event_descs[] = { + { pcie_event(l2_exit) }, + { pcie_event(hotrst_exit) }, + { pcie_event(dlup_exit) }, + { sec_event(tx_ram_sec_err) }, + { sec_event(rx_ram_sec_err) }, + { sec_event(pcie2axi_ram_sec_err) }, + { sec_event(axi2pcie_ram_sec_err) }, + { ded_event(tx_ram_ded_err) }, + { ded_event(rx_ram_ded_err) }, + { ded_event(pcie2axi_ram_ded_err) }, + { ded_event(axi2pcie_ram_ded_err) }, + { local_event(dma_end_engine_0) }, + { local_event(dma_end_engine_1) }, + { local_event(dma_error_engine_0) }, + { local_event(dma_error_engine_1) }, + { local_event(a_atr_evt_post_err) }, + { local_event(a_atr_evt_fetch_err) }, + { local_event(a_atr_evt_discard_err) }, + { local_event(a_atr_evt_doorbell) }, + { local_event(p_atr_evt_post_err) }, + { local_event(p_atr_evt_fetch_err) }, + { 
local_event(p_atr_evt_discard_err) }, + { local_event(p_atr_evt_doorbell) }, + { local_event(pm_msi_int_intx) }, + { local_event(pm_msi_int_msi) }, + { local_event(pm_msi_int_aer_evt) }, + { local_event(pm_msi_int_events) }, + { local_event(pm_msi_int_sys_err) }, +}; + +static char poss_clks[][5] = { "fic0", "fic1", "fic2", "fic3" }; + +static void mc_pcie_enable_msi(struct mc_port *port, void __iomem *base) +{ + struct mc_msi *msi = &port->msi; + u32 cap_offset = mc_msi_cap_ctrl_offset; + u16 msg_ctrl = readw_relaxed(base + cap_offset + pci_msi_flags); + + msg_ctrl |= pci_msi_flags_enable; + msg_ctrl &= ~pci_msi_flags_qmask; + msg_ctrl |= mc_msi_max_q_avail; + msg_ctrl &= ~pci_msi_flags_qsize; + msg_ctrl |= mc_msi_q_size; + msg_ctrl |= pci_msi_flags_64bit; + + writew_relaxed(msg_ctrl, base + cap_offset + pci_msi_flags); + + writel_relaxed(lower_32_bits(msi->vector_phy), + base + cap_offset + pci_msi_address_lo); + writel_relaxed(upper_32_bits(msi->vector_phy), + base + cap_offset + pci_msi_address_hi); +} + +static void mc_handle_msi(struct irq_desc *desc) +{ + struct mc_port *port = irq_desc_get_handler_data(desc); + struct device *dev = port->dev; + struct mc_msi *msi = &port->msi; + void __iomem *bridge_base_addr = + port->axi_base_addr + mc_pcie_bridge_addr; + unsigned long status; + u32 bit; + u32 virq; + + status = readl_relaxed(bridge_base_addr + istatus_local); + if (status & pm_msi_int_msi_mask) { + status = readl_relaxed(bridge_base_addr + istatus_msi); + for_each_set_bit(bit, &status, msi->num_vectors) { + virq = irq_find_mapping(msi->dev_domain, bit); + if (virq) + generic_handle_irq(virq); + else + dev_err_ratelimited(dev, "bad msi irq %d ", + bit); + } + } +} + +static void mc_msi_bottom_irq_ack(struct irq_data *data) +{ + struct mc_port *port = irq_data_get_irq_chip_data(data); + void __iomem *bridge_base_addr = + port->axi_base_addr + mc_pcie_bridge_addr; + u32 bitpos = data->hwirq; + unsigned long status; + + writel_relaxed(bit(bitpos), 
bridge_base_addr + istatus_msi); + status = readl_relaxed(bridge_base_addr + istatus_msi); + if (!status) + writel_relaxed(bit(pm_msi_int_msi_shift), + bridge_base_addr + istatus_local); +} + +static void mc_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) +{ + struct mc_port *port = irq_data_get_irq_chip_data(data); + phys_addr_t addr = port->msi.vector_phy; + + msg->address_lo = lower_32_bits(addr); + msg->address_hi = upper_32_bits(addr); + msg->data = data->hwirq; + + dev_dbg(port->dev, "msi#%x address_hi %#x address_lo %#x ", + (int)data->hwirq, msg->address_hi, msg->address_lo); +} + +static int mc_msi_set_affinity(struct irq_data *irq_data, + const struct cpumask *mask, bool force) +{ + return -einval; +} + +static struct irq_chip mc_msi_bottom_irq_chip = { + .name = "microchip msi", + .irq_ack = mc_msi_bottom_irq_ack, + .irq_compose_msi_msg = mc_compose_msi_msg, + .irq_set_affinity = mc_msi_set_affinity, +}; + +static int mc_irq_msi_domain_alloc(struct irq_domain *domain, unsigned int virq, + unsigned int nr_irqs, void *args) +{ + struct mc_port *port = domain->host_data; + struct mc_msi *msi = &port->msi; + void __iomem *bridge_base_addr = + port->axi_base_addr + mc_pcie_bridge_addr; + unsigned long bit; + u32 val; + + mutex_lock(&msi->lock); + bit = find_first_zero_bit(msi->used, msi->num_vectors); + if (bit >= msi->num_vectors) { + mutex_unlock(&msi->lock); + return -enospc; + } + + set_bit(bit, msi->used); + + irq_domain_set_info(domain, virq, bit, &mc_msi_bottom_irq_chip, + domain->host_data, handle_edge_irq, null, null); + + /* enable msi interrupts */ + val = readl_relaxed(bridge_base_addr + imask_local); + val |= pm_msi_int_msi_mask; + writel_relaxed(val, bridge_base_addr + imask_local); + + mutex_unlock(&msi->lock); + + return 0; +} + +static void mc_irq_msi_domain_free(struct irq_domain *domain, unsigned int virq, + unsigned int nr_irqs) +{ + struct irq_data *d = irq_domain_get_irq_data(domain, virq); + struct mc_port *port = 
irq_data_get_irq_chip_data(d); + struct mc_msi *msi = &port->msi; + + mutex_lock(&msi->lock); + + if (test_bit(d->hwirq, msi->used)) + __clear_bit(d->hwirq, msi->used); + else + dev_err(port->dev, "trying to free unused msi%lu ", d->hwirq); + + mutex_unlock(&msi->lock); +} + +static const struct irq_domain_ops msi_domain_ops = { + .alloc = mc_irq_msi_domain_alloc, + .free = mc_irq_msi_domain_free, +}; + +static struct irq_chip mc_msi_irq_chip = { + .name = "microchip pcie msi", + .irq_ack = irq_chip_ack_parent, + .irq_mask = pci_msi_mask_irq, + .irq_unmask = pci_msi_unmask_irq, +}; + +static struct msi_domain_info mc_msi_domain_info = { + .flags = (msi_flag_use_def_dom_ops | msi_flag_use_def_chip_ops | + msi_flag_pci_msix), + .chip = &mc_msi_irq_chip, +}; + +static int mc_allocate_msi_domains(struct mc_port *port) +{ + struct device *dev = port->dev; + struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node); + struct mc_msi *msi = &port->msi; + + mutex_init(&port->msi.lock); + + msi->dev_domain = irq_domain_add_linear(null, msi->num_vectors, + &msi_domain_ops, port); + if (!msi->dev_domain) { + dev_err(dev, "failed to create irq domain "); + return -enomem; + } + + msi->msi_domain = pci_msi_create_irq_domain(fwnode, &mc_msi_domain_info, + msi->dev_domain); + if (!msi->msi_domain) { + dev_err(dev, "failed to create msi domain "); + irq_domain_remove(msi->dev_domain); + return -enomem; + } + + return 0; +} + +static void mc_handle_intx(struct irq_desc *desc) +{ + struct mc_port *port = irq_desc_get_handler_data(desc); + struct device *dev = port->dev; + void __iomem *bridge_base_addr = + port->axi_base_addr + mc_pcie_bridge_addr; + unsigned long status; + u32 bit; + u32 virq; + + status = readl_relaxed(bridge_base_addr + istatus_local); + if (status & pm_msi_int_intx_mask) { + status &= pm_msi_int_intx_mask; + status >>= pm_msi_int_intx_shift; + for_each_set_bit(bit, &status, pci_num_intx) { + virq = irq_find_mapping(port->intx_domain, bit); + if (virq) + 
generic_handle_irq(virq); + else + dev_err_ratelimited(dev, "bad intx irq %d ", + bit); + } + } +} + +static void mc_ack_intx_irq(struct irq_data *data) +{ + struct mc_port *port = irq_data_get_irq_chip_data(data); + void __iomem *bridge_base_addr = + port->axi_base_addr + mc_pcie_bridge_addr; + u32 mask = bit(data->hwirq + pm_msi_int_intx_shift); + + writel_relaxed(mask, bridge_base_addr + istatus_local); +} + +static void mc_mask_intx_irq(struct irq_data *data) +{ + struct mc_port *port = irq_data_get_irq_chip_data(data); + void __iomem *bridge_base_addr = + port->axi_base_addr + mc_pcie_bridge_addr; + unsigned long flags; + u32 mask = bit(data->hwirq + pm_msi_int_intx_shift); + u32 val; + + raw_spin_lock_irqsave(&port->lock, flags); + val = readl_relaxed(bridge_base_addr + imask_local); + val &= ~mask; + writel_relaxed(val, bridge_base_addr + imask_local); + raw_spin_unlock_irqrestore(&port->lock, flags); +} + +static void mc_unmask_intx_irq(struct irq_data *data) +{ + struct mc_port *port = irq_data_get_irq_chip_data(data); + void __iomem *bridge_base_addr = + port->axi_base_addr + mc_pcie_bridge_addr; + unsigned long flags; + u32 mask = bit(data->hwirq + pm_msi_int_intx_shift); + u32 val; + + raw_spin_lock_irqsave(&port->lock, flags); + val = readl_relaxed(bridge_base_addr + imask_local); + val |= mask; + writel_relaxed(val, bridge_base_addr + imask_local); + raw_spin_unlock_irqrestore(&port->lock, flags); +} + +static struct irq_chip mc_intx_irq_chip = { + .name = "microchip pcie intx", + .irq_ack = mc_ack_intx_irq, + .irq_mask = mc_mask_intx_irq, + .irq_unmask = mc_unmask_intx_irq, +}; + +static int mc_pcie_intx_map(struct irq_domain *domain, unsigned int irq, + irq_hw_number_t hwirq) +{ + irq_set_chip_and_handler(irq, &mc_intx_irq_chip, handle_level_irq); + irq_set_chip_data(irq, domain->host_data); + + return 0; +} + +static const struct irq_domain_ops intx_domain_ops = { + .map = mc_pcie_intx_map, +}; + +static inline u32 reg_to_event(u32 reg, struct 
event_map field) +{ + return (reg & field.reg_mask) ? bit(field.event_bit) : 0; +} + +static u32 pcie_events(void __iomem *addr) +{ + u32 reg = readl_relaxed(addr); + u32 val = 0; + int i; + + for (i = 0; i < array_size(pcie_event_to_event); i++) + val |= reg_to_event(reg, pcie_event_to_event[i]); + + return val; +} + +static u32 sec_errors(void __iomem *addr) +{ + u32 reg = readl_relaxed(addr); + u32 val = 0; + int i; + + for (i = 0; i < array_size(sec_error_to_event); i++) + val |= reg_to_event(reg, sec_error_to_event[i]); + + return val; +} + +static u32 ded_errors(void __iomem *addr) +{ + u32 reg = readl_relaxed(addr); + u32 val = 0; + int i; + + for (i = 0; i < array_size(ded_error_to_event); i++) + val |= reg_to_event(reg, ded_error_to_event[i]); + + return val; +} + +static u32 local_events(void __iomem *addr) +{ + u32 reg = readl_relaxed(addr); + u32 val = 0; + int i; + + for (i = 0; i < array_size(local_status_to_event); i++) + val |= reg_to_event(reg, local_status_to_event[i]); + + return val; +} + +static u32 get_events(struct mc_port *port) +{ + void __iomem *bridge_base_addr = + port->axi_base_addr + mc_pcie_bridge_addr; + void __iomem *ctrl_base_addr = port->axi_base_addr + mc_pcie_ctrl_addr; + u32 events = 0; + + events |= pcie_events(ctrl_base_addr + pcie_event_int); + events |= sec_errors(ctrl_base_addr + sec_error_int); + events |= ded_errors(ctrl_base_addr + ded_error_int); + events |= local_events(bridge_base_addr + istatus_local); + + return events; +} + +static irqreturn_t mc_event_handler(int irq, void *dev_id) +{ + struct mc_port *port = dev_id; + struct device *dev = port->dev; + struct irq_data *data; + + data = irq_domain_get_irq_data(port->event_domain, irq); + + if (event_cause[data->hwirq].str) + dev_err_ratelimited(dev, "%s ", event_cause[data->hwirq].str); + else + dev_err_ratelimited(dev, "bad event irq %ld ", data->hwirq); + + return irq_handled; +} + +static void mc_handle_event(struct irq_desc *desc) +{ + struct mc_port *port = 
irq_desc_get_handler_data(desc); + unsigned long events; + u32 bit; + struct irq_chip *chip = irq_desc_get_chip(desc); + + chained_irq_enter(chip, desc); + + events = get_events(port); + + for_each_set_bit(bit, &events, num_events) + generic_handle_irq(irq_find_mapping(port->event_domain, bit)); + + chained_irq_exit(chip, desc); +} + +static void mc_ack_event_irq(struct irq_data *data) +{ + struct mc_port *port = irq_data_get_irq_chip_data(data); + u32 event = data->hwirq; + void __iomem *addr; + u32 mask; + + addr = port->axi_base_addr + event_descs[event].base + + event_descs[event].offset; + mask = event_descs[event].mask; + mask |= event_descs[event].enb_mask; + + writel_relaxed(mask, addr); +} + +static void mc_mask_event_irq(struct irq_data *data) +{ + struct mc_port *port = irq_data_get_irq_chip_data(data); + u32 event = data->hwirq; + void __iomem *addr; + u32 mask; + u32 val; + + addr = port->axi_base_addr + event_descs[event].base + + event_descs[event].mask_offset; + mask = event_descs[event].mask; + if (event_descs[event].enb_mask) { + mask <<= pcie_event_int_enb_shift; + mask &= pcie_event_int_enb_mask; + } + + if (!event_descs[event].mask_high) + mask = ~mask; + + raw_spin_lock(&port->lock); + val = readl_relaxed(addr); + if (event_descs[event].mask_high) + val |= mask; + else + val &= mask; + + writel_relaxed(val, addr); + raw_spin_unlock(&port->lock); +} + +static void mc_unmask_event_irq(struct irq_data *data) +{ + struct mc_port *port = irq_data_get_irq_chip_data(data); + u32 event = data->hwirq; + void __iomem *addr; + u32 mask; + u32 val; + + addr = port->axi_base_addr + event_descs[event].base + + event_descs[event].mask_offset; + mask = event_descs[event].mask; + + if (event_descs[event].enb_mask) + mask <<= pcie_event_int_enb_shift; + + if (event_descs[event].mask_high) + mask = ~mask; + + if (event_descs[event].enb_mask) + mask &= pcie_event_int_enb_mask; + + raw_spin_lock(&port->lock); + val = readl_relaxed(addr); + if 
(event_descs[event].mask_high) + val &= mask; + else + val |= mask; + writel_relaxed(val, addr); + raw_spin_unlock(&port->lock); +} + +static struct irq_chip mc_event_irq_chip = { + .name = "microchip pcie event", + .irq_ack = mc_ack_event_irq, + .irq_mask = mc_mask_event_irq, + .irq_unmask = mc_unmask_event_irq, +}; + +static int mc_pcie_event_map(struct irq_domain *domain, unsigned int irq, + irq_hw_number_t hwirq) +{ + irq_set_chip_and_handler(irq, &mc_event_irq_chip, handle_level_irq); + irq_set_chip_data(irq, domain->host_data); + + return 0; +} + +static const struct irq_domain_ops event_domain_ops = { + .map = mc_pcie_event_map, +}; + +static inline struct clk *mc_pcie_init_clk(struct device *dev, const char *id) +{ + struct clk *clk; + int ret; + + clk = devm_clk_get_optional(dev, id); + if (is_err(clk)) + return clk; + if (!clk) + return clk; + + ret = clk_prepare_enable(clk); + if (ret) + return err_ptr(ret); + + devm_add_action_or_reset(dev, (void (*) (void *))clk_disable_unprepare, + clk); + + return clk; +} + +static int mc_pcie_init_clks(struct device *dev) +{ + int i; + struct clk *fic; + + /* + * pcie may be clocked via fabric interface using between 1 and 4 + * clocks. 
scan dt for clocks and enable them if present + */ + for (i = 0; i < array_size(poss_clks); i++) { + fic = mc_pcie_init_clk(dev, poss_clks[i]); + if (is_err(fic)) + return ptr_err(fic); + } + + return 0; +} + +static int mc_pcie_init_irq_domains(struct mc_port *port) +{ + struct device *dev = port->dev; + struct device_node *node = dev->of_node; + struct device_node *pcie_intc_node; + + /* setup intx */ + pcie_intc_node = of_get_next_child(node, null); + if (!pcie_intc_node) { + dev_err(dev, "failed to find pcie intc node "); + return -einval; + } + + port->event_domain = irq_domain_add_linear(pcie_intc_node, num_events, + &event_domain_ops, port); + if (!port->event_domain) { + dev_err(dev, "failed to get event domain "); + return -enomem; + } + + irq_domain_update_bus_token(port->event_domain, domain_bus_nexus); + + port->intx_domain = irq_domain_add_linear(pcie_intc_node, pci_num_intx, + &intx_domain_ops, port); + if (!port->intx_domain) { + dev_err(dev, "failed to get an intx irq domain "); + return -enomem; + } + + irq_domain_update_bus_token(port->intx_domain, domain_bus_wired); + + of_node_put(pcie_intc_node); + raw_spin_lock_init(&port->lock); + + return mc_allocate_msi_domains(port); +} + +static void mc_pcie_setup_window(void __iomem *bridge_base_addr, u32 index, + phys_addr_t axi_addr, phys_addr_t pci_addr, + size_t size) +{ + u32 atr_sz = ilog2(size) - 1; + u32 val; + + if (index == 0) + val = pcie_config_interface; + else + val = pcie_tx_rx_interface; + + writel(val, bridge_base_addr + (index * atr_entry_size) + + atr0_axi4_slv0_trsl_param); + + val = lower_32_bits(axi_addr) | (atr_sz << atr_size_shift) | + atr_impl_enable; + writel(val, bridge_base_addr + (index * atr_entry_size) + + atr0_axi4_slv0_srcaddr_param); + + val = upper_32_bits(axi_addr); + writel(val, bridge_base_addr + (index * atr_entry_size) + + atr0_axi4_slv0_src_addr); + + val = lower_32_bits(pci_addr); + writel(val, bridge_base_addr + (index * atr_entry_size) + + 
atr0_axi4_slv0_trsl_addr_lsb); + + val = upper_32_bits(pci_addr); + writel(val, bridge_base_addr + (index * atr_entry_size) + + atr0_axi4_slv0_trsl_addr_udw); + + val = readl(bridge_base_addr + atr0_pcie_win0_srcaddr_param); + val |= (atr0_pcie_atr_size << atr0_pcie_atr_size_shift); + writel(val, bridge_base_addr + atr0_pcie_win0_srcaddr_param); + writel(0, bridge_base_addr + atr0_pcie_win0_src_addr); +} + +static int mc_pcie_setup_windows(struct platform_device *pdev, + struct mc_port *port) +{ + void __iomem *bridge_base_addr = + port->axi_base_addr + mc_pcie_bridge_addr; + struct pci_host_bridge *bridge = platform_get_drvdata(pdev); + struct resource_entry *entry; + u64 pci_addr; + u32 index = 1; + + resource_list_for_each_entry(entry, &bridge->windows) { + if (resource_type(entry->res) == ioresource_mem) { + pci_addr = entry->res->start - entry->offset; + mc_pcie_setup_window(bridge_base_addr, index, + entry->res->start, pci_addr, + resource_size(entry->res)); + index++; + } + } + + return 0; +} + +static int mc_platform_init(struct pci_config_window *cfg) +{ + struct device *dev = cfg->parent; + struct platform_device *pdev = to_platform_device(dev); + struct mc_port *port; + void __iomem *bridge_base_addr; + void __iomem *ctrl_base_addr; + int ret; + int irq; + int i, intx_irq, msi_irq, event_irq; + u32 val; + int err; + + port = devm_kzalloc(dev, sizeof(*port), gfp_kernel); + if (!port) + return -enomem; + port->dev = dev; + + ret = mc_pcie_init_clks(dev); + if (ret) { + dev_err(dev, "failed to get clock resources, error %d ", ret); + return -enodev; + } + + port->axi_base_addr = devm_platform_ioremap_resource(pdev, 1); + if (is_err(port->axi_base_addr)) + return ptr_err(port->axi_base_addr); + + bridge_base_addr = port->axi_base_addr + mc_pcie_bridge_addr; + ctrl_base_addr = port->axi_base_addr + mc_pcie_ctrl_addr; + + port->msi.vector_phy = msi_addr; + port->msi.num_vectors = mc_num_msi_irqs; + ret = mc_pcie_init_irq_domains(port); + if (ret) { + 
dev_err(dev, "failed creating irq domains "); + return ret; + } + + irq = platform_get_irq(pdev, 0); + if (irq < 0) { + dev_err(dev, "unable to request irq%d ", irq); + return -enodev; + } + + for (i = 0; i < num_events; i++) { + event_irq = irq_create_mapping(port->event_domain, i); + if (!event_irq) { + dev_err(dev, "failed to map hwirq %d ", i); + return -enxio; + } + + err = devm_request_irq(dev, event_irq, mc_event_handler, + 0, event_cause[i].sym, port); + if (err) { + dev_err(dev, "failed to request irq %d ", event_irq); + return err; + } + } + + intx_irq = irq_create_mapping(port->event_domain, + event_local_pm_msi_int_intx); + if (!intx_irq) { + dev_err(dev, "failed to map intx interrupt "); + return -enxio; + } + + /* plug the intx chained handler */ + irq_set_chained_handler_and_data(intx_irq, mc_handle_intx, port); + + msi_irq = irq_create_mapping(port->event_domain, + event_local_pm_msi_int_msi); + if (!msi_irq) + return -enxio; + + /* plug the msi chained handler */ + irq_set_chained_handler_and_data(msi_irq, mc_handle_msi, port); + + /* plug the main event chained handler */ + irq_set_chained_handler_and_data(irq, mc_handle_event, port); + + /* hardware doesn't setup msi by default */ + mc_pcie_enable_msi(port, cfg->win); + + val = readl_relaxed(bridge_base_addr + imask_local); + val |= pm_msi_int_intx_mask; + writel_relaxed(val, bridge_base_addr + imask_local); + + writel_relaxed(val, ctrl_base_addr + ecc_control); + + val = pcie_event_int_l2_exit_int | + pcie_event_int_hotrst_exit_int | + pcie_event_int_dlup_exit_int; + writel_relaxed(val, ctrl_base_addr + pcie_event_int); + + val = sec_error_int_tx_ram_sec_err_int | + sec_error_int_rx_ram_sec_err_int | + sec_error_int_pcie2axi_ram_sec_err_int | + sec_error_int_axi2pcie_ram_sec_err_int; + writel_relaxed(val, ctrl_base_addr + sec_error_int); + writel_relaxed(0, ctrl_base_addr + sec_error_int_mask); + writel_relaxed(0, ctrl_base_addr + sec_error_cnt); + + val = ded_error_int_tx_ram_ded_err_int | + 
ded_error_int_rx_ram_ded_err_int | + ded_error_int_pcie2axi_ram_ded_err_int | + ded_error_int_axi2pcie_ram_ded_err_int; + writel_relaxed(val, ctrl_base_addr + ded_error_int); + writel_relaxed(0, ctrl_base_addr + ded_error_int_mask); + writel_relaxed(0, ctrl_base_addr + ded_error_cnt); + + writel_relaxed(0, bridge_base_addr + imask_host); + writel_relaxed(genmask(31, 0), bridge_base_addr + istatus_host); + + /* configure address translation table 0 for pcie config space */ + mc_pcie_setup_window(bridge_base_addr, 0, cfg->res.start & 0xffffffff, + cfg->res.start, resource_size(&cfg->res)); + + return mc_pcie_setup_windows(pdev, port); +} + +static const struct pci_ecam_ops mc_ecam_ops = { + .init = mc_platform_init, + .pci_ops = { + .map_bus = pci_ecam_map_bus, + .read = pci_generic_config_read, + .write = pci_generic_config_write, + } +}; + +static const struct of_device_id mc_pcie_of_match[] = { + { + .compatible = "microchip,pcie-host-1.0", + .data = &mc_ecam_ops, + }, + {}, +}; + +module_device_table(of, mc_pcie_of_match) + +static struct platform_driver mc_pcie_driver = { + .probe = pci_host_common_probe, + .driver = { + .name = "microchip-pcie", + .of_match_table = mc_pcie_of_match, + .suppress_bind_attrs = true, + }, +}; + +builtin_platform_driver(mc_pcie_driver); +module_license("gpl"); +module_description("microchip pcie host controller driver"); +module_author("daire mcnamara <daire.mcnamara@microchip.com>");
|
PCI
|
6f15a9c9f94133bee0d861a4bf25e10aaa95219d
|
daire mcnamara, rob herring <robh@kernel.org>
|
drivers
|
pci
|
controller
|
pci: remove tango host controller driver
|
the tango platform is getting removed, so the driver is no longer needed.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
remove tango host controller driver
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['kconfig', 'c', 'makefile']
| 3
| 0
| 356
|
--- diff --git a/drivers/pci/controller/kconfig b/drivers/pci/controller/kconfig --- a/drivers/pci/controller/kconfig +++ b/drivers/pci/controller/kconfig -config pcie_tango_smp8759 - bool "tango smp8759 pcie controller (dangerous)" - depends on arch_tango && pci_msi && of - depends on broken - select pci_host_common - help - say y here to enable pcie controller support for sigma designs - tango smp8759-based systems. - - note: the smp8759 controller multiplexes pci config and mmio - accesses, and linux doesn't provide a way to serialize them. - this can lead to data corruption if drivers perform concurrent - config and mmio accesses. - diff --git a/drivers/pci/controller/makefile b/drivers/pci/controller/makefile --- a/drivers/pci/controller/makefile +++ b/drivers/pci/controller/makefile -obj-$(config_pcie_tango_smp8759) += pcie-tango.o diff --git a/drivers/pci/controller/pcie-tango.c b/drivers/pci/controller/pcie-tango.c --- a/drivers/pci/controller/pcie-tango.c +++ /dev/null -// spdx-license-identifier: gpl-2.0 -#include <linux/irqchip/chained_irq.h> -#include <linux/irqdomain.h> -#include <linux/pci-ecam.h> -#include <linux/delay.h> -#include <linux/msi.h> -#include <linux/of_address.h> - -#define msi_max 256 - -#define smp8759_mux 0x48 -#define smp8759_test_out 0x74 -#define smp8759_doorbell 0x7c -#define smp8759_status 0x80 -#define smp8759_enable 0xa0 - -struct tango_pcie { - declare_bitmap(used_msi, msi_max); - u64 msi_doorbell; - spinlock_t used_msi_lock; - void __iomem *base; - struct irq_domain *dom; -}; - -static void tango_msi_isr(struct irq_desc *desc) -{ - struct irq_chip *chip = irq_desc_get_chip(desc); - struct tango_pcie *pcie = irq_desc_get_handler_data(desc); - unsigned long status, base, virq, idx, pos = 0; - - chained_irq_enter(chip, desc); - spin_lock(&pcie->used_msi_lock); - - while ((pos = find_next_bit(pcie->used_msi, msi_max, pos)) < msi_max) { - base = round_down(pos, 32); - status = readl_relaxed(pcie->base + smp8759_status + base / 8); 
- for_each_set_bit(idx, &status, 32) { - virq = irq_find_mapping(pcie->dom, base + idx); - generic_handle_irq(virq); - } - pos = base + 32; - } - - spin_unlock(&pcie->used_msi_lock); - chained_irq_exit(chip, desc); -} - -static void tango_ack(struct irq_data *d) -{ - struct tango_pcie *pcie = d->chip_data; - u32 offset = (d->hwirq / 32) * 4; - u32 bit = bit(d->hwirq % 32); - - writel_relaxed(bit, pcie->base + smp8759_status + offset); -} - -static void update_msi_enable(struct irq_data *d, bool unmask) -{ - unsigned long flags; - struct tango_pcie *pcie = d->chip_data; - u32 offset = (d->hwirq / 32) * 4; - u32 bit = bit(d->hwirq % 32); - u32 val; - - spin_lock_irqsave(&pcie->used_msi_lock, flags); - val = readl_relaxed(pcie->base + smp8759_enable + offset); - val = unmask ? val | bit : val & ~bit; - writel_relaxed(val, pcie->base + smp8759_enable + offset); - spin_unlock_irqrestore(&pcie->used_msi_lock, flags); -} - -static void tango_mask(struct irq_data *d) -{ - update_msi_enable(d, false); -} - -static void tango_unmask(struct irq_data *d) -{ - update_msi_enable(d, true); -} - -static int tango_set_affinity(struct irq_data *d, const struct cpumask *mask, - bool force) -{ - return -einval; -} - -static void tango_compose_msi_msg(struct irq_data *d, struct msi_msg *msg) -{ - struct tango_pcie *pcie = d->chip_data; - msg->address_lo = lower_32_bits(pcie->msi_doorbell); - msg->address_hi = upper_32_bits(pcie->msi_doorbell); - msg->data = d->hwirq; -} - -static struct irq_chip tango_chip = { - .irq_ack = tango_ack, - .irq_mask = tango_mask, - .irq_unmask = tango_unmask, - .irq_set_affinity = tango_set_affinity, - .irq_compose_msi_msg = tango_compose_msi_msg, -}; - -static void msi_ack(struct irq_data *d) -{ - irq_chip_ack_parent(d); -} - -static void msi_mask(struct irq_data *d) -{ - pci_msi_mask_irq(d); - irq_chip_mask_parent(d); -} - -static void msi_unmask(struct irq_data *d) -{ - pci_msi_unmask_irq(d); - irq_chip_unmask_parent(d); -} - -static struct irq_chip 
msi_chip = { - .name = "msi", - .irq_ack = msi_ack, - .irq_mask = msi_mask, - .irq_unmask = msi_unmask, -}; - -static struct msi_domain_info msi_dom_info = { - .flags = msi_flag_pci_msix - | msi_flag_use_def_dom_ops - | msi_flag_use_def_chip_ops, - .chip = &msi_chip, -}; - -static int tango_irq_domain_alloc(struct irq_domain *dom, unsigned int virq, - unsigned int nr_irqs, void *args) -{ - struct tango_pcie *pcie = dom->host_data; - unsigned long flags; - int pos; - - spin_lock_irqsave(&pcie->used_msi_lock, flags); - pos = find_first_zero_bit(pcie->used_msi, msi_max); - if (pos >= msi_max) { - spin_unlock_irqrestore(&pcie->used_msi_lock, flags); - return -enospc; - } - __set_bit(pos, pcie->used_msi); - spin_unlock_irqrestore(&pcie->used_msi_lock, flags); - irq_domain_set_info(dom, virq, pos, &tango_chip, - pcie, handle_edge_irq, null, null); - - return 0; -} - -static void tango_irq_domain_free(struct irq_domain *dom, unsigned int virq, - unsigned int nr_irqs) -{ - unsigned long flags; - struct irq_data *d = irq_domain_get_irq_data(dom, virq); - struct tango_pcie *pcie = d->chip_data; - - spin_lock_irqsave(&pcie->used_msi_lock, flags); - __clear_bit(d->hwirq, pcie->used_msi); - spin_unlock_irqrestore(&pcie->used_msi_lock, flags); -} - -static const struct irq_domain_ops dom_ops = { - .alloc = tango_irq_domain_alloc, - .free = tango_irq_domain_free, -}; - -static int smp8759_config_read(struct pci_bus *bus, unsigned int devfn, - int where, int size, u32 *val) -{ - struct pci_config_window *cfg = bus->sysdata; - struct tango_pcie *pcie = dev_get_drvdata(cfg->parent); - int ret; - - /* reads in configuration space outside devfn 0 return garbage */ - if (devfn != 0) - return pcibios_func_not_supported; - - /* - * pci config and mmio accesses are muxed. linux doesn't have a - * mutual exclusion mechanism for config vs. mmio accesses, so - * concurrent accesses may cause corruption. 
- */ - writel_relaxed(1, pcie->base + smp8759_mux); - ret = pci_generic_config_read(bus, devfn, where, size, val); - writel_relaxed(0, pcie->base + smp8759_mux); - - return ret; -} - -static int smp8759_config_write(struct pci_bus *bus, unsigned int devfn, - int where, int size, u32 val) -{ - struct pci_config_window *cfg = bus->sysdata; - struct tango_pcie *pcie = dev_get_drvdata(cfg->parent); - int ret; - - writel_relaxed(1, pcie->base + smp8759_mux); - ret = pci_generic_config_write(bus, devfn, where, size, val); - writel_relaxed(0, pcie->base + smp8759_mux); - - return ret; -} - -static const struct pci_ecam_ops smp8759_ecam_ops = { - .pci_ops = { - .map_bus = pci_ecam_map_bus, - .read = smp8759_config_read, - .write = smp8759_config_write, - } -}; - -static int tango_pcie_link_up(struct tango_pcie *pcie) -{ - void __iomem *test_out = pcie->base + smp8759_test_out; - int i; - - writel_relaxed(16, test_out); - for (i = 0; i < 10; ++i) { - u32 ltssm_state = readl_relaxed(test_out) >> 8; - if ((ltssm_state & 0x1f) == 0xf) /* l0 */ - return 1; - usleep_range(3000, 4000); - } - - return 0; -} - -static int tango_pcie_probe(struct platform_device *pdev) -{ - struct device *dev = &pdev->dev; - struct tango_pcie *pcie; - struct resource *res; - struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node); - struct irq_domain *msi_dom, *irq_dom; - struct of_pci_range_parser parser; - struct of_pci_range range; - int virq, offset; - - dev_warn(dev, "simultaneous pci config and mmio accesses may cause data corruption "); - add_taint(taint_crap, lockdep_still_ok); - - pcie = devm_kzalloc(dev, sizeof(*pcie), gfp_kernel); - if (!pcie) - return -enomem; - - res = platform_get_resource(pdev, ioresource_mem, 1); - pcie->base = devm_ioremap_resource(dev, res); - if (is_err(pcie->base)) - return ptr_err(pcie->base); - - platform_set_drvdata(pdev, pcie); - - if (!tango_pcie_link_up(pcie)) - return -enodev; - - if (of_pci_dma_range_parser_init(&parser, dev->of_node) < 0) - return 
-enoent; - - if (of_pci_range_parser_one(&parser, &range) == null) - return -enoent; - - range.pci_addr += range.size; - pcie->msi_doorbell = range.pci_addr + res->start + smp8759_doorbell; - - for (offset = 0; offset < msi_max / 8; offset += 4) - writel_relaxed(0, pcie->base + smp8759_enable + offset); - - virq = platform_get_irq(pdev, 1); - if (virq < 0) - return virq; - - irq_dom = irq_domain_create_linear(fwnode, msi_max, &dom_ops, pcie); - if (!irq_dom) { - dev_err(dev, "failed to create irq domain "); - return -enomem; - } - - msi_dom = pci_msi_create_irq_domain(fwnode, &msi_dom_info, irq_dom); - if (!msi_dom) { - dev_err(dev, "failed to create msi domain "); - irq_domain_remove(irq_dom); - return -enomem; - } - - pcie->dom = irq_dom; - spin_lock_init(&pcie->used_msi_lock); - irq_set_chained_handler_and_data(virq, tango_msi_isr, pcie); - - return pci_host_common_probe(pdev); -} - -static const struct of_device_id tango_pcie_ids[] = { - { - .compatible = "sigma,smp8759-pcie", - .data = &smp8759_ecam_ops, - }, - { }, -}; - -static struct platform_driver tango_pcie_driver = { - .probe = tango_pcie_probe, - .driver = { - .name = kbuild_modname, - .of_match_table = tango_pcie_ids, - .suppress_bind_attrs = true, - }, -}; -builtin_platform_driver(tango_pcie_driver); - -/* - * the root complex advertises the wrong device class. - * header type 1 is for pci-to-pci bridges. - */ -static void tango_fixup_class(struct pci_dev *dev) -{ - dev->class = pci_class_bridge_pci << 8; -} -declare_pci_fixup_early(pci_vendor_id_sigma, 0x0024, tango_fixup_class); -declare_pci_fixup_early(pci_vendor_id_sigma, 0x0028, tango_fixup_class); - -/* - * the root complex exposes a "fake" bar, which is used to filter - * bus-to-system accesses. only accesses within the range defined by this - * bar are forwarded to the host, others are ignored. - * - * by default, the dma framework expects an identity mapping, and dram0 is - * mapped at 0x80000000. 
- */ -static void tango_fixup_bar(struct pci_dev *dev) -{ - dev->non_compliant_bars = true; - pci_write_config_dword(dev, pci_base_address_0, 0x80000000); -} -declare_pci_fixup_early(pci_vendor_id_sigma, 0x0024, tango_fixup_bar); -declare_pci_fixup_early(pci_vendor_id_sigma, 0x0028, tango_fixup_bar);
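the removed smp8759_config_read()/smp8759_config_write() pair above illustrates the hazard the kconfig help text warns about: a single mux register routes a shared window to either config space or mmio, and linux provides no way to serialize the two kinds of access. below is a minimal user-space sketch of that pattern (all names and register values are illustrative, not the kernel code), written with the lock that safe operation would have required:

```c
#include <pthread.h>
#include <stdint.h>

/* hypothetical user-space model of the smp8759 mux: config and mmio
 * accesses share one window, selected by a mux "register". the removed
 * driver flipped the mux around each config access with nothing to stop
 * a concurrent mmio access from seeing the wrong window. */

static int mux;					/* 1 = config space, 0 = mmio */
static uint32_t config_val = 0xdeadbeef;	/* pretend config register */
static uint32_t mmio_val = 0x12345678;		/* pretend mmio register */
static pthread_mutex_t window_lock = PTHREAD_MUTEX_INITIALIZER;

/* whatever the shared window currently points at */
static uint32_t window_read(void)
{
	return mux ? config_val : mmio_val;
}

/* serialized config read: the pattern safe operation would have needed */
uint32_t config_read(void)
{
	uint32_t val;

	pthread_mutex_lock(&window_lock);
	mux = 1;		/* route the window to config space */
	val = window_read();
	mux = 0;		/* route it back to mmio */
	pthread_mutex_unlock(&window_lock);
	return val;
}
```

without window_lock, a concurrent mmio reader could observe the window while mux is still set to config space, which is exactly the data-corruption scenario the driver's dev_warn() and taint flag advertised.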
|
PCI
|
de9427ca87cfa959abcd8bab7e38343b51219ffa
|
arnd bergmann mans rullgard mans mansr com
|
drivers
|
pci
|
controller
|
documentation: pci: add specification for the pci ntb function device
|
add specification for the pci ntb function device. the endpoint function driver and the host pci driver should be created based on this specification.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for a thermal power management framework to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
implement ntb controller using multiple pci ep
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['rst']
| 2
| 349
| 0
|
--- diff --git a/documentation/pci/endpoint/index.rst b/documentation/pci/endpoint/index.rst --- a/documentation/pci/endpoint/index.rst +++ b/documentation/pci/endpoint/index.rst + pci-ntb-function diff --git a/documentation/pci/endpoint/pci-ntb-function.rst b/documentation/pci/endpoint/pci-ntb-function.rst --- /dev/null +++ b/documentation/pci/endpoint/pci-ntb-function.rst +.. spdx-license-identifier: gpl-2.0 + +================= +pci ntb function +================= + +:author: kishon vijay abraham i <kishon@ti.com> + +pci non-transparent bridges (ntb) allow two host systems to communicate +with each other by exposing each host as a device to the other host. +ntbs typically support the ability to generate interrupts on the remote +machine, expose memory ranges as bars, and perform dma. they also support +scratchpads, which are areas of memory within the ntb that are accessible +from both machines. + +pci ntb function allows two different systems (or hosts) to communicate +with each other by configuring the endpoint instances in such a way that +transactions from one system are routed to the other system. + +in the below diagram, pci ntb function configures the soc with multiple +pci endpoint (ep) instances in such a way that transactions from one ep +controller are routed to the other ep controller. once pci ntb function +configures the soc with multiple ep instances, host1 and host2 can +communicate with each other using soc as a bridge. + +.. 
code-block:: text + + +-------------+ +-------------+ + | | | | + | host1 | | host2 | + | | | | + +------^------+ +------^------+ + | | + | | + +---------|-------------------------------------------------|---------+ + | +------v------+ +------v------+ | + | | | | | | + | | ep | | ep | | + | | controller1 | | controller2 | | + | | <-----------------------------------> | | + | | | | | | + | | | | | | + | | | soc with multiple ep instances | | | + | | | (configured using ntb function) | | | + | +-------------+ +-------------+ | + +---------------------------------------------------------------------+ + +constructs used for implementing ntb +==================================== + + 1) config region + 2) self scratchpad registers + 3) peer scratchpad registers + 4) doorbell (db) registers + 5) memory window (mw) + + +config region: +-------------- + +config region is a construct that is specific to ntb implemented using ntb +endpoint function driver. the host and endpoint side ntb function driver will +exchange information with each other using this region. config region has +control/status registers for configuring the endpoint controller. host can +write into this region for configuring the outbound address translation unit +(atu) and to indicate the link status. endpoint can indicate the status of +commands issued by host in this region. endpoint can also indicate the +scratchpad offset and number of memory windows to the host using this region. + +the format of config region is given below. all the fields here are 32 bits. + +.. 
code-block:: text + + +------------------------+ + | command | + +------------------------+ + | argument | + +------------------------+ + | status | + +------------------------+ + | topology | + +------------------------+ + | address (lower 32) | + +------------------------+ + | address (upper 32) | + +------------------------+ + | size | + +------------------------+ + | no of memory window | + +------------------------+ + | memory window1 offset | + +------------------------+ + | spad offset | + +------------------------+ + | spad count | + +------------------------+ + | db entry size | + +------------------------+ + | db data | + +------------------------+ + | : | + +------------------------+ + | : | + +------------------------+ + | db data | + +------------------------+ + + + command: + + ntb function supports three commands: + + cmd_configure_doorbell (0x1): command to configure doorbell. before + invoking this command, the host should allocate and initialize + msi/msi-x vectors (i.e., initialize the msi/msi-x capability in the + endpoint). the endpoint on receiving this command will configure + the outbound atu such that transactions to doorbell bar will be routed + to the msi/msi-x address programmed by the host. the argument + register should be populated with number of dbs to configure (in the + lower 16 bits) and if msi or msi-x should be configured (bit 16). + + cmd_configure_mw (0x2): command to configure memory window (mw). the + host invokes this command after allocating a buffer that can be + accessed by remote host. the allocated address should be programmed + in the address register (64 bit), the size should be programmed in + the size register and the memory window index should be programmed + in the argument register. the endpoint on receiving this command + will configure the outbound atu such that transactions to mw bar + are routed to the address provided by the host. 
+ + cmd_link_up (0x3): command to indicate an ntb application is + bound to the ep device on the host side. once the endpoint + receives this command from both the hosts, the endpoint will + raise a link_up event to both the hosts to indicate the host + ntb applications can start communicating with each other. + + argument: + + the value of this register is based on the commands issued in + command register. see command section for more information. + + topology: + + set to ntb_topo_b2b_usd for primary interface + set to ntb_topo_b2b_dsd for secondary interface + + address/size: + + address and size to be used while configuring the memory window. + see "cmd_configure_mw" for more info. + + memory window1 offset: + + memory window 1 and doorbell registers are packed together in the + same bar. the initial portion of the region will have doorbell + registers and the latter portion of the region is for memory window 1. + this register will specify the offset of the memory window 1. + + no of memory window: + + specifies the number of memory windows supported by the ntb device. + + spad offset: + + self scratchpad region and config region are packed together in the + same bar. the initial portion of the region will have config region + and the latter portion of the region is for self scratchpad. this + register will specify the offset of the self scratchpad registers. + + spad count: + + specifies the number of scratchpad registers supported by the ntb + device. + + db entry size: + + used to determine the offset within the db bar that should be written + in order to raise doorbell. epf ntb can use either msi or msi-x to + ring doorbell (msi-x support will be added later). msi uses same + address for all the interrupts and msi-x can provide different + addresses for different interrupts. the msi/msi-x address is provided + by the host and the address it gives is based on the msi/msi-x + implementation supported by the host. 
for instance, arm platform + using gic its will have the same msi-x address for all the interrupts. + in order to support all the combinations and use the same mechanism + for both msi and msi-x, epf ntb allocates a separate region in the + outbound address space for each of the interrupts. this region will + be mapped to the msi/msi-x address provided by the host. if a host + provides the same address for all the interrupts, all the regions + will be translated to the same address. if a host provides different + addresses, the regions will be translated to different addresses. this + will ensure there is no difference while raising the doorbell. + + db data: + + epf ntb supports 32 interrupts, so there are 32 db data registers. + this holds the msi/msi-x data that has to be written to msi address + for raising doorbell interrupt. this will be populated by epf ntb + while invoking cmd_configure_doorbell. + +scratchpad registers: +--------------------- + + each host has its own register space allocated in the memory of ntb endpoint + controller. they are both readable and writable from both sides of the bridge. + they are used by applications built over ntb and can be used to pass control + and status information between both sides of a device. + + scratchpad registers has 2 parts + 1) self scratchpad: host's own register space + 2) peer scratchpad: remote host's register space. + +doorbell registers: +------------------- + + doorbell registers are used by the hosts to interrupt each other. + +memory window: +-------------- + + actual transfer of data between the two hosts will happen using the + memory window. + +modeling constructs: +==================== + +there are 5 or more distinct regions (config, self scratchpad, peer +scratchpad, doorbell, one or more memory windows) to be modeled to achieve +ntb functionality. at least one memory window is required while more than +one is permitted. 
all these regions should be mapped to bars for hosts to +access these regions. + +if one 32-bit bar is allocated for each of these regions, the scheme would +look like this: + +====== =============== +bar no constructs used +====== =============== +bar0 config region +bar1 self scratchpad +bar2 peer scratchpad +bar3 doorbell +bar4 memory window 1 +bar5 memory window 2 +====== =============== + +however if we allocate a separate bar for each of the regions, there would not +be enough bars for all the regions in a platform that supports only 64-bit +bars. + +in order to be supported by most of the platforms, the regions should be +packed and mapped to bars in a way that provides ntb functionality and +also makes sure the host doesn't access any region that it is not supposed +to. + +the following scheme is used in epf ntb function: + +====== =============================== +bar no constructs used +====== =============================== +bar0 config region + self scratchpad +bar1 peer scratchpad +bar2 doorbell + memory window 1 +bar3 memory window 2 +bar4 memory window 3 +bar5 memory window 4 +====== =============================== + +with this scheme, for the basic ntb functionality 3 bars should be sufficient. + +modeling config/scratchpad region: +---------------------------------- + +.. 
code-block:: text + + +-----------------+------->+------------------+ +-----------------+ + | bar0 | | config region | | bar0 | + +-----------------+----+ +------------------+<-------+-----------------+ + | bar1 | | |scratchpad region | | bar1 | + +-----------------+ +-->+------------------+<-------+-----------------+ + | bar2 | local memory | bar2 | + +-----------------+ +-----------------+ + | bar3 | | bar3 | + +-----------------+ +-----------------+ + | bar4 | | bar4 | + +-----------------+ +-----------------+ + | bar5 | | bar5 | + +-----------------+ +-----------------+ + ep controller 1 ep controller 2 + +above diagram shows config region + scratchpad region for host1 (connected to +ep controller 1) allocated in local memory. the host1 can access the config +region and scratchpad region (self scratchpad) using bar0 of ep controller 1. +the peer host (host2 connected to ep controller 2) can also access this +scratchpad region (peer scratchpad) using bar1 of ep controller 2. this +diagram shows the case where config region and scratchpad regions are allocated +for host1, however the same is applicable for host2. + +modeling doorbell/memory window 1: +---------------------------------- + +.. 
code-block:: text + + +-----------------+ +----->+----------------+-----------+-----------------+ + | bar0 | | | doorbell 1 +-----------> msi-x address 1 | + +-----------------+ | +----------------+ +-----------------+ + | bar1 | | | doorbell 2 +---------+ | | + +-----------------+----+ +----------------+ | | | + | bar2 | | doorbell 3 +-------+ | +-----------------+ + +-----------------+----+ +----------------+ | +-> msi-x address 2 | + | bar3 | | | doorbell 4 +-----+ | +-----------------+ + +-----------------+ | |----------------+ | | | | + | bar4 | | | | | | +-----------------+ + +-----------------+ | | mw1 +---+ | +-->+ msi-x address 3|| + | bar5 | | | | | | +-----------------+ + +-----------------+ +----->-----------------+ | | | | + ep controller 1 | | | | +-----------------+ + | | | +---->+ msi-x address 4 | + +----------------+ | +-----------------+ + ep controller 2 | | | + (ob space) | | | + +-------> mw1 | + | | + | | + +-----------------+ + | | + | | + | | + | | + | | + +-----------------+ + pci address space + (managed by host2) + +above diagram shows how the doorbell and memory window 1 is mapped so that +host1 can raise doorbell interrupt on host2 and also how host1 can access +buffers exposed by host2 using memory window1 (mw1). here doorbell and +memory window 1 regions are allocated in ep controller 2 outbound (ob) address +space. allocating and configuring bars for doorbell and memory window1 +is done during the initialization phase of ntb endpoint function driver. +mapping from ep controller 2 ob space to pci address space is done when host2 +sends cmd_configure_mw/cmd_configure_doorbell. + +modeling optional memory windows: +--------------------------------- + +this is modeled the same was as mw1 but each of the additional memory windows +is mapped to separate bars.
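the config-region layout in the table above (all fields 32 bits wide, in the listed order) can be sketched as a c struct; the field and command names below follow the spec's wording but are a hypothetical model, not the actual epf ntb driver's definitions:

```c
#include <stdint.h>
#include <stddef.h>

/* hypothetical model of the documented config region: every field is
 * 32 bits wide and laid out in the order the spec's table shows */
struct epf_ntb_ctrl {
	uint32_t command;
	uint32_t argument;
	uint32_t status;
	uint32_t topology;
	uint32_t addr_lo;	/* address (lower 32) */
	uint32_t addr_hi;	/* address (upper 32) */
	uint32_t size;
	uint32_t num_mws;	/* no of memory window */
	uint32_t mw1_offset;	/* memory window1 offset */
	uint32_t spad_offset;
	uint32_t spad_count;
	uint32_t db_entry_size;
	uint32_t db_data[32];	/* epf ntb supports 32 doorbells */
};

/* the three commands the spec defines */
enum epf_ntb_cmd {
	CMD_CONFIGURE_DOORBELL	= 0x1,
	CMD_CONFIGURE_MW	= 0x2,
	CMD_LINK_UP		= 0x3,
};
```

since every member is a uint32_t there is no padding: spad_offset lands at byte 36, the doorbell data array starts at byte 48, and the whole region spans 176 bytes for 32 doorbell data slots.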
|
Non-Transparent Bridge (NTB)
|
13bccf873808ac9516089760efce7ea18b7484a9
|
kishon vijay abraham i
|
documentation
|
pci
|
endpoint
|
pci: endpoint: make *_get_first_free_bar() take into account 64 bit bar
|
pci_epc_get_first_free_bar() uses only the "reserved_bar" member in epc_features to get the first unreserved bar. however, if a reserved bar is also a 64-bit bar, the bar adjacent to it shouldn't be returned either (since a 64-bit bar uses two bars).
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for a thermal power management framework to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
implement ntb controller using multiple pci ep
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['c']
| 1
| 10
| 2
|
--- diff --git a/drivers/pci/endpoint/pci-epc-core.c b/drivers/pci/endpoint/pci-epc-core.c --- a/drivers/pci/endpoint/pci-epc-core.c +++ b/drivers/pci/endpoint/pci-epc-core.c - int free_bar; + unsigned long free_bar; - free_bar = ffz(epc_features->reserved_bar); + /* find if the reserved bar is also a 64-bit bar */ + free_bar = epc_features->reserved_bar & epc_features->bar_fixed_64bit; + + /* set the adjacent bit if the reserved bar is also a 64-bit bar */ + free_bar <<= 1; + free_bar |= epc_features->reserved_bar; + + /* now find the free bar */ + free_bar = ffz(free_bar);
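the bitmap trick in the diff above is compact enough to model in isolation: mark the bar adjacent to every reserved 64-bit bar as reserved too, then take the first zero bit. the sketch below mirrors that logic in plain user-space c (the parameter names echo struct pci_epc_features, but this is not the kernel function itself):

```c
#include <strings.h>	/* ffs(), used to model the kernel's ffz() */

/* stand-alone model of the patched pci_epc_get_first_free_bar() logic */
unsigned int first_free_bar(unsigned long reserved_bar,
			    unsigned long bar_fixed_64bit)
{
	unsigned long free_bar;

	/* find which reserved bars are also 64-bit bars */
	free_bar = reserved_bar & bar_fixed_64bit;

	/* a 64-bit bar consumes the adjacent bar too, so mark it reserved */
	free_bar <<= 1;
	free_bar |= reserved_bar;

	/* first zero bit is the first genuinely free bar; ffz(x) == ffs(~x) - 1 */
	return (unsigned int)(ffs((int)~free_bar) - 1);
}
```

for example, with bar0 reserved and fixed to 64 bits, bar1 is consumed as the upper half, so the first usable bar is bar2 rather than bar1.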
|
Non-Transparent Bridge (NTB)
|
959a48d0eac0321948c9f3d1707ba22c100e92d5
|
kishon vijay abraham i
|
drivers
|
pci
|
endpoint
|
pci: endpoint: add helper api to get the 'next' unreserved bar
|
add an api to get the next unreserved bar starting from a given bar number that can be used by the endpoint function.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for a thermal power management framework to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
implement ntb controller using multiple pci ep
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['h', 'c']
| 2
| 24
| 4
|
--- diff --git a/drivers/pci/endpoint/pci-epc-core.c b/drivers/pci/endpoint/pci-epc-core.c --- a/drivers/pci/endpoint/pci-epc-core.c +++ b/drivers/pci/endpoint/pci-epc-core.c - * invoke to get the first unreserved bar that can be used for endpoint + * invoke to get the first unreserved bar that can be used by the endpoint +{ + return pci_epc_get_next_free_bar(epc_features, bar_0); +} +export_symbol_gpl(pci_epc_get_first_free_bar); + +/** + * pci_epc_get_next_free_bar() - helper to get unreserved bar starting from @bar + * @epc_features: pci_epc_features structure that holds the reserved bar bitmap + * @bar: the starting bar number from where unreserved bar should be searched + * + * invoke to get the next unreserved bar starting from @bar that can be used + * for endpoint function. for any incorrect value in reserved_bar return '0'. + */ +unsigned int pci_epc_get_next_free_bar(const struct pci_epc_features + *epc_features, enum pci_barno bar) + /* if 'bar - 1' is a 64-bit bar, move to the next bar */ + if ((epc_features->bar_fixed_64bit << 1) & 1 << bar) + bar++; + - /* now find the free bar */ - free_bar = ffz(free_bar); + free_bar = find_next_zero_bit(&free_bar, 6, bar); -export_symbol_gpl(pci_epc_get_first_free_bar); +export_symbol_gpl(pci_epc_get_next_free_bar); diff --git a/include/linux/pci-epc.h b/include/linux/pci-epc.h --- a/include/linux/pci-epc.h +++ b/include/linux/pci-epc.h +unsigned int pci_epc_get_next_free_bar(const struct pci_epc_features + *epc_features, enum pci_barno bar);
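the new helper's behaviour can likewise be modelled stand-alone: skip forward one slot when the bar just below the starting point is a 64-bit bar (its upper half spills into the starting slot), then scan the combined reserved mask for the next zero bit, the way find_next_zero_bit() does over the six possible bars. this is a hedged sketch, not the kernel code; the -1 stands in for the error value the later *_free_bar() patch in this series introduces:

```c
#define NUM_BARS 6	/* a pci function has bars 0..5 */

/* hypothetical stand-alone model of pci_epc_get_next_free_bar() */
int next_free_bar(unsigned long reserved_bar, unsigned long bar_fixed_64bit,
		  int bar)
{
	unsigned long used;
	int i;

	/* if 'bar - 1' is a 64-bit bar, its upper half occupies 'bar' */
	if ((bar_fixed_64bit << 1) & (1UL << bar))
		bar++;

	/* a reserved 64-bit bar also consumes the adjacent bar */
	used = (reserved_bar & bar_fixed_64bit) << 1;
	used |= reserved_bar;

	/* scan for the next zero bit, as find_next_zero_bit() would */
	for (i = bar; i < NUM_BARS; i++)
		if (!(used & (1UL << i)))
			return i;
	return -1;	/* no free bar left */
}
```

starting the search at bar1 when bar0 is a reserved 64-bit bar skips straight to bar2, and a fully reserved bitmap yields the error value instead of a bogus bar number.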
|
Non-Transparent Bridge (NTB)
|
fa8fef0e104a23efe568b835d9e7e188d1d97610
|
kishon vijay abraham i
|
drivers
|
pci
|
endpoint
|
pci: endpoint: make *_free_bar() to return error codes on failure
|
modify pci_epc_get_next_free_bar() and pci_epc_get_first_free_bar() to return error values if there are no free bars available.
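the bar-search behaviour described above can be sketched as a plain userspace model; this is illustrative only, not the kernel code — the helper names, the bitmap parameters and the `NO_BAR` sentinel merely mirror the commit's `pci_epc_get_next_free_bar()` and `enum pci_barno`:

```c
#include <assert.h>

/* hypothetical userspace model; BAR_0..BAR_5 and NO_BAR mirror enum pci_barno */
enum { BAR_0 = 0, BAR_5 = 5, NO_BAR = -1 };

/* reserved_bar: bitmap of BARs the controller reserves;
 * bar_fixed_64bit: bitmap of BARs that are 64-bit (each occupies two slots) */
static int get_next_free_bar(unsigned int reserved_bar,
                             unsigned int bar_fixed_64bit, int bar)
{
	/* if 'bar - 1' is a 64-bit BAR, 'bar' is its upper half: skip it */
	if ((bar_fixed_64bit << 1) & (1u << bar))
		bar++;

	/* first unreserved BAR from 'bar' onwards */
	for (; bar <= BAR_5; bar++)
		if (!(reserved_bar & (1u << bar)))
			return bar;

	return NO_BAR;	/* no free BAR left: report failure instead of BAR 0 */
}

static int get_first_free_bar(unsigned int reserved_bar,
                              unsigned int bar_fixed_64bit)
{
	return get_next_free_bar(reserved_bar, bar_fixed_64bit, BAR_0);
}
```

the key point of the commit is visible in the last return: before the change a "no free bar" condition was indistinguishable from "bar 0 is free", because both returned 0.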
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
implement ntb controller using multiple pci ep
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['h', 'c']
| 4
| 13
| 10
|
--- diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c --- a/drivers/pci/endpoint/functions/pci-epf-test.c +++ b/drivers/pci/endpoint/functions/pci-epf-test.c + if (test_reg_bar < 0) + return -einval; diff --git a/drivers/pci/endpoint/pci-epc-core.c b/drivers/pci/endpoint/pci-epc-core.c --- a/drivers/pci/endpoint/pci-epc-core.c +++ b/drivers/pci/endpoint/pci-epc-core.c -unsigned int pci_epc_get_first_free_bar(const struct pci_epc_features - *epc_features) +enum pci_barno +pci_epc_get_first_free_bar(const struct pci_epc_features *epc_features) -unsigned int pci_epc_get_next_free_bar(const struct pci_epc_features - *epc_features, enum pci_barno bar) +enum pci_barno pci_epc_get_next_free_bar(const struct pci_epc_features + *epc_features, enum pci_barno bar) - return 0; + return bar_0; - return 0; + return no_bar; diff --git a/include/linux/pci-epc.h b/include/linux/pci-epc.h --- a/include/linux/pci-epc.h +++ b/include/linux/pci-epc.h -unsigned int pci_epc_get_first_free_bar(const struct pci_epc_features - *epc_features); -unsigned int pci_epc_get_next_free_bar(const struct pci_epc_features - *epc_features, enum pci_barno bar); +enum pci_barno +pci_epc_get_first_free_bar(const struct pci_epc_features *epc_features); +enum pci_barno pci_epc_get_next_free_bar(const struct pci_epc_features + *epc_features, enum pci_barno bar); diff --git a/include/linux/pci-epf.h b/include/linux/pci-epf.h --- a/include/linux/pci-epf.h +++ b/include/linux/pci-epf.h + no_bar = -1,
|
Non-Transparent Bridge (NTB)
|
0e27aeccfa3d1bab7c6a29fb8e6fcedbad7b09a8
|
kishon vijay abraham i
|
drivers
|
pci
|
endpoint, functions
|
pci: endpoint: remove unused pci_epf_match_device()
|
remove unused pci_epf_match_device() function added in pci-epf-core.c
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
implement ntb controller using multiple pci ep
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['h', 'c']
| 2
| 0
| 18
|
--- diff --git a/drivers/pci/endpoint/pci-epf-core.c b/drivers/pci/endpoint/pci-epf-core.c --- a/drivers/pci/endpoint/pci-epf-core.c +++ b/drivers/pci/endpoint/pci-epf-core.c -const struct pci_epf_device_id * -pci_epf_match_device(const struct pci_epf_device_id *id, struct pci_epf *epf) -{ - if (!id || !epf) - return null; - - while (*id->name) { - if (strcmp(epf->name, id->name) == 0) - return id; - id++; - } - - return null; -} -export_symbol_gpl(pci_epf_match_device); - diff --git a/include/linux/pci-epf.h b/include/linux/pci-epf.h --- a/include/linux/pci-epf.h +++ b/include/linux/pci-epf.h -const struct pci_epf_device_id * -pci_epf_match_device(const struct pci_epf_device_id *id, struct pci_epf *epf);
|
Non-Transparent Bridge (NTB)
|
7e5a51ebb321537c4209cdd0c54c4c19b3ef960d
|
kishon vijay abraham i
|
drivers
|
pci
|
endpoint
|
pci: endpoint: add support to associate secondary epc with epf
|
in the case of standard endpoint functions, only one endpoint controller (epc) will be associated with an endpoint function (epf). however, to provide ntb (non-transparent bridge) functionality, two epcs should be associated with a single epf. add support to associate a secondary epc with an epf. this is in preparation for adding the ntb endpoint function driver.
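the primary/secondary dispatch pattern this commit introduces (select which controller and which bar array to operate on, based on an interface type) can be sketched in a minimal userspace model — the struct and enum names echo the kernel's, but the fields are simplified and the helper `epf_get_epc()` is a made-up illustration:

```c
#include <assert.h>
#include <stddef.h>

/* illustrative model of enum pci_epc_interface_type */
enum epc_interface_type { PRIMARY_INTERFACE, SECONDARY_INTERFACE };

struct epc { int id; };

/* simplified struct pci_epf: one controller per interface */
struct epf {
	struct epc *epc;      /* primary controller */
	struct epc *sec_epc;  /* secondary controller (the NTB use case) */
};

/* pick the controller for the requested interface, the way the reworked
 * pci_epf_alloc_space()/pci_epf_free_space() select epc and bar array */
static struct epc *epf_get_epc(struct epf *epf, enum epc_interface_type type)
{
	return (type == PRIMARY_INTERFACE) ? epf->epc : epf->sec_epc;
}
```

everything else in the diff (duplicated func_no, list head and bar array for the secondary interface) follows from this one branch.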
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
implement ntb controller using multiple pci ep
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['h', 'c']
| 6
| 125
| 38
|
--- diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c --- a/drivers/pci/endpoint/functions/pci-epf-test.c +++ b/drivers/pci/endpoint/functions/pci-epf-test.c - pci_epf_free_space(epf, epf_test->reg[bar], bar); + pci_epf_free_space(epf, epf_test->reg[bar], bar, + primary_interface); - pci_epf_free_space(epf, epf_test->reg[bar], bar); + pci_epf_free_space(epf, epf_test->reg[bar], bar, + primary_interface); - epc_features->align); + epc_features->align, primary_interface); - epc_features->align); + epc_features->align, + primary_interface); diff --git a/drivers/pci/endpoint/pci-ep-cfs.c b/drivers/pci/endpoint/pci-ep-cfs.c --- a/drivers/pci/endpoint/pci-ep-cfs.c +++ b/drivers/pci/endpoint/pci-ep-cfs.c - ret = pci_epc_add_epf(epc, epf); + ret = pci_epc_add_epf(epc, epf, primary_interface); - pci_epc_remove_epf(epc, epf); + pci_epc_remove_epf(epc, epf, primary_interface); - pci_epc_remove_epf(epc, epf); + pci_epc_remove_epf(epc, epf, primary_interface); diff --git a/drivers/pci/endpoint/pci-epc-core.c b/drivers/pci/endpoint/pci-epc-core.c --- a/drivers/pci/endpoint/pci-epc-core.c +++ b/drivers/pci/endpoint/pci-epc-core.c + * @type: identifies if the epc is connected to the primary or secondary + * interface of epf -int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf) +int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf, + enum pci_epc_interface_type type) + struct list_head *list; - if (epf->epc) + if (is_err_or_null(epc)) + return -einval; + + if (type == primary_interface && epf->epc) - if (is_err(epc)) - return -einval; + if (type == secondary_interface && epf->sec_epc) + return -ebusy; - epf->func_no = func_no; - epf->epc = epc; - - list_add_tail(&epf->list, &epc->pci_epf); + if (type == primary_interface) { + epf->func_no = func_no; + epf->epc = epc; + list = &epf->list; + } else { + epf->sec_epc_func_no = func_no; + epf->sec_epc = epc; + list = &epf->sec_epc_list; + } + list_add_tail(list, &epc->pci_epf); -void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf) +void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf, + enum pci_epc_interface_type type) + struct list_head *list; + u32 func_no = 0; + + if (type == primary_interface) { + func_no = epf->func_no; + list = &epf->list; + } else { + func_no = epf->sec_epc_func_no; + list = &epf->sec_epc_list; + } + - clear_bit(epf->func_no, &epc->function_num_map); - list_del(&epf->list); + clear_bit(func_no, &epc->function_num_map); + list_del(list); diff --git a/drivers/pci/endpoint/pci-epf-core.c b/drivers/pci/endpoint/pci-epf-core.c --- a/drivers/pci/endpoint/pci-epf-core.c +++ b/drivers/pci/endpoint/pci-epf-core.c + * @type: identifies if the allocated space is for primary epc or secondary epc -void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar) +void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar, + enum pci_epc_interface_type type) + struct pci_epf_bar *epf_bar; + struct pci_epc *epc; - dma_free_coherent(dev, epf->bar[bar].size, addr, - epf->bar[bar].phys_addr); + if (type == primary_interface) { + epc = epf->epc; + epf_bar = epf->bar; + } else { + epc = epf->sec_epc; + epf_bar = epf->sec_epc_bar; + } + + dev = epc->dev.parent; + dma_free_coherent(dev, epf_bar[bar].size, addr, + epf_bar[bar].phys_addr); - epf->bar[bar].phys_addr = 0; - epf->bar[bar].addr = null; - epf->bar[bar].size = 0; - epf->bar[bar].barno = 0; - epf->bar[bar].flags = 0; + epf_bar[bar].phys_addr = 0; + epf_bar[bar].addr = null; + epf_bar[bar].size = 0; + epf_bar[bar].barno = 0; + epf_bar[bar].flags = 0; + * @type: identifies if the allocation is for primary epc or secondary epc - size_t align) + size_t align, enum pci_epc_interface_type type) - void *space; - struct device *dev = epf->epc->dev.parent; + struct pci_epf_bar *epf_bar; + struct pci_epc *epc; + struct device *dev; + void *space; + if (type == primary_interface) { + epc = epf->epc; + epf_bar = epf->bar; + } else { + epc = epf->sec_epc; + epf_bar = epf->sec_epc_bar; + } + + dev = epc->dev.parent; - epf->bar[bar].phys_addr = phys_addr; - epf->bar[bar].addr = space; - epf->bar[bar].size = size; - epf->bar[bar].barno = bar; - epf->bar[bar].flags |= upper_32_bits(size) ? + epf_bar[bar].phys_addr = phys_addr; + epf_bar[bar].addr = space; + epf_bar[bar].size = size; + epf_bar[bar].barno = bar; + epf_bar[bar].flags |= upper_32_bits(size) ? diff --git a/include/linux/pci-epc.h b/include/linux/pci-epc.h --- a/include/linux/pci-epc.h +++ b/include/linux/pci-epc.h +enum pci_epc_interface_type { + unknown_interface = -1, + primary_interface, + secondary_interface, +}; + +static inline const char * +pci_epc_interface_string(enum pci_epc_interface_type type) +{ + switch (type) { + case primary_interface: + return "primary"; + case secondary_interface: + return "secondary"; + default: + return "unknown interface"; + } +} + -int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf); +int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf, + enum pci_epc_interface_type type); -void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf); +void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf, + enum pci_epc_interface_type type); diff --git a/include/linux/pci-epf.h b/include/linux/pci-epf.h --- a/include/linux/pci-epf.h +++ b/include/linux/pci-epf.h +enum pci_epc_interface_type; + * @sec_epc: the secondary epc device to which this epf device is bound + * @sec_epc_list: to add pci_epf as list of pci endpoint functions to secondary + * epc device + * @sec_epc_bar: represents the bar of epf device associated with secondary epc + * @sec_epc_func_no: unique (physical) function number within the secondary epc + + /* below members are to attach secondary epc to an endpoint function */ + struct pci_epc *sec_epc; + struct list_head sec_epc_list; + struct pci_epf_bar sec_epc_bar[6]; + u8 sec_epc_func_no; - size_t align); -void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar); + size_t align, enum pci_epc_interface_type type); +void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar, + enum pci_epc_interface_type type);
|
Non-Transparent Bridge (NTB)
|
63840ff5322373d665b2b9c59cd64233d5f0691e
|
kishon vijay abraham i
|
drivers
|
pci
|
endpoint, functions
|
pci: endpoint: add support in configfs to associate two epcs with epf
|
now that the pci endpoint core supports adding a secondary endpoint controller (epc) to an endpoint function (epf), add support in configfs to associate two epcs with an epf. this creates "primary" and "secondary" directories inside the directory created by users for the epf device. users have to add a symlink of an endpoint controller (pci_ep/controllers/) to the "primary" or "secondary" directory to bind the epf to the primary and secondary epf interfaces respectively. the existing method of linking the directory representing an epf device to the directory representing an epc device, to associate a single epc device with an epf device, will continue to work.
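the resulting configfs usage can be sketched as below; the controller and function names (`2900000.pcie-ep`, `2910000.pcie-ep`, `pci_epf_ntb`, `func1`) are hypothetical placeholders, only the directory layout follows the commit's description:

```shell
# assumes configfs is mounted at /sys/kernel/config and an ntb epf driver exists
cd /sys/kernel/config/pci_ep

# create the epf device directory as before
mkdir functions/pci_epf_ntb/func1

# bind one controller to each interface of the function
ln -s controllers/2900000.pcie-ep functions/pci_epf_ntb/func1/primary
ln -s controllers/2910000.pcie-ep functions/pci_epf_ntb/func1/secondary
```

a single-epc function keeps using the old form, i.e. symlinking the epf directory itself into the controller directory.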
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
implement ntb controller using multiple pci ep
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['c', 'rst']
| 2
| 157
| 0
|
--- diff --git a/documentation/pci/endpoint/pci-endpoint-cfs.rst b/documentation/pci/endpoint/pci-endpoint-cfs.rst --- a/documentation/pci/endpoint/pci-endpoint-cfs.rst +++ b/documentation/pci/endpoint/pci-endpoint-cfs.rst + ... primary/ + ... <symlink epc device1>/ + ... secondary/ + ... <symlink epc device2>/ + +if an epf device has to be associated with 2 epcs (like in the case of +non-transparent bridge), symlink of endpoint controller connected to primary +interface should be added in 'primary' directory and symlink of endpoint +controller connected to secondary interface should be added in 'secondary' +directory. diff --git a/drivers/pci/endpoint/pci-ep-cfs.c b/drivers/pci/endpoint/pci-ep-cfs.c --- a/drivers/pci/endpoint/pci-ep-cfs.c +++ b/drivers/pci/endpoint/pci-ep-cfs.c + struct config_group primary_epc_group; + struct config_group secondary_epc_group; + struct delayed_work cfs_work; +static int pci_secondary_epc_epf_link(struct config_item *epf_item, + struct config_item *epc_item) +{ + int ret; + struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent); + struct pci_epc_group *epc_group = to_pci_epc_group(epc_item); + struct pci_epc *epc = epc_group->epc; + struct pci_epf *epf = epf_group->epf; + + ret = pci_epc_add_epf(epc, epf, secondary_interface); + if (ret) + return ret; + + ret = pci_epf_bind(epf); + if (ret) { + pci_epc_remove_epf(epc, epf, secondary_interface); + return ret; + } + + return 0; +} + +static void pci_secondary_epc_epf_unlink(struct config_item *epc_item, + struct config_item *epf_item) +{ + struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent); + struct pci_epc_group *epc_group = to_pci_epc_group(epc_item); + struct pci_epc *epc; + struct pci_epf *epf; + + warn_on_once(epc_group->start); + + epc = epc_group->epc; + epf = epf_group->epf; + pci_epf_unbind(epf); + pci_epc_remove_epf(epc, epf, secondary_interface); +} + +static struct configfs_item_operations pci_secondary_epc_item_ops = { + .allow_link = pci_secondary_epc_epf_link, + .drop_link = pci_secondary_epc_epf_unlink, +}; + +static const struct config_item_type pci_secondary_epc_type = { + .ct_item_ops = &pci_secondary_epc_item_ops, + .ct_owner = this_module, +}; + +static struct config_group +*pci_ep_cfs_add_secondary_group(struct pci_epf_group *epf_group) +{ + struct config_group *secondary_epc_group; + + secondary_epc_group = &epf_group->secondary_epc_group; + config_group_init_type_name(secondary_epc_group, "secondary", + &pci_secondary_epc_type); + configfs_register_group(&epf_group->group, secondary_epc_group); + + return secondary_epc_group; +} + +static int pci_primary_epc_epf_link(struct config_item *epf_item, + struct config_item *epc_item) +{ + int ret; + struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent); + struct pci_epc_group *epc_group = to_pci_epc_group(epc_item); + struct pci_epc *epc = epc_group->epc; + struct pci_epf *epf = epf_group->epf; + + ret = pci_epc_add_epf(epc, epf, primary_interface); + if (ret) + return ret; + + ret = pci_epf_bind(epf); + if (ret) { + pci_epc_remove_epf(epc, epf, primary_interface); + return ret; + } + + return 0; +} + +static void pci_primary_epc_epf_unlink(struct config_item *epc_item, + struct config_item *epf_item) +{ + struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent); + struct pci_epc_group *epc_group = to_pci_epc_group(epc_item); + struct pci_epc *epc; + struct pci_epf *epf; + + warn_on_once(epc_group->start); + + epc = epc_group->epc; + epf = epf_group->epf; + pci_epf_unbind(epf); + pci_epc_remove_epf(epc, epf, primary_interface); +} + +static struct configfs_item_operations pci_primary_epc_item_ops = { + .allow_link = pci_primary_epc_epf_link, + .drop_link = pci_primary_epc_epf_unlink, +}; + +static const struct config_item_type pci_primary_epc_type = { + .ct_item_ops = &pci_primary_epc_item_ops, + .ct_owner = this_module, +}; + +static struct config_group +*pci_ep_cfs_add_primary_group(struct pci_epf_group *epf_group) +{ + struct config_group *primary_epc_group = &epf_group->primary_epc_group; + + config_group_init_type_name(primary_epc_group, "primary", + &pci_primary_epc_type); + configfs_register_group(&epf_group->group, primary_epc_group); + + return primary_epc_group; +} + +static void pci_epf_cfs_work(struct work_struct *work) +{ + struct pci_epf_group *epf_group; + struct config_group *group; + + epf_group = container_of(work, struct pci_epf_group, cfs_work.work); + group = pci_ep_cfs_add_primary_group(epf_group); + if (is_err(group)) { + pr_err("failed to create 'primary' epc interface "); + return; + } + + group = pci_ep_cfs_add_secondary_group(epf_group); + if (is_err(group)) { + pr_err("failed to create 'secondary' epc interface "); + return; + } +} + + init_delayed_work(&epf_group->cfs_work, pci_epf_cfs_work); + queue_delayed_work(system_wq, &epf_group->cfs_work, + msecs_to_jiffies(1)); +
|
Non-Transparent Bridge (NTB)
|
e85a2d7837622bd99c96f5bbc7f972da90c285a2
|
kishon vijay abraham i
|
drivers
|
pci
|
endpoint
|
pci: endpoint: add pci_epc_ops to map msi irq
|
add pci_epc_ops to map physical address to msi address and return msi data. the physical address is an address in the outbound region. this is required to implement doorbell functionality of ntb (non-transparent bridge) wherein epc on either side of the interface (primary and secondary) can directly write to the physical address (in outbound region) of the other interface to ring doorbell using msi.
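the doorbell arithmetic implied by the new op's parameters can be modelled in userspace; this is an illustrative sketch, not the kernel code — the struct, the helper names and the sequential-msi-data assumption (multi-message msi encodes the interrupt number in the low data bits) are mine, only the field meanings come from the commit:

```c
#include <assert.h>
#include <stdint.h>

/* values a caller would get back from pci_epc_map_msi_irq() (simplified) */
struct msi_map {
	uint64_t phys_addr;        /* base of the outbound doorbell region */
	uint32_t entry_size;       /* outbound window size per interrupt */
	uint32_t msi_addr_offset;  /* offset of the msi address in a window */
	uint32_t msi_data;         /* data that raises interrupt 1 */
};

/* address the peer writes to in order to ring doorbell 'interrupt_num' */
static uint64_t doorbell_addr(const struct msi_map *m, uint8_t interrupt_num)
{
	return m->phys_addr +
	       (uint64_t)(interrupt_num - 1) * m->entry_size +
	       m->msi_addr_offset;
}

/* data the peer writes (assumes sequential msi data per interrupt) */
static uint32_t doorbell_data(const struct msi_map *m, uint8_t interrupt_num)
{
	return m->msi_data + (interrupt_num - 1);
}
```

this is why the op returns an address *offset* rather than an address: the msi target usually isn't aligned to the outbound window, so each per-interrupt window carries the same offset.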
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
implement ntb controller using multiple pci ep
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['h', 'c']
| 2
| 49
| 0
|
--- diff --git a/drivers/pci/endpoint/pci-epc-core.c b/drivers/pci/endpoint/pci-epc-core.c --- a/drivers/pci/endpoint/pci-epc-core.c +++ b/drivers/pci/endpoint/pci-epc-core.c +/** + * pci_epc_map_msi_irq() - map physical address to msi address and return + * msi data + * @epc: the epc device which has the msi capability + * @func_no: the physical endpoint function number in the epc device + * @phys_addr: the physical address of the outbound region + * @interrupt_num: the msi interrupt number + * @entry_size: size of outbound address region for each interrupt + * @msi_data: the data that should be written in order to raise msi interrupt + * with interrupt number as 'interrupt num' + * @msi_addr_offset: offset of msi address from the aligned outbound address + * to which the msi address is mapped + * + * invoke to map physical address to msi address and return msi data. the + * physical address should be an address in the outbound region. this is + * required to implement doorbell functionality of ntb wherein epc on either + * side of the interface (primary and secondary) can directly write to the + * physical address (in outbound region) of the other interface to ring + * doorbell. + */ +int pci_epc_map_msi_irq(struct pci_epc *epc, u8 func_no, phys_addr_t phys_addr, + u8 interrupt_num, u32 entry_size, u32 *msi_data, + u32 *msi_addr_offset) +{ + int ret; + + if (is_err_or_null(epc)) + return -einval; + + if (!epc->ops->map_msi_irq) + return -einval; + + mutex_lock(&epc->lock); + ret = epc->ops->map_msi_irq(epc, func_no, phys_addr, interrupt_num, + entry_size, msi_data, msi_addr_offset); + mutex_unlock(&epc->lock); + + return ret; +} +export_symbol_gpl(pci_epc_map_msi_irq); + diff --git a/include/linux/pci-epc.h b/include/linux/pci-epc.h --- a/include/linux/pci-epc.h +++ b/include/linux/pci-epc.h + * @map_msi_irq: ops to map physical address to msi address and return msi data + int (*map_msi_irq)(struct pci_epc *epc, u8 func_no, + phys_addr_t phys_addr, u8 interrupt_num, + u32 entry_size, u32 *msi_data, + u32 *msi_addr_offset); +int pci_epc_map_msi_irq(struct pci_epc *epc, u8 func_no, + phys_addr_t phys_addr, u8 interrupt_num, + u32 entry_size, u32 *msi_data, u32 *msi_addr_offset);
|
Non-Transparent Bridge (NTB)
|
87d5972e476f6c4e98a0abce713c54c6f40661b0
|
kishon vijay abraham i
|
drivers
|
pci
|
endpoint
|
pci: endpoint: add pci_epf_ops to expose function-specific attrs
|
in addition to the attributes that are generic across function drivers documented in documentation/pci/endpoint/pci-endpoint-cfs.rst, there could be function-specific attributes that have to be exposed by the function driver to be configured by the user. add ->add_cfs() in pci_epf_ops, to be populated by the function driver if it has to expose any function-specific attributes, and pci_epf_type_add_cfs(), to be invoked by pci-ep-cfs.c when a sub-directory of the main function directory is created.
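the optional-hook dispatch this commit adds (only call ->add_cfs() when the bound driver provides one, otherwise report "nothing to expose") can be sketched as a small userspace model; the names here are illustrative stand-ins for the kernel structures:

```c
#include <assert.h>
#include <stddef.h>

struct cfs_group { const char *name; };

/* simplified pci_epf_ops: the add_cfs hook is optional and may be NULL */
struct epf_ops {
	struct cfs_group *(*add_cfs)(void);
};

static struct cfs_group ntb_group = { "pci_epf_ntb.0" };
static struct cfs_group *ntb_add_cfs(void) { return &ntb_group; }

/* core-side dispatcher, modelled on pci_epf_type_add_cfs(): NULL means
 * "this function driver has no function-specific attributes" */
static struct cfs_group *epf_type_add_cfs(const struct epf_ops *ops)
{
	if (!ops || !ops->add_cfs)
		return NULL;
	return ops->add_cfs();
}
```

in the kernel the real hook also receives the epf device and the parent config_group, and the call is serialized under the epf mutex.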
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
implement ntb controller using multiple pci ep
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['h', 'c']
| 2
| 37
| 0
|
--- diff --git a/drivers/pci/endpoint/pci-epf-core.c b/drivers/pci/endpoint/pci-epf-core.c --- a/drivers/pci/endpoint/pci-epf-core.c +++ b/drivers/pci/endpoint/pci-epf-core.c +/** + * pci_epf_type_add_cfs() - help function drivers to expose function specific + * attributes in configfs + * @epf: the epf device that has to be configured using configfs + * @group: the parent configfs group (corresponding to entries in + * pci_epf_device_id) + * + * invoke to expose function specific attributes in configfs. if the function + * driver does not have anything to expose (attributes configured by user), + * return null. + */ +struct config_group *pci_epf_type_add_cfs(struct pci_epf *epf, + struct config_group *group) +{ + struct config_group *epf_type_group; + + if (!epf->driver) { + dev_err(&epf->dev, "epf device not bound to driver "); + return null; + } + + if (!epf->driver->ops->add_cfs) + return null; + + mutex_lock(&epf->lock); + epf_type_group = epf->driver->ops->add_cfs(epf, group); + mutex_unlock(&epf->lock); + + return epf_type_group; +} +export_symbol_gpl(pci_epf_type_add_cfs); + diff --git a/include/linux/pci-epf.h b/include/linux/pci-epf.h --- a/include/linux/pci-epf.h +++ b/include/linux/pci-epf.h + * @add_cfs: ops to initialize function specific configfs attributes + struct config_group *(*add_cfs)(struct pci_epf *epf, + struct config_group *group); +struct config_group *pci_epf_type_add_cfs(struct pci_epf *epf, + struct config_group *group);
|
Non-Transparent Bridge (NTB)
|
256ae475201b16fd69e00dd6c2d14035e4ea5745
|
kishon vijay abraham i
|
drivers
|
pci
|
endpoint
|
pci: endpoint: allow user to create sub-directory of 'epf device' directory
|
documentation/pci/endpoint/pci-endpoint-cfs.rst explains how a user has to create a directory in order to create an 'epf device' that can be configured/probed by an 'epf driver'.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
implement ntb controller using multiple pci ep
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['h', 'c']
| 2
| 26
| 0
|
--- diff --git a/drivers/pci/endpoint/pci-ep-cfs.c b/drivers/pci/endpoint/pci-ep-cfs.c --- a/drivers/pci/endpoint/pci-ep-cfs.c +++ b/drivers/pci/endpoint/pci-ep-cfs.c +static struct config_group *pci_epf_type_make(struct config_group *group, + const char *name) +{ + struct pci_epf_group *epf_group = to_pci_epf_group(&group->cg_item); + struct config_group *epf_type_group; + + epf_type_group = pci_epf_type_add_cfs(epf_group->epf, group); + return epf_type_group; +} + +static void pci_epf_type_drop(struct config_group *group, + struct config_item *item) +{ + config_item_put(item); +} + +static struct configfs_group_operations pci_epf_type_group_ops = { + .make_group = &pci_epf_type_make, + .drop_item = &pci_epf_type_drop, +}; + + .ct_group_ops = &pci_epf_type_group_ops, + epf->group = &epf_group->group; diff --git a/include/linux/pci-epf.h b/include/linux/pci-epf.h --- a/include/linux/pci-epf.h +++ b/include/linux/pci-epf.h +#include <linux/configfs.h> + * @group: configfs group associated with the epf device + struct config_group *group;
|
Non-Transparent Bridge (NTB)
|
38ad827e3bc0f0e94628ee1d8dc31e778d9be40f
|
kishon vijay abraham i
|
drivers
|
pci
|
endpoint
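The record above wires a make_group/drop_item pair into the 'epf device' directory so that a mkdir beneath it is forwarded to the bound function driver. The shape of that ops table can be sketched as below; the types are illustrative stand-ins for the real ones in <linux/configfs.h>, and reference dropping is reduced to a bare counter.

```c
#include <stddef.h>

/* illustrative stand-ins; the real types come from <linux/configfs.h> */
struct config_item { int refcount; };
struct config_group { struct config_item cg_item; };

/* the commit wires a make_group/drop_item pair into the 'epf device'
 * directory so `mkdir` under it reaches the function driver */
struct configfs_group_ops_sketch {
	struct config_group *(*make_group)(struct config_group *group,
					   const char *name);
	void (*drop_item)(struct config_group *group, struct config_item *item);
};

static struct config_group epf_type_group;	/* driver-specific sub-directory */

static struct config_group *pci_epf_type_make(struct config_group *group,
					      const char *name)
{
	(void)group; (void)name;
	/* in the kernel this delegates to pci_epf_type_add_cfs() */
	epf_type_group.cg_item.refcount = 1;	/* new directory, one reference */
	return &epf_type_group;
}

static void pci_epf_type_drop(struct config_group *group,
			      struct config_item *item)
{
	(void)group;
	item->refcount--;	/* config_item_put(item) in the kernel */
}

static const struct configfs_group_ops_sketch pci_epf_type_group_ops = {
	.make_group = pci_epf_type_make,
	.drop_item  = pci_epf_type_drop,
};
```

Pairing every make_group with a drop_item mirrors how configfs balances directory creation against the final reference drop on rmdir.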
|
pci: cadence: implement ->msi_map_irq() ops
|
implement the ->msi_map_irq() op in order to map a physical address to an msi address and return the msi data.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
implement ntb controller using multiple pci ep
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['c']
| 1
| 53
| 0
|
--- diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c --- a/drivers/pci/controller/cadence/pcie-cadence-ep.c +++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c +static int cdns_pcie_ep_map_msi_irq(struct pci_epc *epc, u8 fn, + phys_addr_t addr, u8 interrupt_num, + u32 entry_size, u32 *msi_data, + u32 *msi_addr_offset) +{ + struct cdns_pcie_ep *ep = epc_get_drvdata(epc); + u32 cap = cdns_pcie_ep_func_msi_cap_offset; + struct cdns_pcie *pcie = &ep->pcie; + u64 pci_addr, pci_addr_mask = 0xff; + u16 flags, mme, data, data_mask; + u8 msi_count; + int ret; + int i; + + /* check whether the msi feature has been enabled by the pci host. */ + flags = cdns_pcie_ep_fn_readw(pcie, fn, cap + pci_msi_flags); + if (!(flags & pci_msi_flags_enable)) + return -einval; + + /* get the number of enabled msis */ + mme = (flags & pci_msi_flags_qsize) >> 4; + msi_count = 1 << mme; + if (!interrupt_num || interrupt_num > msi_count) + return -einval; + + /* compute the data value to be written. */ + data_mask = msi_count - 1; + data = cdns_pcie_ep_fn_readw(pcie, fn, cap + pci_msi_data_64); + data = data & ~data_mask; + + /* get the pci address where to write the data into. */ + pci_addr = cdns_pcie_ep_fn_readl(pcie, fn, cap + pci_msi_address_hi); + pci_addr <<= 32; + pci_addr |= cdns_pcie_ep_fn_readl(pcie, fn, cap + pci_msi_address_lo); + pci_addr &= genmask_ull(63, 2); + + for (i = 0; i < interrupt_num; i++) { + ret = cdns_pcie_ep_map_addr(epc, fn, addr, + pci_addr & ~pci_addr_mask, + entry_size); + if (ret) + return ret; + addr = addr + entry_size; + } + + *msi_data = data; + *msi_addr_offset = pci_addr & pci_addr_mask; + + return 0; +} + + .align = 256, + .map_msi_irq = cdns_pcie_ep_map_msi_irq,
|
Non-Transparent Bridge (NTB)
|
dbcc542f36086abcaec28a858b17f2c358d57973
|
kishon vijay abraham i, tom joseph (tjoseph@cadence.com)
|
drivers
|
pci
|
cadence, controller
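The cadence ->msi_map_irq() diff above boils down to a few masks: the enabled-vector count is 1 << MME, the low bits of the message data select the vector, and the MSI address splits into a 256-byte-aligned base (pci_addr_mask = 0xff) plus an offset inside that window. A userspace restatement of just that arithmetic follows; the struct and function names are hypothetical, only the masks and shifts track the driver.

```c
#include <stdint.h>

/* a userspace re-statement of the address/data arithmetic inside
 * cdns_pcie_ep_map_msi_irq() from the diff above; the struct and function
 * names are hypothetical, the masks and shifts follow the driver */
struct msi_map_sketch {
	uint64_t map_base;	/* pci address rounded down to the 0x100 window */
	uint32_t msi_data;	/* message data with the per-vector low bits cleared */
	uint32_t addr_offset;	/* msi address offset inside the mapped window */
};

static int msi_map_compute(uint16_t mme, uint16_t cap_data, uint64_t pci_addr,
			   uint8_t interrupt_num, struct msi_map_sketch *out)
{
	const uint64_t pci_addr_mask = 0xff;
	uint8_t msi_count = 1 << mme;	/* mme = (flags & PCI_MSI_FLAGS_QSIZE) >> 4 */
	uint16_t data_mask = msi_count - 1;

	/* same validity check as the driver: vector must be 1..msi_count */
	if (!interrupt_num || interrupt_num > msi_count)
		return -1;		/* -EINVAL */

	pci_addr &= ~0x3ULL;		/* GENMASK_ULL(63, 2): dword-align */
	out->map_base = pci_addr & ~pci_addr_mask;
	out->addr_offset = (uint32_t)(pci_addr & pci_addr_mask);
	out->msi_data = (uint32_t)(cap_data & (uint16_t)~data_mask);
	return 0;
}
```

With MME = 3 the host has enabled 8 vectors, so the bottom 3 bits of the message data are cleared and later OR-ed with the vector index by the caller.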
|
pci: cadence: configure lm_ep_func_cfg based on epc->function_num_map
|
the number of functions supported by the endpoint controller is configured in lm_ep_func_cfg based on the func_no member of struct pci_epf. now that an endpoint function can be associated with two endpoint controllers (primary and secondary), using func_no alone will not suffice, since it only covers the case where the endpoint controller is associated with the primary interface of the endpoint function. instead, use epc->function_num_map, which already holds the configured-functions information (irrespective of whether the endpoint controller is associated with the primary or secondary interface).
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
implement ntb controller using multiple pci ep
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['c']
| 1
| 1
| 6
|
--- diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c --- a/drivers/pci/controller/cadence/pcie-cadence-ep.c +++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c - struct pci_epf *epf; - u32 cfg; - cfg = bit(0); - list_for_each_entry(epf, &epc->pci_epf, list) - cfg |= bit(epf->func_no); - cdns_pcie_writel(pcie, cdns_pcie_lm_ep_func_cfg, cfg); + cdns_pcie_writel(pcie, cdns_pcie_lm_ep_func_cfg, epc->function_num_map);
|
Non-Transparent Bridge (NTB)
|
a62074a9ba856082a60ff60693abd79f4b55177d
|
kishon vijay abraham i, tom joseph (tjoseph@cadence.com)
|
drivers
|
pci
|
cadence, controller
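The one-line change above replaces an open-coded walk over epc->pci_epf with the precomputed epc->function_num_map. The toy comparison below shows why the walk falls short once a function's secondary interface lands on this controller; all names are illustrative, only the bitmap logic reflects the commit.

```c
#include <stdint.h>

/* toy model of the change: the removed code walked only the functions bound
 * to this controller's primary interface, while epc->function_num_map (built
 * by the epc core) has a bit set for every configured function */
struct epf_sketch {
	uint8_t func_no;
	int bound_to_primary;	/* 0 when this epc is the secondary interface */
};

/* old behaviour: cfg = BIT(0) | BIT(func_no) for each primary-bound epf */
static uint32_t lm_ep_func_cfg_old(const struct epf_sketch *fns, int n)
{
	uint32_t cfg = 1u;	/* bit(0) */
	int i;

	for (i = 0; i < n; i++)
		if (fns[i].bound_to_primary)
			cfg |= 1u << fns[i].func_no;
	return cfg;
}

/* new behaviour: the precomputed map covers primary and secondary alike */
static uint32_t lm_ep_func_cfg_new(const struct epf_sketch *fns, int n)
{
	uint32_t function_num_map = 0;
	int i;

	for (i = 0; i < n; i++)
		function_num_map |= 1u << fns[i].func_no;
	return function_num_map;
}
```

With function 0 on the primary interface and function 1 on the secondary one, the old walk yields 0x1 and silently drops function 1, while the bitmap yields 0x3 and enables both.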
|
pci: endpoint: add ep function driver to provide ntb functionality
|
add a new endpoint function driver to provide ntb functionality using multiple pcie endpoint instances.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
implement ntb controller using multiple pci ep
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['kconfig', 'c', 'makefile']
| 3
| 2142
| 0
|
--- diff --git a/drivers/pci/endpoint/functions/kconfig b/drivers/pci/endpoint/functions/kconfig --- a/drivers/pci/endpoint/functions/kconfig +++ b/drivers/pci/endpoint/functions/kconfig + +config pci_epf_ntb + tristate "pci endpoint ntb driver" + depends on pci_endpoint + select configfs_fs + help + select this configuration option to enable the non-transparent + bridge (ntb) driver for pci endpoint. ntb driver implements ntb + controller functionality using multiple pcie endpoint instances. + it can support ntb endpoint function devices created using + device tree. + + if in doubt, say "n" to disable endpoint ntb driver. diff --git a/drivers/pci/endpoint/functions/makefile b/drivers/pci/endpoint/functions/makefile --- a/drivers/pci/endpoint/functions/makefile +++ b/drivers/pci/endpoint/functions/makefile +obj-$(config_pci_epf_ntb) += pci-epf-ntb.o diff --git a/drivers/pci/endpoint/functions/pci-epf-ntb.c b/drivers/pci/endpoint/functions/pci-epf-ntb.c --- /dev/null +++ b/drivers/pci/endpoint/functions/pci-epf-ntb.c +// spdx-license-identifier: gpl-2.0 +/** + * endpoint function driver to implement non-transparent bridge functionality + * + * copyright (c) 2020 texas instruments + * author: kishon vijay abraham i <kishon@ti.com> + */ + +/* + * the pci ntb function driver configures the soc with multiple pcie endpoint + * (ep) controller instances (see diagram below) in such a way that + * transactions from one ep controller are routed to the other ep controller. + * once pci ntb function driver configures the soc with multiple ep instances, + * host1 and host2 can communicate with each other using soc as a bridge. 
+ * + * +-------------+ +-------------+ + * | | | | + * | host1 | | host2 | + * | | | | + * +------^------+ +------^------+ + * | | + * | | + * +---------|-------------------------------------------------|---------+ + * | +------v------+ +------v------+ | + * | | | | | | + * | | ep | | ep | | + * | | controller1 | | controller2 | | + * | | <-----------------------------------> | | + * | | | | | | + * | | | | | | + * | | | soc with multiple ep instances | | | + * | | | (configured using ntb function) | | | + * | +-------------+ +-------------+ | + * +---------------------------------------------------------------------+ + */ + +#include <linux/delay.h> +#include <linux/io.h> +#include <linux/module.h> +#include <linux/slab.h> + +#include <linux/pci-epc.h> +#include <linux/pci-epf.h> + +static struct workqueue_struct *kpcintb_workqueue; + +#define command_configure_doorbell 1 +#define command_teardown_doorbell 2 +#define command_configure_mw 3 +#define command_teardown_mw 4 +#define command_link_up 5 +#define command_link_down 6 + +#define command_status_ok 1 +#define command_status_error 2 + +#define link_status_up bit(0) + +#define spad_count 64 +#define db_count 4 +#define ntb_mw_offset 2 +#define db_count_mask genmask(15, 0) +#define msix_enable bit(16) +#define max_db_count 32 +#define max_mw 4 + +enum epf_ntb_bar { + bar_config, + bar_peer_spad, + bar_db_mw1, + bar_mw2, + bar_mw3, + bar_mw4, +}; + +struct epf_ntb { + u32 num_mws; + u32 db_count; + u32 spad_count; + struct pci_epf *epf; + u64 mws_size[max_mw]; + struct config_group group; + struct epf_ntb_epc *epc[2]; +}; + +#define to_epf_ntb(epf_group) container_of((epf_group), struct epf_ntb, group) + +struct epf_ntb_epc { + u8 func_no; + bool linkup; + bool is_msix; + int msix_bar; + u32 spad_size; + struct pci_epc *epc; + struct epf_ntb *epf_ntb; + void __iomem *mw_addr[6]; + size_t msix_table_offset; + struct epf_ntb_ctrl *reg; + struct pci_epf_bar *epf_bar; + enum pci_barno epf_ntb_bar[6]; + struct 
delayed_work cmd_handler; + enum pci_epc_interface_type type; + const struct pci_epc_features *epc_features; +}; + +struct epf_ntb_ctrl { + u32 command; + u32 argument; + u16 command_status; + u16 link_status; + u32 topology; + u64 addr; + u64 size; + u32 num_mws; + u32 mw1_offset; + u32 spad_offset; + u32 spad_count; + u32 db_entry_size; + u32 db_data[max_db_count]; + u32 db_offset[max_db_count]; +} __packed; + +static struct pci_epf_header epf_ntb_header = { + .vendorid = pci_any_id, + .deviceid = pci_any_id, + .baseclass_code = pci_base_class_memory, + .interrupt_pin = pci_interrupt_inta, +}; + +/** + * epf_ntb_link_up() - raise link_up interrupt to both the hosts + * @ntb: ntb device that facilitates communication between host1 and host2 + * @link_up: true or false indicating link is up or down + * + * once ntb function in host1 and the ntb function in host2 invoke + * ntb_link_enable(), this ntb function driver will trigger a link event to + * the ntb client in both the hosts. + */ +static int epf_ntb_link_up(struct epf_ntb *ntb, bool link_up) +{ + enum pci_epc_interface_type type; + enum pci_epc_irq_type irq_type; + struct epf_ntb_epc *ntb_epc; + struct epf_ntb_ctrl *ctrl; + struct pci_epc *epc; + bool is_msix; + u8 func_no; + int ret; + + for (type = primary_interface; type <= secondary_interface; type++) { + ntb_epc = ntb->epc[type]; + epc = ntb_epc->epc; + func_no = ntb_epc->func_no; + is_msix = ntb_epc->is_msix; + ctrl = ntb_epc->reg; + if (link_up) + ctrl->link_status |= link_status_up; + else + ctrl->link_status &= ~link_status_up; + irq_type = is_msix ? 
pci_epc_irq_msix : pci_epc_irq_msi; + ret = pci_epc_raise_irq(epc, func_no, irq_type, 1); + if (ret) { + dev_err(&epc->dev, + "%s intf: failed to raise link up irq ", + pci_epc_interface_string(type)); + return ret; + } + } + + return 0; +} + +/** + * epf_ntb_configure_mw() - configure the outbound address space for one host + * to access the memory window of other host + * @ntb: ntb device that facilitates communication between host1 and host2 + * @type: primary interface or secondary interface + * @mw: index of the memory window (either 0, 1, 2 or 3) + * + * +-----------------+ +---->+----------------+-----------+-----------------+ + * | bar0 | | | doorbell 1 +-----------> msi|x address 1 | + * +-----------------+ | +----------------+ +-----------------+ + * | bar1 | | | doorbell 2 +---------+ | | + * +-----------------+----+ +----------------+ | | | + * | bar2 | | doorbell 3 +-------+ | +-----------------+ + * +-----------------+----+ +----------------+ | +-> msi|x address 2 | + * | bar3 | | | doorbell 4 +-----+ | +-----------------+ + * +-----------------+ | |----------------+ | | | | + * | bar4 | | | | | | +-----------------+ + * +-----------------+ | | mw1 +---+ | +-->+ msi|x address 3|| + * | bar5 | | | | | | +-----------------+ + * +-----------------+ +---->-----------------+ | | | | + * ep controller 1 | | | | +-----------------+ + * | | | +---->+ msi|x address 4 | + * +----------------+ | +-----------------+ + * (a) ep controller 2 | | | + * (ob space) | | | + * +-------> mw1 | + * | | + * | | + * (b) +-----------------+ + * | | + * | | + * | | + * | | + * | | + * +-----------------+ + * pci address space + * (managed by host2) + * + * this function performs stage (b) in the above diagram (see mw1) i.e., map ob + * address space of memory window to pci address space. 
+ * + * this operation requires 3 parameters + * 1) address in the outbound address space + * 2) address in the pci address space + * 3) size of the address region to be mapped + * + * the address in the outbound address space (for mw1, mw2, mw3 and mw4) is + * stored in epf_bar corresponding to bar_db_mw1 for mw1 and bar_mw2, bar_mw3 + * bar_mw4 for rest of the bars of epf_ntb_epc that is connected to host1. this + * is populated in epf_ntb_alloc_peer_mem() in this driver. + * + * the address and size of the pci address region that has to be mapped would + * be provided by host2 in ctrl->addr and ctrl->size of epf_ntb_epc that is + * connected to host2. + * + * please note memory window1 (mw1) and doorbell registers together will be + * mapped to a single bar (bar2) above for 32-bit bars. the exact bar that's + * used for memory window (mw) can be obtained from epf_ntb_bar[bar_db_mw1], + * epf_ntb_bar[bar_mw2], epf_ntb_bar[bar_mw2], epf_ntb_bar[bar_mw2]. + */ +static int epf_ntb_configure_mw(struct epf_ntb *ntb, + enum pci_epc_interface_type type, u32 mw) +{ + struct epf_ntb_epc *peer_ntb_epc, *ntb_epc; + struct pci_epf_bar *peer_epf_bar; + enum pci_barno peer_barno; + struct epf_ntb_ctrl *ctrl; + phys_addr_t phys_addr; + struct pci_epc *epc; + u64 addr, size; + int ret = 0; + u8 func_no; + + ntb_epc = ntb->epc[type]; + epc = ntb_epc->epc; + + peer_ntb_epc = ntb->epc[!type]; + peer_barno = peer_ntb_epc->epf_ntb_bar[mw + ntb_mw_offset]; + peer_epf_bar = &peer_ntb_epc->epf_bar[peer_barno]; + + phys_addr = peer_epf_bar->phys_addr; + ctrl = ntb_epc->reg; + addr = ctrl->addr; + size = ctrl->size; + if (mw + ntb_mw_offset == bar_db_mw1) + phys_addr += ctrl->mw1_offset; + + if (size > ntb->mws_size[mw]) { + dev_err(&epc->dev, + "%s intf: mw: %d req sz:%llxx > supported sz:%llx ", + pci_epc_interface_string(type), mw, size, + ntb->mws_size[mw]); + ret = -einval; + goto err_invalid_size; + } + + func_no = ntb_epc->func_no; + + ret = pci_epc_map_addr(epc, func_no, 
phys_addr, addr, size); + if (ret) + dev_err(&epc->dev, + "%s intf: failed to map memory window %d address ", + pci_epc_interface_string(type), mw); + +err_invalid_size: + + return ret; +} + +/** + * epf_ntb_teardown_mw() - teardown the configured ob atu + * @ntb: ntb device that facilitates communication between host1 and host2 + * @type: primary interface or secondary interface + * @mw: index of the memory window (either 0, 1, 2 or 3) + * + * teardown the configured ob atu configured in epf_ntb_configure_mw() using + * pci_epc_unmap_addr() + */ +static void epf_ntb_teardown_mw(struct epf_ntb *ntb, + enum pci_epc_interface_type type, u32 mw) +{ + struct epf_ntb_epc *peer_ntb_epc, *ntb_epc; + struct pci_epf_bar *peer_epf_bar; + enum pci_barno peer_barno; + struct epf_ntb_ctrl *ctrl; + phys_addr_t phys_addr; + struct pci_epc *epc; + u8 func_no; + + ntb_epc = ntb->epc[type]; + epc = ntb_epc->epc; + + peer_ntb_epc = ntb->epc[!type]; + peer_barno = peer_ntb_epc->epf_ntb_bar[mw + ntb_mw_offset]; + peer_epf_bar = &peer_ntb_epc->epf_bar[peer_barno]; + + phys_addr = peer_epf_bar->phys_addr; + ctrl = ntb_epc->reg; + if (mw + ntb_mw_offset == bar_db_mw1) + phys_addr += ctrl->mw1_offset; + func_no = ntb_epc->func_no; + + pci_epc_unmap_addr(epc, func_no, phys_addr); +} + +/** + * epf_ntb_configure_msi() - map ob address space to msi address + * @ntb: ntb device that facilitates communication between host1 and host2 + * @type: primary interface or secondary interface + * @db_count: number of doorbell interrupts to map + * + *+-----------------+ +----->+----------------+-----------+-----------------+ + *| bar0 | | | doorbell 1 +---+-------> msi address | + *+-----------------+ | +----------------+ | +-----------------+ + *| bar1 | | | doorbell 2 +---+ | | + *+-----------------+----+ +----------------+ | | | + *| bar2 | | doorbell 3 +---+ | | + *+-----------------+----+ +----------------+ | | | + *| bar3 | | | doorbell 4 +---+ | | + *+-----------------+ | |----------------+ | | + 
*| bar4 | | | | | | + *+-----------------+ | | mw1 | | | + *| bar5 | | | | | | + *+-----------------+ +----->-----------------+ | | + * ep controller 1 | | | | + * | | | | + * +----------------+ +-----------------+ + * (a) ep controller 2 | | + * (ob space) | | + * | mw1 | + * | | + * | | + * (b) +-----------------+ + * | | + * | | + * | | + * | | + * | | + * +-----------------+ + * pci address space + * (managed by host2) + * + * + * this function performs stage (b) in the above diagram (see doorbell 1, + * doorbell 2, doorbell 3, doorbell 4) i.e map ob address space corresponding to + * doorbell to msi address in pci address space. + * + * this operation requires 3 parameters + * 1) address reserved for doorbell in the outbound address space + * 2) msi-x address in the pcie address space + * 3) number of msi-x interrupts that has to be configured + * + * the address in the outbound address space (for the doorbell) is stored in + * epf_bar corresponding to bar_db_mw1 of epf_ntb_epc that is connected to + * host1. this is populated in epf_ntb_alloc_peer_mem() in this driver along + * with address for mw1. + * + * pci_epc_map_msi_irq() takes the msi address from msi capability register + * and maps the ob address (obtained in epf_ntb_alloc_peer_mem()) to the msi + * address. + * + * epf_ntb_configure_msi() also stores the msi data to raise each interrupt + * in db_data of the peer's control region. this helps the peer to raise + * doorbell of the other host by writing db_data to the bar corresponding to + * bar_db_mw1. 
+ */ +static int epf_ntb_configure_msi(struct epf_ntb *ntb, + enum pci_epc_interface_type type, u16 db_count) +{ + struct epf_ntb_epc *peer_ntb_epc, *ntb_epc; + u32 db_entry_size, db_data, db_offset; + struct pci_epf_bar *peer_epf_bar; + struct epf_ntb_ctrl *peer_ctrl; + enum pci_barno peer_barno; + phys_addr_t phys_addr; + struct pci_epc *epc; + u8 func_no; + int ret, i; + + ntb_epc = ntb->epc[type]; + epc = ntb_epc->epc; + + peer_ntb_epc = ntb->epc[!type]; + peer_barno = peer_ntb_epc->epf_ntb_bar[bar_db_mw1]; + peer_epf_bar = &peer_ntb_epc->epf_bar[peer_barno]; + peer_ctrl = peer_ntb_epc->reg; + db_entry_size = peer_ctrl->db_entry_size; + + phys_addr = peer_epf_bar->phys_addr; + func_no = ntb_epc->func_no; + + ret = pci_epc_map_msi_irq(epc, func_no, phys_addr, db_count, + db_entry_size, &db_data, &db_offset); + if (ret) { + dev_err(&epc->dev, "%s intf: failed to map msi irq ", + pci_epc_interface_string(type)); + return ret; + } + + for (i = 0; i < db_count; i++) { + peer_ctrl->db_data[i] = db_data | i; + peer_ctrl->db_offset[i] = db_offset; + } + + return 0; +} + +/** + * epf_ntb_configure_msix() - map ob address space to msi-x address + * @ntb: ntb device that facilitates communication between host1 and host2 + * @type: primary interface or secondary interface + * @db_count: number of doorbell interrupts to map + * + *+-----------------+ +----->+----------------+-----------+-----------------+ + *| bar0 | | | doorbell 1 +-----------> msi-x address 1 | + *+-----------------+ | +----------------+ +-----------------+ + *| bar1 | | | doorbell 2 +---------+ | | + *+-----------------+----+ +----------------+ | | | + *| bar2 | | doorbell 3 +-------+ | +-----------------+ + *+-----------------+----+ +----------------+ | +-> msi-x address 2 | + *| bar3 | | | doorbell 4 +-----+ | +-----------------+ + *+-----------------+ | |----------------+ | | | | + *| bar4 | | | | | | +-----------------+ + *+-----------------+ | | mw1 + | +-->+ msi-x address 3|| + *| bar5 | | | | | 
+-----------------+ + *+-----------------+ +----->-----------------+ | | | + * ep controller 1 | | | +-----------------+ + * | | +---->+ msi-x address 4 | + * +----------------+ +-----------------+ + * (a) ep controller 2 | | + * (ob space) | | + * | mw1 | + * | | + * | | + * (b) +-----------------+ + * | | + * | | + * | | + * | | + * | | + * +-----------------+ + * pci address space + * (managed by host2) + * + * this function performs stage (b) in the above diagram (see doorbell 1, + * doorbell 2, doorbell 3, doorbell 4) i.e map ob address space corresponding to + * doorbell to msi-x address in pci address space. + * + * this operation requires 3 parameters + * 1) address reserved for doorbell in the outbound address space + * 2) msi-x address in the pcie address space + * 3) number of msi-x interrupts that has to be configured + * + * the address in the outbound address space (for the doorbell) is stored in + * epf_bar corresponding to bar_db_mw1 of epf_ntb_epc that is connected to + * host1. this is populated in epf_ntb_alloc_peer_mem() in this driver along + * with address for mw1. + * + * the msi-x address is in the msi-x table of ep controller 2 and + * the count of doorbell is in ctrl->argument of epf_ntb_epc that is connected + * to host2. msi-x table is stored memory mapped to ntb_epc->msix_bar and the + * offset is in ntb_epc->msix_table_offset. from this epf_ntb_configure_msix() + * gets the msi-x address and data. + * + * epf_ntb_configure_msix() also stores the msi-x data to raise each interrupt + * in db_data of the peer's control region. this helps the peer to raise + * doorbell of the other host by writing db_data to the bar corresponding to + * bar_db_mw1. 
+ */ +static int epf_ntb_configure_msix(struct epf_ntb *ntb, + enum pci_epc_interface_type type, + u16 db_count) +{ + const struct pci_epc_features *epc_features; + struct epf_ntb_epc *peer_ntb_epc, *ntb_epc; + struct pci_epf_bar *peer_epf_bar, *epf_bar; + struct pci_epf_msix_tbl *msix_tbl; + struct epf_ntb_ctrl *peer_ctrl; + u32 db_entry_size, msg_data; + enum pci_barno peer_barno; + phys_addr_t phys_addr; + struct pci_epc *epc; + size_t align; + u64 msg_addr; + u8 func_no; + int ret, i; + + ntb_epc = ntb->epc[type]; + epc = ntb_epc->epc; + + epf_bar = &ntb_epc->epf_bar[ntb_epc->msix_bar]; + msix_tbl = epf_bar->addr + ntb_epc->msix_table_offset; + + peer_ntb_epc = ntb->epc[!type]; + peer_barno = peer_ntb_epc->epf_ntb_bar[bar_db_mw1]; + peer_epf_bar = &peer_ntb_epc->epf_bar[peer_barno]; + phys_addr = peer_epf_bar->phys_addr; + peer_ctrl = peer_ntb_epc->reg; + epc_features = ntb_epc->epc_features; + align = epc_features->align; + + func_no = ntb_epc->func_no; + db_entry_size = peer_ctrl->db_entry_size; + + for (i = 0; i < db_count; i++) { + msg_addr = align_down(msix_tbl[i].msg_addr, align); + msg_data = msix_tbl[i].msg_data; + ret = pci_epc_map_addr(epc, func_no, phys_addr, msg_addr, + db_entry_size); + if (ret) { + dev_err(&epc->dev, + "%s intf: failed to configure msi-x irq ", + pci_epc_interface_string(type)); + return ret; + } + phys_addr = phys_addr + db_entry_size; + peer_ctrl->db_data[i] = msg_data; + peer_ctrl->db_offset[i] = msix_tbl[i].msg_addr & (align - 1); + } + ntb_epc->is_msix = true; + + return 0; +} + +/** + * epf_ntb_configure_db() - configure the outbound address space for one host + * to ring the doorbell of other host + * @ntb: ntb device that facilitates communication between host1 and host2 + * @type: primary interface or secondary interface + * @db_count: count of the number of doorbells that has to be configured + * @msix: indicates whether msi-x or msi should be used + * + * invokes epf_ntb_configure_msix() or epf_ntb_configure_msi() 
required for + * one host to ring the doorbell of other host. + */ +static int epf_ntb_configure_db(struct epf_ntb *ntb, + enum pci_epc_interface_type type, + u16 db_count, bool msix) +{ + struct epf_ntb_epc *ntb_epc; + struct pci_epc *epc; + int ret; + + if (db_count > max_db_count) + return -einval; + + ntb_epc = ntb->epc[type]; + epc = ntb_epc->epc; + + if (msix) + ret = epf_ntb_configure_msix(ntb, type, db_count); + else + ret = epf_ntb_configure_msi(ntb, type, db_count); + + if (ret) + dev_err(&epc->dev, "%s intf: failed to configure db ", + pci_epc_interface_string(type)); + + return ret; +} + +/** + * epf_ntb_teardown_db() - unmap address in ob address space to msi/msi-x + * address + * @ntb: ntb device that facilitates communication between host1 and host2 + * @type: primary interface or secondary interface + * + * invoke pci_epc_unmap_addr() to unmap ob address to msi/msi-x address. + */ +static void +epf_ntb_teardown_db(struct epf_ntb *ntb, enum pci_epc_interface_type type) +{ + struct epf_ntb_epc *peer_ntb_epc, *ntb_epc; + struct pci_epf_bar *peer_epf_bar; + enum pci_barno peer_barno; + phys_addr_t phys_addr; + struct pci_epc *epc; + u8 func_no; + + ntb_epc = ntb->epc[type]; + epc = ntb_epc->epc; + + peer_ntb_epc = ntb->epc[!type]; + peer_barno = peer_ntb_epc->epf_ntb_bar[bar_db_mw1]; + peer_epf_bar = &peer_ntb_epc->epf_bar[peer_barno]; + phys_addr = peer_epf_bar->phys_addr; + func_no = ntb_epc->func_no; + + pci_epc_unmap_addr(epc, func_no, phys_addr); +} + +/** + * epf_ntb_cmd_handler() - handle commands provided by the ntb host + * @work: work_struct for the two epf_ntb_epc (primary and secondary) + * + * workqueue function that gets invoked for the two epf_ntb_epc + * periodically (once every 5ms) to see if it has received any commands + * from ntb host. the host can send commands to configure doorbell or + * configure memory window or to update link status. 
+ */ +static void epf_ntb_cmd_handler(struct work_struct *work) +{ + enum pci_epc_interface_type type; + struct epf_ntb_epc *ntb_epc; + struct epf_ntb_ctrl *ctrl; + u32 command, argument; + struct epf_ntb *ntb; + struct device *dev; + u16 db_count; + bool is_msix; + int ret; + + ntb_epc = container_of(work, struct epf_ntb_epc, cmd_handler.work); + ctrl = ntb_epc->reg; + command = ctrl->command; + if (!command) + goto reset_handler; + argument = ctrl->argument; + + ctrl->command = 0; + ctrl->argument = 0; + + ctrl = ntb_epc->reg; + type = ntb_epc->type; + ntb = ntb_epc->epf_ntb; + dev = &ntb->epf->dev; + + switch (command) { + case command_configure_doorbell: + db_count = argument & db_count_mask; + is_msix = argument & msix_enable; + ret = epf_ntb_configure_db(ntb, type, db_count, is_msix); + if (ret < 0) + ctrl->command_status = command_status_error; + else + ctrl->command_status = command_status_ok; + break; + case command_teardown_doorbell: + epf_ntb_teardown_db(ntb, type); + ctrl->command_status = command_status_ok; + break; + case command_configure_mw: + ret = epf_ntb_configure_mw(ntb, type, argument); + if (ret < 0) + ctrl->command_status = command_status_error; + else + ctrl->command_status = command_status_ok; + break; + case command_teardown_mw: + epf_ntb_teardown_mw(ntb, type, argument); + ctrl->command_status = command_status_ok; + break; + case command_link_up: + ntb_epc->linkup = true; + if (ntb->epc[primary_interface]->linkup && + ntb->epc[secondary_interface]->linkup) { + ret = epf_ntb_link_up(ntb, true); + if (ret < 0) + ctrl->command_status = command_status_error; + else + ctrl->command_status = command_status_ok; + goto reset_handler; + } + ctrl->command_status = command_status_ok; + break; + case command_link_down: + ntb_epc->linkup = false; + ret = epf_ntb_link_up(ntb, false); + if (ret < 0) + ctrl->command_status = command_status_error; + else + ctrl->command_status = command_status_ok; + break; + default: + dev_err(dev, "%s intf unknown 
command: %d ", + pci_epc_interface_string(type), command); + break; + } + +reset_handler: + queue_delayed_work(kpcintb_workqueue, &ntb_epc->cmd_handler, + msecs_to_jiffies(5)); +} + +/** + * epf_ntb_peer_spad_bar_clear() - clear peer scratchpad bar + * @ntb: ntb device that facilitates communication between host1 and host2 + * + *+-----------------+------->+------------------+ +-----------------+ + *| bar0 | | config region | | bar0 | + *+-----------------+----+ +------------------+<-------+-----------------+ + *| bar1 | | |scratchpad region | | bar1 | + *+-----------------+ +-->+------------------+<-------+-----------------+ + *| bar2 | local memory | bar2 | + *+-----------------+ +-----------------+ + *| bar3 | | bar3 | + *+-----------------+ +-----------------+ + *| bar4 | | bar4 | + *+-----------------+ +-----------------+ + *| bar5 | | bar5 | + *+-----------------+ +-----------------+ + * ep controller 1 ep controller 2 + * + * clear bar1 of ep controller 2 which contains the host2's peer scratchpad + * region. while bar1 is the default peer scratchpad bar, an ntb could have + * other bars for peer scratchpad (because of 64-bit bars or reserved bars). + * this function can get the exact bar used for peer scratchpad from + * epf_ntb_bar[bar_peer_spad]. + * + * since host2's peer scratchpad is also host1's self scratchpad, this function + * gets the address of peer scratchpad from + * peer_ntb_epc->epf_ntb_bar[bar_config]. 
+ */ +static void epf_ntb_peer_spad_bar_clear(struct epf_ntb_epc *ntb_epc) +{ + struct pci_epf_bar *epf_bar; + enum pci_barno barno; + struct pci_epc *epc; + u8 func_no; + + epc = ntb_epc->epc; + func_no = ntb_epc->func_no; + barno = ntb_epc->epf_ntb_bar[bar_peer_spad]; + epf_bar = &ntb_epc->epf_bar[barno]; + pci_epc_clear_bar(epc, func_no, epf_bar); +} + +/** + * epf_ntb_peer_spad_bar_set() - set peer scratchpad bar + * @ntb: ntb device that facilitates communication between host1 and host2 + * + *+-----------------+------->+------------------+ +-----------------+ + *| bar0 | | config region | | bar0 | + *+-----------------+----+ +------------------+<-------+-----------------+ + *| bar1 | | |scratchpad region | | bar1 | + *+-----------------+ +-->+------------------+<-------+-----------------+ + *| bar2 | local memory | bar2 | + *+-----------------+ +-----------------+ + *| bar3 | | bar3 | + *+-----------------+ +-----------------+ + *| bar4 | | bar4 | + *+-----------------+ +-----------------+ + *| bar5 | | bar5 | + *+-----------------+ +-----------------+ + * ep controller 1 ep controller 2 + * + * set bar1 of ep controller 2 which contains the host2's peer scratchpad + * region. while bar1 is the default peer scratchpad bar, an ntb could have + * other bars for peer scratchpad (because of 64-bit bars or reserved bars). + * this function can get the exact bar used for peer scratchpad from + * epf_ntb_bar[bar_peer_spad]. + * + * since host2's peer scratchpad is also host1's self scratchpad, this function + * gets the address of peer scratchpad from + * peer_ntb_epc->epf_ntb_bar[bar_config]. 
+ */ +static int epf_ntb_peer_spad_bar_set(struct epf_ntb *ntb, + enum pci_epc_interface_type type) +{ + struct epf_ntb_epc *peer_ntb_epc, *ntb_epc; + struct pci_epf_bar *peer_epf_bar, *epf_bar; + enum pci_barno peer_barno, barno; + u32 peer_spad_offset; + struct pci_epc *epc; + struct device *dev; + u8 func_no; + int ret; + + dev = &ntb->epf->dev; + + peer_ntb_epc = ntb->epc[!type]; + peer_barno = peer_ntb_epc->epf_ntb_bar[bar_config]; + peer_epf_bar = &peer_ntb_epc->epf_bar[peer_barno]; + + ntb_epc = ntb->epc[type]; + barno = ntb_epc->epf_ntb_bar[bar_peer_spad]; + epf_bar = &ntb_epc->epf_bar[barno]; + func_no = ntb_epc->func_no; + epc = ntb_epc->epc; + + peer_spad_offset = peer_ntb_epc->reg->spad_offset; + epf_bar->phys_addr = peer_epf_bar->phys_addr + peer_spad_offset; + epf_bar->size = peer_ntb_epc->spad_size; + epf_bar->barno = barno; + epf_bar->flags = pci_base_address_mem_type_32; + + ret = pci_epc_set_bar(epc, func_no, epf_bar); + if (ret) { + dev_err(dev, "%s intf: peer spad bar set failed ", + pci_epc_interface_string(type)); + return ret; + } + + return 0; +} + +/** + * epf_ntb_config_sspad_bar_clear() - clear config + self scratchpad bar + * @ntb: ntb device that facilitates communication between host1 and host2 + * + * +-----------------+------->+------------------+ +-----------------+ + * | bar0 | | config region | | bar0 | + * +-----------------+----+ +------------------+<-------+-----------------+ + * | bar1 | | |scratchpad region | | bar1 | + * +-----------------+ +-->+------------------+<-------+-----------------+ + * | bar2 | local memory | bar2 | + * +-----------------+ +-----------------+ + * | bar3 | | bar3 | + * +-----------------+ +-----------------+ + * | bar4 | | bar4 | + * +-----------------+ +-----------------+ + * | bar5 | | bar5 | + * +-----------------+ +-----------------+ + * ep controller 1 ep controller 2 + * + * clear bar0 of ep controller 1 which contains the host1's config and + * self scratchpad region (removes inbound atu 
configuration). while bar0 is + * the default self scratchpad bar, an ntb could have other bars for self + * scratchpad (because of reserved bars). this function can get the exact bar + * used for self scratchpad from epf_ntb_bar[bar_config]. + * + * please note the self scratchpad region and config region is combined to + * a single region and mapped using the same bar. also note host2's peer + * scratchpad is host1's self scratchpad. + */ +static void epf_ntb_config_sspad_bar_clear(struct epf_ntb_epc *ntb_epc) +{ + struct pci_epf_bar *epf_bar; + enum pci_barno barno; + struct pci_epc *epc; + u8 func_no; + + epc = ntb_epc->epc; + func_no = ntb_epc->func_no; + barno = ntb_epc->epf_ntb_bar[bar_config]; + epf_bar = &ntb_epc->epf_bar[barno]; + pci_epc_clear_bar(epc, func_no, epf_bar); +} + +/** + * epf_ntb_config_sspad_bar_set() - set config + self scratchpad bar + * @ntb: ntb device that facilitates communication between host1 and host2 + * + * +-----------------+------->+------------------+ +-----------------+ + * | bar0 | | config region | | bar0 | + * +-----------------+----+ +------------------+<-------+-----------------+ + * | bar1 | | |scratchpad region | | bar1 | + * +-----------------+ +-->+------------------+<-------+-----------------+ + * | bar2 | local memory | bar2 | + * +-----------------+ +-----------------+ + * | bar3 | | bar3 | + * +-----------------+ +-----------------+ + * | bar4 | | bar4 | + * +-----------------+ +-----------------+ + * | bar5 | | bar5 | + * +-----------------+ +-----------------+ + * ep controller 1 ep controller 2 + * + * map bar0 of ep controller 1 which contains the host1's config and + * self scratchpad region. while bar0 is the default self scratchpad bar, an + * ntb could have other bars for self scratchpad (because of reserved bars). + * this function can get the exact bar used for self scratchpad from + * epf_ntb_bar[bar_config]. 
+ *
+ * Please note the self scratchpad region and config region is combined to
+ * a single region and mapped using the same BAR. Also note HOST2's peer
+ * scratchpad is HOST1's self scratchpad.
+ */
+static int epf_ntb_config_sspad_bar_set(struct epf_ntb_epc *ntb_epc)
+{
+	struct pci_epf_bar *epf_bar;
+	enum pci_barno barno;
+	struct epf_ntb *ntb;
+	struct pci_epc *epc;
+	struct device *dev;
+	u8 func_no;
+	int ret;
+
+	ntb = ntb_epc->epf_ntb;
+	dev = &ntb->epf->dev;
+
+	epc = ntb_epc->epc;
+	func_no = ntb_epc->func_no;
+	barno = ntb_epc->epf_ntb_bar[BAR_CONFIG];
+	epf_bar = &ntb_epc->epf_bar[barno];
+
+	ret = pci_epc_set_bar(epc, func_no, epf_bar);
+	if (ret) {
+		dev_err(dev, "%s intf: Config/Status/SPAD BAR set failed\n",
+			pci_epc_interface_string(ntb_epc->type));
+		return ret;
+	}
+
+	return 0;
+}
+
+/**
+ * epf_ntb_config_spad_bar_free() - Free the physical memory associated with
+ *   config + scratchpad region
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ *
+ * +-----------------+------->+------------------+        +-----------------+
+ * |       BAR0      |        |  CONFIG REGION   |        |       BAR0      |
+ * +-----------------+----+   +------------------+<-------+-----------------+
+ * |       BAR1      |    |   |SCRATCHPAD REGION |        |       BAR1      |
+ * +-----------------+    +-->+------------------+<-------+-----------------+
+ * |       BAR2      |            Local Memory            |       BAR2      |
+ * +-----------------+                                    +-----------------+
+ * |       BAR3      |                                    |       BAR3      |
+ * +-----------------+                                    +-----------------+
+ * |       BAR4      |                                    |       BAR4      |
+ * +-----------------+                                    +-----------------+
+ * |       BAR5      |                                    |       BAR5      |
+ * +-----------------+                                    +-----------------+
+ *   EP CONTROLLER 1                                        EP CONTROLLER 2
+ *
+ * Free the Local Memory mentioned in the above diagram. After invoking this
+ * function, any of config + self scratchpad region of HOST1 or peer scratchpad
+ * region of HOST2 should not be accessed.
+ */ +static void epf_ntb_config_spad_bar_free(struct epf_ntb *ntb) +{ + enum pci_epc_interface_type type; + struct epf_ntb_epc *ntb_epc; + enum pci_barno barno; + struct pci_epf *epf; + + epf = ntb->epf; + for (type = primary_interface; type <= secondary_interface; type++) { + ntb_epc = ntb->epc[type]; + barno = ntb_epc->epf_ntb_bar[bar_config]; + if (ntb_epc->reg) + pci_epf_free_space(epf, ntb_epc->reg, barno, type); + } +} + +/** + * epf_ntb_config_spad_bar_alloc() - allocate memory for config + scratchpad + * region + * @ntb: ntb device that facilitates communication between host1 and host2 + * @type: primary interface or secondary interface + * + * +-----------------+------->+------------------+ +-----------------+ + * | bar0 | | config region | | bar0 | + * +-----------------+----+ +------------------+<-------+-----------------+ + * | bar1 | | |scratchpad region | | bar1 | + * +-----------------+ +-->+------------------+<-------+-----------------+ + * | bar2 | local memory | bar2 | + * +-----------------+ +-----------------+ + * | bar3 | | bar3 | + * +-----------------+ +-----------------+ + * | bar4 | | bar4 | + * +-----------------+ +-----------------+ + * | bar5 | | bar5 | + * +-----------------+ +-----------------+ + * ep controller 1 ep controller 2 + * + * allocate the local memory mentioned in the above diagram. the size of + * config region is sizeof(struct epf_ntb_ctrl) and size of scratchpad region + * is obtained from "spad-count" configfs entry. + * + * the size of both config region and scratchpad region has to be aligned, + * since the scratchpad region will also be mapped as peer scratchpad of + * other host using a separate bar. 
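The alignment rules described above (round each sub-region to the controller's alignment if it has one, otherwise to a power of two, and keep the scratchpad offset aligned to its own size) can be condensed into a small userspace sketch. This is an illustration of the sizing arithmetic only, with hypothetical helper names, not the kernel function itself:

```c
#include <assert.h>
#include <stddef.h>

#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((size_t)(a) - 1))

static size_t round_pow2(size_t x)
{
	size_t r = 1;

	while (r < x)
		r <<= 1;
	return r;
}

/* Hypothetical reduction of the config + scratchpad sizing: ctrl_size bytes
 * of control region plus spad_count 32-bit scratchpads, returned as the
 * total BAR size after the alignment rules are applied. */
static size_t config_spad_size(size_t ctrl_size, unsigned int spad_count,
			       size_t align)
{
	size_t spad_size = spad_count * 4;

	if (!align) {
		ctrl_size = round_pow2(ctrl_size);
		spad_size = round_pow2(spad_size);
	} else {
		ctrl_size = ALIGN_UP(ctrl_size, align);
		spad_size = ALIGN_UP(spad_size, align);
	}

	/* keep the scratchpad offset aligned to the scratchpad size */
	if (spad_size > ctrl_size)
		ctrl_size = spad_size;

	return ctrl_size + spad_size;
}
```

For example, with no alignment requirement, a 100-byte control region and 64 scratchpads rounds to 256 + 256 bytes, because the 256-byte scratchpad region forces the control region up to match it.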
+ */ +static int epf_ntb_config_spad_bar_alloc(struct epf_ntb *ntb, + enum pci_epc_interface_type type) +{ + const struct pci_epc_features *peer_epc_features, *epc_features; + struct epf_ntb_epc *peer_ntb_epc, *ntb_epc; + size_t msix_table_size, pba_size, align; + enum pci_barno peer_barno, barno; + struct epf_ntb_ctrl *ctrl; + u32 spad_size, ctrl_size; + u64 size, peer_size; + struct pci_epf *epf; + struct device *dev; + bool msix_capable; + u32 spad_count; + void *base; + + epf = ntb->epf; + dev = &epf->dev; + ntb_epc = ntb->epc[type]; + + epc_features = ntb_epc->epc_features; + barno = ntb_epc->epf_ntb_bar[bar_config]; + size = epc_features->bar_fixed_size[barno]; + align = epc_features->align; + + peer_ntb_epc = ntb->epc[!type]; + peer_epc_features = peer_ntb_epc->epc_features; + peer_barno = ntb_epc->epf_ntb_bar[bar_peer_spad]; + peer_size = peer_epc_features->bar_fixed_size[peer_barno]; + + /* check if epc_features is populated incorrectly */ + if ((!is_aligned(size, align))) + return -einval; + + spad_count = ntb->spad_count; + + ctrl_size = sizeof(struct epf_ntb_ctrl); + spad_size = spad_count * 4; + + msix_capable = epc_features->msix_capable; + if (msix_capable) { + msix_table_size = pci_msix_entry_size * ntb->db_count; + ctrl_size = align(ctrl_size, 8); + ntb_epc->msix_table_offset = ctrl_size; + ntb_epc->msix_bar = barno; + /* align to qword or 8 bytes */ + pba_size = align(div_round_up(ntb->db_count, 8), 8); + ctrl_size = ctrl_size + msix_table_size + pba_size; + } + + if (!align) { + ctrl_size = roundup_pow_of_two(ctrl_size); + spad_size = roundup_pow_of_two(spad_size); + } else { + ctrl_size = align(ctrl_size, align); + spad_size = align(spad_size, align); + } + + if (peer_size) { + if (peer_size < spad_size) + spad_count = peer_size / 4; + spad_size = peer_size; + } + + /* + * in order to make sure spad offset is aligned to its size, + * expand control region size to the size of spad if spad size + * is greater than control region size. 
+ */ + if (spad_size > ctrl_size) + ctrl_size = spad_size; + + if (!size) + size = ctrl_size + spad_size; + else if (size < ctrl_size + spad_size) + return -einval; + + base = pci_epf_alloc_space(epf, size, barno, align, type); + if (!base) { + dev_err(dev, "%s intf: config/status/spad alloc region fail ", + pci_epc_interface_string(type)); + return -enomem; + } + + ntb_epc->reg = base; + + ctrl = ntb_epc->reg; + ctrl->spad_offset = ctrl_size; + ctrl->spad_count = spad_count; + ctrl->num_mws = ntb->num_mws; + ctrl->db_entry_size = align ? align : 4; + ntb_epc->spad_size = spad_size; + + return 0; +} + +/** + * epf_ntb_config_spad_bar_alloc_interface() - allocate memory for config + + * scratchpad region for each of primary and secondary interface + * @ntb: ntb device that facilitates communication between host1 and host2 + * + * wrapper for epf_ntb_config_spad_bar_alloc() which allocates memory for + * config + scratchpad region for a specific interface + */ +static int epf_ntb_config_spad_bar_alloc_interface(struct epf_ntb *ntb) +{ + enum pci_epc_interface_type type; + struct device *dev; + int ret; + + dev = &ntb->epf->dev; + + for (type = primary_interface; type <= secondary_interface; type++) { + ret = epf_ntb_config_spad_bar_alloc(ntb, type); + if (ret) { + dev_err(dev, "%s intf: config/spad bar alloc failed ", + pci_epc_interface_string(type)); + return ret; + } + } + + return 0; +} + +/** + * epf_ntb_free_peer_mem() - free memory allocated in peers outbound address + * space + * @ntb_epc: epc associated with one of the host which holds peers outbound + * address regions + * + * +-----------------+ +---->+----------------+-----------+-----------------+ + * | bar0 | | | doorbell 1 +-----------> msi|x address 1 | + * +-----------------+ | +----------------+ +-----------------+ + * | bar1 | | | doorbell 2 +---------+ | | + * +-----------------+----+ +----------------+ | | | + * | bar2 | | doorbell 3 +-------+ | +-----------------+ + * +-----------------+----+ 
+----------------+ | +-> msi|x address 2 | + * | bar3 | | | doorbell 4 +-----+ | +-----------------+ + * +-----------------+ | |----------------+ | | | | + * | bar4 | | | | | | +-----------------+ + * +-----------------+ | | mw1 +---+ | +-->+ msi|x address 3|| + * | bar5 | | | | | | +-----------------+ + * +-----------------+ +---->-----------------+ | | | | + * ep controller 1 | | | | +-----------------+ + * | | | +---->+ msi|x address 4 | + * +----------------+ | +-----------------+ + * (a) ep controller 2 | | | + * (ob space) | | | + * +-------> mw1 | + * | | + * | | + * (b) +-----------------+ + * | | + * | | + * | | + * | | + * | | + * +-----------------+ + * pci address space + * (managed by host2) + * + * free memory allocated in ep controller 2 (ob space) in the above diagram. + * it'll free doorbell 1, doorbell 2, doorbell 3, doorbell 4, mw1 (and mw2, mw3, + * mw4). + */ +static void epf_ntb_free_peer_mem(struct epf_ntb_epc *ntb_epc) +{ + struct pci_epf_bar *epf_bar; + void __iomem *mw_addr; + phys_addr_t phys_addr; + enum epf_ntb_bar bar; + enum pci_barno barno; + struct pci_epc *epc; + size_t size; + + epc = ntb_epc->epc; + + for (bar = bar_db_mw1; bar < bar_mw4; bar++) { + barno = ntb_epc->epf_ntb_bar[bar]; + mw_addr = ntb_epc->mw_addr[barno]; + epf_bar = &ntb_epc->epf_bar[barno]; + phys_addr = epf_bar->phys_addr; + size = epf_bar->size; + if (mw_addr) { + pci_epc_mem_free_addr(epc, phys_addr, mw_addr, size); + ntb_epc->mw_addr[barno] = null; + } + } +} + +/** + * epf_ntb_db_mw_bar_clear() - clear doorbell and memory bar + * @ntb_epc: epc associated with one of the host which holds peer's outbound + * address + * + * +-----------------+ +---->+----------------+-----------+-----------------+ + * | bar0 | | | doorbell 1 +-----------> msi|x address 1 | + * +-----------------+ | +----------------+ +-----------------+ + * | bar1 | | | doorbell 2 +---------+ | | + * +-----------------+----+ +----------------+ | | | + * | bar2 | | doorbell 3 +-------+ | 
+-----------------+ + * +-----------------+----+ +----------------+ | +-> msi|x address 2 | + * | bar3 | | | doorbell 4 +-----+ | +-----------------+ + * +-----------------+ | |----------------+ | | | | + * | bar4 | | | | | | +-----------------+ + * +-----------------+ | | mw1 +---+ | +-->+ msi|x address 3|| + * | bar5 | | | | | | +-----------------+ + * +-----------------+ +---->-----------------+ | | | | + * ep controller 1 | | | | +-----------------+ + * | | | +---->+ msi|x address 4 | + * +----------------+ | +-----------------+ + * (a) ep controller 2 | | | + * (ob space) | | | + * +-------> mw1 | + * | | + * | | + * (b) +-----------------+ + * | | + * | | + * | | + * | | + * | | + * +-----------------+ + * pci address space + * (managed by host2) + * + * clear doorbell and memory bars (remove inbound atu configuration). in the above + * diagram it clears bar2 to bar5 of ep controller 1 (doorbell bar, mw1 bar, mw2 + * bar, mw3 bar and mw4 bar). + */ +static void epf_ntb_db_mw_bar_clear(struct epf_ntb_epc *ntb_epc) +{ + struct pci_epf_bar *epf_bar; + enum epf_ntb_bar bar; + enum pci_barno barno; + struct pci_epc *epc; + u8 func_no; + + epc = ntb_epc->epc; + + func_no = ntb_epc->func_no; + + for (bar = bar_db_mw1; bar < bar_mw4; bar++) { + barno = ntb_epc->epf_ntb_bar[bar]; + epf_bar = &ntb_epc->epf_bar[barno]; + pci_epc_clear_bar(epc, func_no, epf_bar); + } +} + +/** + * epf_ntb_db_mw_bar_cleanup() - clear doorbell/memory bar and free memory + * allocated in peers outbound address space + * @ntb: ntb device that facilitates communication between host1 and host2 + * @type: primary interface or secondary interface + * + * wrapper for epf_ntb_db_mw_bar_clear() to clear host1's bar and + * epf_ntb_free_peer_mem() which frees up host2 outbound memory. 
+ */
+static void epf_ntb_db_mw_bar_cleanup(struct epf_ntb *ntb,
+				      enum pci_epc_interface_type type)
+{
+	struct epf_ntb_epc *peer_ntb_epc, *ntb_epc;
+
+	ntb_epc = ntb->epc[type];
+	peer_ntb_epc = ntb->epc[!type];
+
+	epf_ntb_db_mw_bar_clear(ntb_epc);
+	epf_ntb_free_peer_mem(peer_ntb_epc);
+}
+
+/**
+ * epf_ntb_configure_interrupt() - Configure MSI/MSI-X capability
+ * @ntb: NTB device that facilitates communication between HOST1 and HOST2
+ * @type: PRIMARY interface or SECONDARY interface
+ *
+ * Configure MSI/MSI-X capability for each interface with number of
+ * interrupts equal to "db_count" configfs entry.
+ */
+static int epf_ntb_configure_interrupt(struct epf_ntb *ntb,
+				       enum pci_epc_interface_type type)
+{
+	const struct pci_epc_features *epc_features;
+	bool msix_capable, msi_capable;
+	struct epf_ntb_epc *ntb_epc;
+	struct pci_epc *epc;
+	struct device *dev;
+	u32 db_count;
+	u8 func_no;
+	int ret;
+
+	ntb_epc = ntb->epc[type];
+	dev = &ntb->epf->dev;
+
+	epc_features = ntb_epc->epc_features;
+	msix_capable = epc_features->msix_capable;
+	msi_capable = epc_features->msi_capable;
+
+	if (!(msix_capable || msi_capable)) {
+		dev_err(dev, "MSI or MSI-X is required for doorbell\n");
+		return -EINVAL;
+	}
+
+	func_no = ntb_epc->func_no;
+
+	db_count = ntb->db_count;
+	if (db_count > MAX_DB_COUNT) {
+		dev_err(dev, "DB count cannot be more than %d\n", MAX_DB_COUNT);
+		return -EINVAL;
+	}
+
+	ntb->db_count = db_count;
+	epc = ntb_epc->epc;
+
+	if (msi_capable) {
+		ret = pci_epc_set_msi(epc, func_no, db_count);
+		if (ret) {
+			dev_err(dev, "%s intf: MSI configuration failed\n",
+				pci_epc_interface_string(type));
+			return ret;
+		}
+	}
+
+	if (msix_capable) {
+		ret = pci_epc_set_msix(epc, func_no, db_count,
+				       ntb_epc->msix_bar,
+				       ntb_epc->msix_table_offset);
+		if (ret) {
+			dev_err(dev, "MSI-X configuration failed\n");
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * epf_ntb_alloc_peer_mem() - Allocate memory in peer's outbound address space
+ * @ntb_epc: EPC associated
with one of the host whose bar holds peer's outbound + * address + * @bar: bar of @ntb_epc in for which memory has to be allocated (could be + * bar_db_mw1, bar_mw2, bar_mw3, bar_mw4) + * @peer_ntb_epc: epc associated with host whose outbound address space is + * used by @ntb_epc + * @size: size of the address region that has to be allocated in peers ob space + * + * + * +-----------------+ +---->+----------------+-----------+-----------------+ + * | bar0 | | | doorbell 1 +-----------> msi|x address 1 | + * +-----------------+ | +----------------+ +-----------------+ + * | bar1 | | | doorbell 2 +---------+ | | + * +-----------------+----+ +----------------+ | | | + * | bar2 | | doorbell 3 +-------+ | +-----------------+ + * +-----------------+----+ +----------------+ | +-> msi|x address 2 | + * | bar3 | | | doorbell 4 +-----+ | +-----------------+ + * +-----------------+ | |----------------+ | | | | + * | bar4 | | | | | | +-----------------+ + * +-----------------+ | | mw1 +---+ | +-->+ msi|x address 3|| + * | bar5 | | | | | | +-----------------+ + * +-----------------+ +---->-----------------+ | | | | + * ep controller 1 | | | | +-----------------+ + * | | | +---->+ msi|x address 4 | + * +----------------+ | +-----------------+ + * (a) ep controller 2 | | | + * (ob space) | | | + * +-------> mw1 | + * | | + * | | + * (b) +-----------------+ + * | | + * | | + * | | + * | | + * | | + * +-----------------+ + * pci address space + * (managed by host2) + * + * allocate memory in ob space of ep controller 2 in the above diagram. allocate + * for doorbell 1, doorbell 2, doorbell 3, doorbell 4, mw1 (and mw2, mw3, mw4). 
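The outbound allocation below enforces a 128-byte floor and then rounds the size up, either to the controller's alignment or to a power of two. A self-contained sketch of just that rounding (hypothetical names, assuming the same rules as the function that follows):

```c
#include <assert.h>
#include <stddef.h>

#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((size_t)(a) - 1))

static size_t round_pow2(size_t x)
{
	size_t r = 1;

	while (r < x)
		r <<= 1;
	return r;
}

/* Hypothetical mirror of the rounding done before pci_epc_mem_alloc_addr():
 * a 128-byte minimum, then the controller's alignment if it has one,
 * otherwise the next power of two. */
static size_t peer_mem_size(size_t size, size_t align)
{
	if (size < 128)
		size = 128;
	if (align)
		return ALIGN_UP(size, align);
	return round_pow2(size);
}
```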
+ */ +static int epf_ntb_alloc_peer_mem(struct device *dev, + struct epf_ntb_epc *ntb_epc, + enum epf_ntb_bar bar, + struct epf_ntb_epc *peer_ntb_epc, + size_t size) +{ + const struct pci_epc_features *epc_features; + struct pci_epf_bar *epf_bar; + struct pci_epc *peer_epc; + phys_addr_t phys_addr; + void __iomem *mw_addr; + enum pci_barno barno; + size_t align; + + epc_features = ntb_epc->epc_features; + align = epc_features->align; + + if (size < 128) + size = 128; + + if (align) + size = align(size, align); + else + size = roundup_pow_of_two(size); + + peer_epc = peer_ntb_epc->epc; + mw_addr = pci_epc_mem_alloc_addr(peer_epc, &phys_addr, size); + if (!mw_addr) { + dev_err(dev, "%s intf: failed to allocate ob address ", + pci_epc_interface_string(peer_ntb_epc->type)); + return -enomem; + } + + barno = ntb_epc->epf_ntb_bar[bar]; + epf_bar = &ntb_epc->epf_bar[barno]; + ntb_epc->mw_addr[barno] = mw_addr; + + epf_bar->phys_addr = phys_addr; + epf_bar->size = size; + epf_bar->barno = barno; + epf_bar->flags = pci_base_address_mem_type_32; + + return 0; +} + +/** + * epf_ntb_db_mw_bar_init() - configure doorbell and memory window bars + * @ntb: ntb device that facilitates communication between host1 and host2 + * @type: primary interface or secondary interface + * + * wrapper for epf_ntb_alloc_peer_mem() and pci_epc_set_bar() that allocates + * memory in ob address space of host2 and configures bar of host1 + */ +static int epf_ntb_db_mw_bar_init(struct epf_ntb *ntb, + enum pci_epc_interface_type type) +{ + const struct pci_epc_features *epc_features; + struct epf_ntb_epc *peer_ntb_epc, *ntb_epc; + struct pci_epf_bar *epf_bar; + struct epf_ntb_ctrl *ctrl; + u32 num_mws, db_count; + enum epf_ntb_bar bar; + enum pci_barno barno; + struct pci_epc *epc; + struct device *dev; + size_t align; + int ret, i; + u8 func_no; + u64 size; + + ntb_epc = ntb->epc[type]; + peer_ntb_epc = ntb->epc[!type]; + + dev = &ntb->epf->dev; + epc_features = ntb_epc->epc_features; + align = 
epc_features->align; + func_no = ntb_epc->func_no; + epc = ntb_epc->epc; + num_mws = ntb->num_mws; + db_count = ntb->db_count; + + for (bar = bar_db_mw1, i = 0; i < num_mws; bar++, i++) { + if (bar == bar_db_mw1) { + align = align ? align : 4; + size = db_count * align; + size = align(size, ntb->mws_size[i]); + ctrl = ntb_epc->reg; + ctrl->mw1_offset = size; + size += ntb->mws_size[i]; + } else { + size = ntb->mws_size[i]; + } + + ret = epf_ntb_alloc_peer_mem(dev, ntb_epc, bar, + peer_ntb_epc, size); + if (ret) { + dev_err(dev, "%s intf: doorbell mem alloc failed ", + pci_epc_interface_string(type)); + goto err_alloc_peer_mem; + } + + barno = ntb_epc->epf_ntb_bar[bar]; + epf_bar = &ntb_epc->epf_bar[barno]; + + ret = pci_epc_set_bar(epc, func_no, epf_bar); + if (ret) { + dev_err(dev, "%s intf: doorbell bar set failed ", + pci_epc_interface_string(type)); + goto err_alloc_peer_mem; + } + } + + return 0; + +err_alloc_peer_mem: + epf_ntb_db_mw_bar_cleanup(ntb, type); + + return ret; +} + +/** + * epf_ntb_epc_destroy_interface() - cleanup ntb epc interface + * @ntb: ntb device that facilitates communication between host1 and host2 + * @type: primary interface or secondary interface + * + * unbind ntb function device from epc and relinquish reference to pci_epc + * for each of the interface. 
+ */ +static void epf_ntb_epc_destroy_interface(struct epf_ntb *ntb, + enum pci_epc_interface_type type) +{ + struct epf_ntb_epc *ntb_epc; + struct pci_epc *epc; + struct pci_epf *epf; + + if (type < 0) + return; + + epf = ntb->epf; + ntb_epc = ntb->epc[type]; + if (!ntb_epc) + return; + epc = ntb_epc->epc; + pci_epc_remove_epf(epc, epf, type); + pci_epc_put(epc); +} + +/** + * epf_ntb_epc_destroy() - cleanup ntb epc interface + * @ntb: ntb device that facilitates communication between host1 and host2 + * + * wrapper for epf_ntb_epc_destroy_interface() to cleanup all the ntb interfaces + */ +static void epf_ntb_epc_destroy(struct epf_ntb *ntb) +{ + enum pci_epc_interface_type type; + + for (type = primary_interface; type <= secondary_interface; type++) + epf_ntb_epc_destroy_interface(ntb, type); +} + +/** + * epf_ntb_epc_create_interface() - create and initialize ntb epc interface + * @ntb: ntb device that facilitates communication between host1 and host2 + * @epc: struct pci_epc to which a particular ntb interface should be associated + * @type: primary interface or secondary interface + * + * allocate memory for ntb epc interface and initialize it. 
+ */ +static int epf_ntb_epc_create_interface(struct epf_ntb *ntb, + struct pci_epc *epc, + enum pci_epc_interface_type type) +{ + const struct pci_epc_features *epc_features; + struct pci_epf_bar *epf_bar; + struct epf_ntb_epc *ntb_epc; + struct pci_epf *epf; + struct device *dev; + u8 func_no; + + dev = &ntb->epf->dev; + + ntb_epc = devm_kzalloc(dev, sizeof(*ntb_epc), gfp_kernel); + if (!ntb_epc) + return -enomem; + + epf = ntb->epf; + if (type == primary_interface) { + func_no = epf->func_no; + epf_bar = epf->bar; + } else { + func_no = epf->sec_epc_func_no; + epf_bar = epf->sec_epc_bar; + } + + ntb_epc->linkup = false; + ntb_epc->epc = epc; + ntb_epc->func_no = func_no; + ntb_epc->type = type; + ntb_epc->epf_bar = epf_bar; + ntb_epc->epf_ntb = ntb; + + epc_features = pci_epc_get_features(epc, func_no); + if (!epc_features) + return -einval; + ntb_epc->epc_features = epc_features; + + ntb->epc[type] = ntb_epc; + + return 0; +} + +/** + * epf_ntb_epc_create() - create and initialize ntb epc interface + * @ntb: ntb device that facilitates communication between host1 and host2 + * + * get a reference to epc device and bind ntb function device to that epc + * for each of the interface. 
it is also a wrapper to + * epf_ntb_epc_create_interface() to allocate memory for ntb epc interface + * and initialize it + */ +static int epf_ntb_epc_create(struct epf_ntb *ntb) +{ + struct pci_epf *epf; + struct device *dev; + int ret; + + epf = ntb->epf; + dev = &epf->dev; + + ret = epf_ntb_epc_create_interface(ntb, epf->epc, primary_interface); + if (ret) { + dev_err(dev, "primary intf: fail to create ntb epc "); + return ret; + } + + ret = epf_ntb_epc_create_interface(ntb, epf->sec_epc, + secondary_interface); + if (ret) { + dev_err(dev, "secondary intf: fail to create ntb epc "); + goto err_epc_create; + } + + return 0; + +err_epc_create: + epf_ntb_epc_destroy_interface(ntb, primary_interface); + + return ret; +} + +/** + * epf_ntb_init_epc_bar_interface() - identify bars to be used for each of + * the ntb constructs (scratchpad region, doorbell, memorywindow) + * @ntb: ntb device that facilitates communication between host1 and host2 + * @type: primary interface or secondary interface + * + * identify the free bars to be used for each of bar_config, bar_peer_spad, + * bar_db_mw1, bar_mw2, bar_mw3 and bar_mw4. 
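The free-BAR walk described here can be pictured as scanning past BARs that the controller has reserved, remembering that a 64-bit BAR also consumes the following BAR number. A rough userspace sketch in the spirit of `pci_epc_get_next_free_bar()` (the data layout here is invented for illustration, not the kernel's `epc_features` representation):

```c
#include <assert.h>

enum { NBARS = 6 };

/* Hypothetical scan: reserved_map[] marks BARs claimed by fixed features,
 * is_64bit[] marks 64-bit BARs (which also swallow barno + 1).
 * Returns the first usable BAR at or after barno, or -1 if none is left. */
static int next_free_bar(const int reserved_map[NBARS],
			 const int is_64bit[NBARS], int barno)
{
	while (barno < NBARS) {
		if (!reserved_map[barno])
			return barno;
		/* a reserved 64-bit BAR consumes two BAR numbers */
		barno += is_64bit[barno] ? 2 : 1;
	}
	return -1;	/* analogous to a negative errno in the kernel */
}
```

This is why the optional memory-window loop in the driver can run out of BARs gracefully: a negative return simply caps `num_mws` at however many windows found a BAR.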
+ */ +static int epf_ntb_init_epc_bar_interface(struct epf_ntb *ntb, + enum pci_epc_interface_type type) +{ + const struct pci_epc_features *epc_features; + struct epf_ntb_epc *ntb_epc; + enum pci_barno barno; + enum epf_ntb_bar bar; + struct device *dev; + u32 num_mws; + int i; + + barno = bar_0; + ntb_epc = ntb->epc[type]; + num_mws = ntb->num_mws; + dev = &ntb->epf->dev; + epc_features = ntb_epc->epc_features; + + /* these are required bars which are mandatory for ntb functionality */ + for (bar = bar_config; bar <= bar_db_mw1; bar++, barno++) { + barno = pci_epc_get_next_free_bar(epc_features, barno); + if (barno < 0) { + dev_err(dev, "%s intf: fail to get ntb function bar ", + pci_epc_interface_string(type)); + return barno; + } + ntb_epc->epf_ntb_bar[bar] = barno; + } + + /* these are optional bars which don't impact ntb functionality */ + for (bar = bar_mw2, i = 1; i < num_mws; bar++, barno++, i++) { + barno = pci_epc_get_next_free_bar(epc_features, barno); + if (barno < 0) { + ntb->num_mws = i; + dev_dbg(dev, "bar not available for > mw%d ", i + 1); + } + ntb_epc->epf_ntb_bar[bar] = barno; + } + + return 0; +} + +/** + * epf_ntb_init_epc_bar() - identify bars to be used for each of the ntb + * constructs (scratchpad region, doorbell, memorywindow) + * @ntb: ntb device that facilitates communication between host1 and host2 + * @type: primary interface or secondary interface + * + * wrapper to epf_ntb_init_epc_bar_interface() to identify the free bars + * to be used for each of bar_config, bar_peer_spad, bar_db_mw1, bar_mw2, + * bar_mw3 and bar_mw4 for all the interfaces. 
+ */ +static int epf_ntb_init_epc_bar(struct epf_ntb *ntb) +{ + enum pci_epc_interface_type type; + struct device *dev; + int ret; + + dev = &ntb->epf->dev; + for (type = primary_interface; type <= secondary_interface; type++) { + ret = epf_ntb_init_epc_bar_interface(ntb, type); + if (ret) { + dev_err(dev, "fail to init epc bar for %s interface ", + pci_epc_interface_string(type)); + return ret; + } + } + + return 0; +} + +/** + * epf_ntb_epc_init_interface() - initialize ntb interface + * @ntb: ntb device that facilitates communication between host1 and host2 + * @type: primary interface or secondary interface + * + * wrapper to initialize a particular epc interface and start the workqueue + * to check for commands from host. this function will write to the + * ep controller hw for configuring it. + */ +static int epf_ntb_epc_init_interface(struct epf_ntb *ntb, + enum pci_epc_interface_type type) +{ + struct epf_ntb_epc *ntb_epc; + struct pci_epc *epc; + struct pci_epf *epf; + struct device *dev; + u8 func_no; + int ret; + + ntb_epc = ntb->epc[type]; + epf = ntb->epf; + dev = &epf->dev; + epc = ntb_epc->epc; + func_no = ntb_epc->func_no; + + ret = epf_ntb_config_sspad_bar_set(ntb->epc[type]); + if (ret) { + dev_err(dev, "%s intf: config/self spad bar init failed ", + pci_epc_interface_string(type)); + return ret; + } + + ret = epf_ntb_peer_spad_bar_set(ntb, type); + if (ret) { + dev_err(dev, "%s intf: peer spad bar init failed ", + pci_epc_interface_string(type)); + goto err_peer_spad_bar_init; + } + + ret = epf_ntb_configure_interrupt(ntb, type); + if (ret) { + dev_err(dev, "%s intf: interrupt configuration failed ", + pci_epc_interface_string(type)); + goto err_peer_spad_bar_init; + } + + ret = epf_ntb_db_mw_bar_init(ntb, type); + if (ret) { + dev_err(dev, "%s intf: db/mw bar init failed ", + pci_epc_interface_string(type)); + goto err_db_mw_bar_init; + } + + ret = pci_epc_write_header(epc, func_no, epf->header); + if (ret) { + dev_err(dev, "%s intf: 
configuration header write failed ", + pci_epc_interface_string(type)); + goto err_write_header; + } + + init_delayed_work(&ntb->epc[type]->cmd_handler, epf_ntb_cmd_handler); + queue_work(kpcintb_workqueue, &ntb->epc[type]->cmd_handler.work); + + return 0; + +err_write_header: + epf_ntb_db_mw_bar_cleanup(ntb, type); + +err_db_mw_bar_init: + epf_ntb_peer_spad_bar_clear(ntb->epc[type]); + +err_peer_spad_bar_init: + epf_ntb_config_sspad_bar_clear(ntb->epc[type]); + + return ret; +} + +/** + * epf_ntb_epc_cleanup_interface() - cleanup ntb interface + * @ntb: ntb device that facilitates communication between host1 and host2 + * @type: primary interface or secondary interface + * + * wrapper to cleanup a particular ntb interface. + */ +static void epf_ntb_epc_cleanup_interface(struct epf_ntb *ntb, + enum pci_epc_interface_type type) +{ + struct epf_ntb_epc *ntb_epc; + + if (type < 0) + return; + + ntb_epc = ntb->epc[type]; + cancel_delayed_work(&ntb_epc->cmd_handler); + epf_ntb_db_mw_bar_cleanup(ntb, type); + epf_ntb_peer_spad_bar_clear(ntb_epc); + epf_ntb_config_sspad_bar_clear(ntb_epc); +} + +/** + * epf_ntb_epc_cleanup() - cleanup all ntb interfaces + * @ntb: ntb device that facilitates communication between host1 and host2 + * + * wrapper to cleanup all ntb interfaces. + */ +static void epf_ntb_epc_cleanup(struct epf_ntb *ntb) +{ + enum pci_epc_interface_type type; + + for (type = primary_interface; type <= secondary_interface; type++) + epf_ntb_epc_cleanup_interface(ntb, type); +} + +/** + * epf_ntb_epc_init() - initialize all ntb interfaces + * @ntb: ntb device that facilitates communication between host1 and host2 + * + * wrapper to initialize all ntb interface and start the workqueue + * to check for commands from host. 
+ */ +static int epf_ntb_epc_init(struct epf_ntb *ntb) +{ + enum pci_epc_interface_type type; + struct device *dev; + int ret; + + dev = &ntb->epf->dev; + + for (type = primary_interface; type <= secondary_interface; type++) { + ret = epf_ntb_epc_init_interface(ntb, type); + if (ret) { + dev_err(dev, "%s intf: failed to initialize ", + pci_epc_interface_string(type)); + goto err_init_type; + } + } + + return 0; + +err_init_type: + epf_ntb_epc_cleanup_interface(ntb, type - 1); + + return ret; +} + +/** + * epf_ntb_bind() - initialize endpoint controller to provide ntb functionality + * @epf: ntb endpoint function device + * + * initialize both the endpoint controllers associated with ntb function device. + * invoked when a primary interface or secondary interface is bound to epc + * device. this function will succeed only when epc is bound to both the + * interfaces. + */ +static int epf_ntb_bind(struct pci_epf *epf) +{ + struct epf_ntb *ntb = epf_get_drvdata(epf); + struct device *dev = &epf->dev; + int ret; + + if (!epf->epc) { + dev_dbg(dev, "primary epc interface not yet bound "); + return 0; + } + + if (!epf->sec_epc) { + dev_dbg(dev, "secondary epc interface not yet bound "); + return 0; + } + + ret = epf_ntb_epc_create(ntb); + if (ret) { + dev_err(dev, "failed to create ntb epc "); + return ret; + } + + ret = epf_ntb_init_epc_bar(ntb); + if (ret) { + dev_err(dev, "failed to create ntb epc "); + goto err_bar_init; + } + + ret = epf_ntb_config_spad_bar_alloc_interface(ntb); + if (ret) { + dev_err(dev, "failed to allocate bar memory "); + goto err_bar_alloc; + } + + ret = epf_ntb_epc_init(ntb); + if (ret) { + dev_err(dev, "failed to initialize epc "); + goto err_bar_alloc; + } + + epf_set_drvdata(epf, ntb); + + return 0; + +err_bar_alloc: + epf_ntb_config_spad_bar_free(ntb); + +err_bar_init: + epf_ntb_epc_destroy(ntb); + + return ret; +} + +/** + * epf_ntb_unbind() - cleanup the initialization from epf_ntb_bind() + * @epf: ntb endpoint function device + * + * 
cleanup the initialization from epf_ntb_bind() + */ +static void epf_ntb_unbind(struct pci_epf *epf) +{ + struct epf_ntb *ntb = epf_get_drvdata(epf); + + epf_ntb_epc_cleanup(ntb); + epf_ntb_config_spad_bar_free(ntb); + epf_ntb_epc_destroy(ntb); +} + +#define epf_ntb_r(_name) \ +static ssize_t epf_ntb_##_name##_show(struct config_item *item, \ + char *page) \ +{ \ + struct config_group *group = to_config_group(item); \ + struct epf_ntb *ntb = to_epf_ntb(group); \ + \ + return sprintf(page, "%d ", ntb->_name); \ +} + +#define epf_ntb_w(_name) \ +static ssize_t epf_ntb_##_name##_store(struct config_item *item, \ + const char *page, size_t len) \ +{ \ + struct config_group *group = to_config_group(item); \ + struct epf_ntb *ntb = to_epf_ntb(group); \ + u32 val; \ + int ret; \ + \ + ret = kstrtou32(page, 0, &val); \ + if (ret) \ + return ret; \ + \ + ntb->_name = val; \ + \ + return len; \ +} + +#define epf_ntb_mw_r(_name) \ +static ssize_t epf_ntb_##_name##_show(struct config_item *item, \ + char *page) \ +{ \ + struct config_group *group = to_config_group(item); \ + struct epf_ntb *ntb = to_epf_ntb(group); \ + int win_no; \ + \ + sscanf(#_name, "mw%d", &win_no); \ + \ + return sprintf(page, "%lld ", ntb->mws_size[win_no - 1]); \ +} + +#define epf_ntb_mw_w(_name) \ +static ssize_t epf_ntb_##_name##_store(struct config_item *item, \ + const char *page, size_t len) \ +{ \ + struct config_group *group = to_config_group(item); \ + struct epf_ntb *ntb = to_epf_ntb(group); \ + struct device *dev = &ntb->epf->dev; \ + int win_no; \ + u64 val; \ + int ret; \ + \ + ret = kstrtou64(page, 0, &val); \ + if (ret) \ + return ret; \ + \ + if (sscanf(#_name, "mw%d", &win_no) != 1) \ + return -einval; \ + \ + if (ntb->num_mws < win_no) { \ + dev_err(dev, "invalid num_nws: %d value ", ntb->num_mws); \ + return -einval; \ + } \ + \ + ntb->mws_size[win_no - 1] = val; \ + \ + return len; \ +} + +static ssize_t epf_ntb_num_mws_store(struct config_item *item, + const char *page, size_t len) 
+{ + struct config_group *group = to_config_group(item); + struct epf_ntb *ntb = to_epf_ntb(group); + u32 val; + int ret; + + ret = kstrtou32(page, 0, &val); + if (ret) + return ret; + + if (val > max_mw) + return -einval; + + ntb->num_mws = val; + + return len; +} + +epf_ntb_r(spad_count) +epf_ntb_w(spad_count) +epf_ntb_r(db_count) +epf_ntb_w(db_count) +epf_ntb_r(num_mws) +epf_ntb_mw_r(mw1) +epf_ntb_mw_w(mw1) +epf_ntb_mw_r(mw2) +epf_ntb_mw_w(mw2) +epf_ntb_mw_r(mw3) +epf_ntb_mw_w(mw3) +epf_ntb_mw_r(mw4) +epf_ntb_mw_w(mw4) + +configfs_attr(epf_ntb_, spad_count); +configfs_attr(epf_ntb_, db_count); +configfs_attr(epf_ntb_, num_mws); +configfs_attr(epf_ntb_, mw1); +configfs_attr(epf_ntb_, mw2); +configfs_attr(epf_ntb_, mw3); +configfs_attr(epf_ntb_, mw4); + +static struct configfs_attribute *epf_ntb_attrs[] = { + &epf_ntb_attr_spad_count, + &epf_ntb_attr_db_count, + &epf_ntb_attr_num_mws, + &epf_ntb_attr_mw1, + &epf_ntb_attr_mw2, + &epf_ntb_attr_mw3, + &epf_ntb_attr_mw4, + null, +}; + +static const struct config_item_type ntb_group_type = { + .ct_attrs = epf_ntb_attrs, + .ct_owner = this_module, +}; + +/** + * epf_ntb_add_cfs() - add configfs directory specific to ntb + * @epf: ntb endpoint function device + * + * add configfs directory specific to ntb. this directory will hold + * ntb specific properties like db_count, spad_count, num_mws etc., + */ +static struct config_group *epf_ntb_add_cfs(struct pci_epf *epf, + struct config_group *group) +{ + struct epf_ntb *ntb = epf_get_drvdata(epf); + struct config_group *ntb_group = &ntb->group; + struct device *dev = &epf->dev; + + config_group_init_type_name(ntb_group, dev_name(dev), &ntb_group_type); + + return ntb_group; +} + +/** + * epf_ntb_probe() - probe ntb function driver + * @epf: ntb endpoint function device + * + * probe ntb function driver when endpoint function bus detects a ntb + * endpoint function. 
+ */ +static int epf_ntb_probe(struct pci_epf *epf) +{ + struct epf_ntb *ntb; + struct device *dev; + + dev = &epf->dev; + + ntb = devm_kzalloc(dev, sizeof(*ntb), gfp_kernel); + if (!ntb) + return -enomem; + + epf->header = &epf_ntb_header; + ntb->epf = epf; + epf_set_drvdata(epf, ntb); + + return 0; +} + +static struct pci_epf_ops epf_ntb_ops = { + .bind = epf_ntb_bind, + .unbind = epf_ntb_unbind, + .add_cfs = epf_ntb_add_cfs, +}; + +static const struct pci_epf_device_id epf_ntb_ids[] = { + { + .name = "pci_epf_ntb", + }, + {}, +}; + +static struct pci_epf_driver epf_ntb_driver = { + .driver.name = "pci_epf_ntb", + .probe = epf_ntb_probe, + .id_table = epf_ntb_ids, + .ops = &epf_ntb_ops, + .owner = this_module, +}; + +static int __init epf_ntb_init(void) +{ + int ret; + + kpcintb_workqueue = alloc_workqueue("kpcintb", wq_mem_reclaim | + wq_highpri, 0); + ret = pci_epf_register_driver(&epf_ntb_driver); + if (ret) { + destroy_workqueue(kpcintb_workqueue); + pr_err("failed to register pci epf ntb driver --> %d ", ret); + return ret; + } + + return 0; +} +module_init(epf_ntb_init); + +static void __exit epf_ntb_exit(void) +{ + pci_epf_unregister_driver(&epf_ntb_driver); + destroy_workqueue(kpcintb_workqueue); +} +module_exit(epf_ntb_exit); + +module_description("pci epf ntb driver"); +module_author("kishon vijay abraham i <kishon@ti.com>"); +module_license("gpl v2");
|
Non-Transparent Bridge (NTB)
|
8b821cf761503b80d0bd052f932adfe1bc1a0088
|
kishon vijay abraham i
|
drivers
|
pci
|
endpoint, functions
|
pci: add ti j721e device to pci ids
|
add ti j721e device to the pci id database. since this device has a configurable pcie endpoint, it could be used with different drivers.
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
implement ntb controller using multiple pci ep
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['h', 'c']
| 2
| 1
| 1
|
--- diff --git a/drivers/misc/pci_endpoint_test.c b/drivers/misc/pci_endpoint_test.c --- a/drivers/misc/pci_endpoint_test.c +++ b/drivers/misc/pci_endpoint_test.c -#define pci_device_id_ti_j721e 0xb00d diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h --- a/include/linux/pci_ids.h +++ b/include/linux/pci_ids.h +#define pci_device_id_ti_j721e 0xb00d
|
Non-Transparent Bridge (NTB)
|
599f86872f9ce8a0a0bd111a23442b18e8ee7059
|
kishon vijay abraham i
|
include
|
linux
| |
ntb: add support for epf pci non-transparent bridge
|
add support for epf pci non-transparent bridge (ntb) devices. this driver is platform independent and may be used by any platform that has multiple pci endpoint instances configured using the pci-epf-ntb driver. the driver connects to the standard ntb subsystem interface. the epf ntb device has a configurable number of memory windows (max 4), a configurable number of doorbells (max 32), and a configurable number of scratch-pad registers.
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add support for epf pci non-transparent bridge
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['kconfig', 'c', 'makefile']
| 5
| 762
| 0
|
--- diff --git a/drivers/ntb/hw/kconfig b/drivers/ntb/hw/kconfig --- a/drivers/ntb/hw/kconfig +++ b/drivers/ntb/hw/kconfig +source "drivers/ntb/hw/epf/kconfig" diff --git a/drivers/ntb/hw/makefile b/drivers/ntb/hw/makefile --- a/drivers/ntb/hw/makefile +++ b/drivers/ntb/hw/makefile +obj-$(config_ntb_epf) += epf/ diff --git a/drivers/ntb/hw/epf/kconfig b/drivers/ntb/hw/epf/kconfig --- /dev/null +++ b/drivers/ntb/hw/epf/kconfig +config ntb_epf + tristate "generic epf non-transparent bridge support" + depends on m + help + this driver supports epf ntb on configurable endpoint. + if unsure, say n. diff --git a/drivers/ntb/hw/epf/makefile b/drivers/ntb/hw/epf/makefile --- /dev/null +++ b/drivers/ntb/hw/epf/makefile +obj-$(config_ntb_epf) += ntb_hw_epf.o diff --git a/drivers/ntb/hw/epf/ntb_hw_epf.c b/drivers/ntb/hw/epf/ntb_hw_epf.c --- /dev/null +++ b/drivers/ntb/hw/epf/ntb_hw_epf.c +// spdx-license-identifier: gpl-2.0 +/** + * host side endpoint driver to implement non-transparent bridge functionality + * + * copyright (c) 2020 texas instruments + * author: kishon vijay abraham i <kishon@ti.com> + */ + +#include <linux/delay.h> +#include <linux/module.h> +#include <linux/pci.h> +#include <linux/slab.h> +#include <linux/ntb.h> + +#define ntb_epf_command 0x0 +#define cmd_configure_doorbell 1 +#define cmd_teardown_doorbell 2 +#define cmd_configure_mw 3 +#define cmd_teardown_mw 4 +#define cmd_link_up 5 +#define cmd_link_down 6 + +#define ntb_epf_argument 0x4 +#define msix_enable bit(16) + +#define ntb_epf_cmd_status 0x8 +#define command_status_ok 1 +#define command_status_error 2 + +#define ntb_epf_link_status 0x0a +#define link_status_up bit(0) + +#define ntb_epf_topology 0x0c +#define ntb_epf_lower_addr 0x10 +#define ntb_epf_upper_addr 0x14 +#define ntb_epf_lower_size 0x18 +#define ntb_epf_upper_size 0x1c +#define ntb_epf_mw_count 0x20 +#define ntb_epf_mw1_offset 0x24 +#define ntb_epf_spad_offset 0x28 +#define ntb_epf_spad_count 0x2c +#define ntb_epf_db_entry_size 0x30 
+#define ntb_epf_db_data(n) (0x34 + (n) * 4) +#define ntb_epf_db_offset(n) (0xb4 + (n) * 4) + +#define ntb_epf_min_db_count 3 +#define ntb_epf_max_db_count 31 +#define ntb_epf_mw_offset 2 + +#define ntb_epf_command_timeout 1000 /* 1 sec */ + +enum pci_barno { + bar_0, + bar_1, + bar_2, + bar_3, + bar_4, + bar_5, +}; + +struct ntb_epf_dev { + struct ntb_dev ntb; + struct device *dev; + /* mutex to protect providing commands to ntb epf */ + struct mutex cmd_lock; + + enum pci_barno ctrl_reg_bar; + enum pci_barno peer_spad_reg_bar; + enum pci_barno db_reg_bar; + + unsigned int mw_count; + unsigned int spad_count; + unsigned int db_count; + + void __iomem *ctrl_reg; + void __iomem *db_reg; + void __iomem *peer_spad_reg; + + unsigned int self_spad; + unsigned int peer_spad; + + int db_val; + u64 db_valid_mask; +}; + +#define ntb_ndev(__ntb) container_of(__ntb, struct ntb_epf_dev, ntb) + +struct ntb_epf_data { + /* bar that contains both control region and self spad region */ + enum pci_barno ctrl_reg_bar; + /* bar that contains peer spad region */ + enum pci_barno peer_spad_reg_bar; + /* bar that contains doorbell region and memory window '1' */ + enum pci_barno db_reg_bar; +}; + +static int ntb_epf_send_command(struct ntb_epf_dev *ndev, u32 command, + u32 argument) +{ + ktime_t timeout; + bool timedout; + int ret = 0; + u32 status; + + mutex_lock(&ndev->cmd_lock); + writel(argument, ndev->ctrl_reg + ntb_epf_argument); + writel(command, ndev->ctrl_reg + ntb_epf_command); + + timeout = ktime_add_ms(ktime_get(), ntb_epf_command_timeout); + while (1) { + timedout = ktime_after(ktime_get(), timeout); + status = readw(ndev->ctrl_reg + ntb_epf_cmd_status); + + if (status == command_status_error) { + ret = -einval; + break; + } + + if (status == command_status_ok) + break; + + if (warn_on(timedout)) { + ret = -etimedout; + break; + } + + usleep_range(5, 10); + } + + writew(0, ndev->ctrl_reg + ntb_epf_cmd_status); + mutex_unlock(&ndev->cmd_lock); + + return ret; +} + +static 
int ntb_epf_mw_to_bar(struct ntb_epf_dev *ndev, int idx) +{ + struct device *dev = ndev->dev; + + if (idx < 0 || idx > ndev->mw_count) { + dev_err(dev, "unsupported memory window index %d ", idx); + return -einval; + } + + return idx + 2; +} + +static int ntb_epf_mw_count(struct ntb_dev *ntb, int pidx) +{ + struct ntb_epf_dev *ndev = ntb_ndev(ntb); + struct device *dev = ndev->dev; + + if (pidx != ntb_def_peer_idx) { + dev_err(dev, "unsupported peer id %d ", pidx); + return -einval; + } + + return ndev->mw_count; +} + +static int ntb_epf_mw_get_align(struct ntb_dev *ntb, int pidx, int idx, + resource_size_t *addr_align, + resource_size_t *size_align, + resource_size_t *size_max) +{ + struct ntb_epf_dev *ndev = ntb_ndev(ntb); + struct device *dev = ndev->dev; + int bar; + + if (pidx != ntb_def_peer_idx) { + dev_err(dev, "unsupported peer id %d ", pidx); + return -einval; + } + + bar = ntb_epf_mw_to_bar(ndev, idx); + if (bar < 0) + return bar; + + if (addr_align) + *addr_align = sz_4k; + + if (size_align) + *size_align = 1; + + if (size_max) + *size_max = pci_resource_len(ndev->ntb.pdev, bar); + + return 0; +} + +static u64 ntb_epf_link_is_up(struct ntb_dev *ntb, + enum ntb_speed *speed, + enum ntb_width *width) +{ + struct ntb_epf_dev *ndev = ntb_ndev(ntb); + u32 status; + + status = readw(ndev->ctrl_reg + ntb_epf_link_status); + + return status & link_status_up; +} + +static u32 ntb_epf_spad_read(struct ntb_dev *ntb, int idx) +{ + struct ntb_epf_dev *ndev = ntb_ndev(ntb); + struct device *dev = ndev->dev; + u32 offset; + + if (idx < 0 || idx >= ndev->spad_count) { + dev_err(dev, "read: invalid scratchpad index %d ", idx); + return 0; + } + + offset = readl(ndev->ctrl_reg + ntb_epf_spad_offset); + offset += (idx << 2); + + return readl(ndev->ctrl_reg + offset); +} + +static int ntb_epf_spad_write(struct ntb_dev *ntb, + int idx, u32 val) +{ + struct ntb_epf_dev *ndev = ntb_ndev(ntb); + struct device *dev = ndev->dev; + u32 offset; + + if (idx < 0 || idx >= 
ndev->spad_count) { + dev_err(dev, "write: invalid scratchpad index %d ", idx); + return -einval; + } + + offset = readl(ndev->ctrl_reg + ntb_epf_spad_offset); + offset += (idx << 2); + writel(val, ndev->ctrl_reg + offset); + + return 0; +} + +static u32 ntb_epf_peer_spad_read(struct ntb_dev *ntb, int pidx, int idx) +{ + struct ntb_epf_dev *ndev = ntb_ndev(ntb); + struct device *dev = ndev->dev; + u32 offset; + + if (pidx != ntb_def_peer_idx) { + dev_err(dev, "unsupported peer id %d ", pidx); + return -einval; + } + + if (idx < 0 || idx >= ndev->spad_count) { + dev_err(dev, "write: invalid peer scratchpad index %d ", idx); + return -einval; + } + + offset = (idx << 2); + return readl(ndev->peer_spad_reg + offset); +} + +static int ntb_epf_peer_spad_write(struct ntb_dev *ntb, int pidx, + int idx, u32 val) +{ + struct ntb_epf_dev *ndev = ntb_ndev(ntb); + struct device *dev = ndev->dev; + u32 offset; + + if (pidx != ntb_def_peer_idx) { + dev_err(dev, "unsupported peer id %d ", pidx); + return -einval; + } + + if (idx < 0 || idx >= ndev->spad_count) { + dev_err(dev, "write: invalid peer scratchpad index %d ", idx); + return -einval; + } + + offset = (idx << 2); + writel(val, ndev->peer_spad_reg + offset); + + return 0; +} + +static int ntb_epf_link_enable(struct ntb_dev *ntb, + enum ntb_speed max_speed, + enum ntb_width max_width) +{ + struct ntb_epf_dev *ndev = ntb_ndev(ntb); + struct device *dev = ndev->dev; + int ret; + + ret = ntb_epf_send_command(ndev, cmd_link_up, 0); + if (ret) { + dev_err(dev, "fail to enable link "); + return ret; + } + + return 0; +} + +static int ntb_epf_link_disable(struct ntb_dev *ntb) +{ + struct ntb_epf_dev *ndev = ntb_ndev(ntb); + struct device *dev = ndev->dev; + int ret; + + ret = ntb_epf_send_command(ndev, cmd_link_down, 0); + if (ret) { + dev_err(dev, "fail to disable link "); + return ret; + } + + return 0; +} + +static irqreturn_t ntb_epf_vec_isr(int irq, void *dev) +{ + struct ntb_epf_dev *ndev = dev; + int irq_no; + + irq_no = 
irq - pci_irq_vector(ndev->ntb.pdev, 0); + ndev->db_val = irq_no + 1; + + if (irq_no == 0) + ntb_link_event(&ndev->ntb); + else + ntb_db_event(&ndev->ntb, irq_no); + + return irq_handled; +} + +static int ntb_epf_init_isr(struct ntb_epf_dev *ndev, int msi_min, int msi_max) +{ + struct pci_dev *pdev = ndev->ntb.pdev; + struct device *dev = ndev->dev; + u32 argument = msix_enable; + int irq; + int ret; + int i; + + irq = pci_alloc_irq_vectors(pdev, msi_min, msi_max, pci_irq_msix); + if (irq < 0) { + dev_dbg(dev, "failed to get msix interrupts "); + irq = pci_alloc_irq_vectors(pdev, msi_min, msi_max, + pci_irq_msi); + if (irq < 0) { + dev_err(dev, "failed to get msi interrupts "); + return irq; + } + argument &= ~msix_enable; + } + + for (i = 0; i < irq; i++) { + ret = request_irq(pci_irq_vector(pdev, i), ntb_epf_vec_isr, + 0, "ntb_epf", ndev); + if (ret) { + dev_err(dev, "failed to request irq "); + goto err_request_irq; + } + } + + ndev->db_count = irq - 1; + + ret = ntb_epf_send_command(ndev, cmd_configure_doorbell, + argument | irq); + if (ret) { + dev_err(dev, "failed to configure doorbell "); + goto err_configure_db; + } + + return 0; + +err_configure_db: + for (i = 0; i < ndev->db_count + 1; i++) + free_irq(pci_irq_vector(pdev, i), ndev); + +err_request_irq: + pci_free_irq_vectors(pdev); + + return ret; +} + +static int ntb_epf_peer_mw_count(struct ntb_dev *ntb) +{ + return ntb_ndev(ntb)->mw_count; +} + +static int ntb_epf_spad_count(struct ntb_dev *ntb) +{ + return ntb_ndev(ntb)->spad_count; +} + +static u64 ntb_epf_db_valid_mask(struct ntb_dev *ntb) +{ + return ntb_ndev(ntb)->db_valid_mask; +} + +static int ntb_epf_db_set_mask(struct ntb_dev *ntb, u64 db_bits) +{ + return 0; +} + +static int ntb_epf_mw_set_trans(struct ntb_dev *ntb, int pidx, int idx, + dma_addr_t addr, resource_size_t size) +{ + struct ntb_epf_dev *ndev = ntb_ndev(ntb); + struct device *dev = ndev->dev; + resource_size_t mw_size; + int bar; + + if (pidx != ntb_def_peer_idx) { + dev_err(dev, 
"unsupported peer id %d ", pidx); + return -einval; + } + + bar = idx + ntb_epf_mw_offset; + + mw_size = pci_resource_len(ntb->pdev, bar); + + if (size > mw_size) { + dev_err(dev, "size:%pa is greater than the mw size %pa ", + &size, &mw_size); + return -einval; + } + + writel(lower_32_bits(addr), ndev->ctrl_reg + ntb_epf_lower_addr); + writel(upper_32_bits(addr), ndev->ctrl_reg + ntb_epf_upper_addr); + writel(lower_32_bits(size), ndev->ctrl_reg + ntb_epf_lower_size); + writel(upper_32_bits(size), ndev->ctrl_reg + ntb_epf_upper_size); + ntb_epf_send_command(ndev, cmd_configure_mw, idx); + + return 0; +} + +static int ntb_epf_mw_clear_trans(struct ntb_dev *ntb, int pidx, int idx) +{ + struct ntb_epf_dev *ndev = ntb_ndev(ntb); + struct device *dev = ndev->dev; + int ret = 0; + + ntb_epf_send_command(ndev, cmd_teardown_mw, idx); + if (ret) + dev_err(dev, "failed to teardown memory window "); + + return ret; +} + +static int ntb_epf_peer_mw_get_addr(struct ntb_dev *ntb, int idx, + phys_addr_t *base, resource_size_t *size) +{ + struct ntb_epf_dev *ndev = ntb_ndev(ntb); + u32 offset = 0; + int bar; + + if (idx == 0) + offset = readl(ndev->ctrl_reg + ntb_epf_mw1_offset); + + bar = idx + ntb_epf_mw_offset; + + if (base) + *base = pci_resource_start(ndev->ntb.pdev, bar) + offset; + + if (size) + *size = pci_resource_len(ndev->ntb.pdev, bar) - offset; + + return 0; +} + +static int ntb_epf_peer_db_set(struct ntb_dev *ntb, u64 db_bits) +{ + struct ntb_epf_dev *ndev = ntb_ndev(ntb); + u32 interrupt_num = ffs(db_bits) + 1; + struct device *dev = ndev->dev; + u32 db_entry_size; + u32 db_offset; + u32 db_data; + + if (interrupt_num > ndev->db_count) { + dev_err(dev, "db interrupt %d greater than max supported %d ", + interrupt_num, ndev->db_count); + return -einval; + } + + db_entry_size = readl(ndev->ctrl_reg + ntb_epf_db_entry_size); + + db_data = readl(ndev->ctrl_reg + ntb_epf_db_data(interrupt_num)); + db_offset = readl(ndev->ctrl_reg + ntb_epf_db_offset(interrupt_num)); + 
writel(db_data, ndev->db_reg + (db_entry_size * interrupt_num) + + db_offset); + + return 0; +} + +static u64 ntb_epf_db_read(struct ntb_dev *ntb) +{ + struct ntb_epf_dev *ndev = ntb_ndev(ntb); + + return ndev->db_val; +} + +static int ntb_epf_db_clear_mask(struct ntb_dev *ntb, u64 db_bits) +{ + return 0; +} + +static int ntb_epf_db_clear(struct ntb_dev *ntb, u64 db_bits) +{ + struct ntb_epf_dev *ndev = ntb_ndev(ntb); + + ndev->db_val = 0; + + return 0; +} + +static const struct ntb_dev_ops ntb_epf_ops = { + .mw_count = ntb_epf_mw_count, + .spad_count = ntb_epf_spad_count, + .peer_mw_count = ntb_epf_peer_mw_count, + .db_valid_mask = ntb_epf_db_valid_mask, + .db_set_mask = ntb_epf_db_set_mask, + .mw_set_trans = ntb_epf_mw_set_trans, + .mw_clear_trans = ntb_epf_mw_clear_trans, + .peer_mw_get_addr = ntb_epf_peer_mw_get_addr, + .link_enable = ntb_epf_link_enable, + .spad_read = ntb_epf_spad_read, + .spad_write = ntb_epf_spad_write, + .peer_spad_read = ntb_epf_peer_spad_read, + .peer_spad_write = ntb_epf_peer_spad_write, + .peer_db_set = ntb_epf_peer_db_set, + .db_read = ntb_epf_db_read, + .mw_get_align = ntb_epf_mw_get_align, + .link_is_up = ntb_epf_link_is_up, + .db_clear_mask = ntb_epf_db_clear_mask, + .db_clear = ntb_epf_db_clear, + .link_disable = ntb_epf_link_disable, +}; + +static inline void ntb_epf_init_struct(struct ntb_epf_dev *ndev, + struct pci_dev *pdev) +{ + ndev->ntb.pdev = pdev; + ndev->ntb.topo = ntb_topo_none; + ndev->ntb.ops = &ntb_epf_ops; +} + +static int ntb_epf_init_dev(struct ntb_epf_dev *ndev) +{ + struct device *dev = ndev->dev; + int ret; + + /* one link interrupt and rest doorbell interrupt */ + ret = ntb_epf_init_isr(ndev, ntb_epf_min_db_count + 1, + ntb_epf_max_db_count + 1); + if (ret) { + dev_err(dev, "failed to init isr "); + return ret; + } + + ndev->db_valid_mask = bit_ull(ndev->db_count) - 1; + ndev->mw_count = readl(ndev->ctrl_reg + ntb_epf_mw_count); + ndev->spad_count = readl(ndev->ctrl_reg + ntb_epf_spad_count); + + return 0; +} 
+ +static int ntb_epf_init_pci(struct ntb_epf_dev *ndev, + struct pci_dev *pdev) +{ + struct device *dev = ndev->dev; + int ret; + + pci_set_drvdata(pdev, ndev); + + ret = pci_enable_device(pdev); + if (ret) { + dev_err(dev, "cannot enable pci device "); + goto err_pci_enable; + } + + ret = pci_request_regions(pdev, "ntb"); + if (ret) { + dev_err(dev, "cannot obtain pci resources "); + goto err_pci_regions; + } + + pci_set_master(pdev); + + ret = dma_set_mask_and_coherent(dev, dma_bit_mask(64)); + if (ret) { + ret = dma_set_mask_and_coherent(dev, dma_bit_mask(32)); + if (ret) { + dev_err(dev, "cannot set dma mask "); + goto err_dma_mask; + } + dev_warn(&pdev->dev, "cannot dma highmem "); + } + + ndev->ctrl_reg = pci_iomap(pdev, ndev->ctrl_reg_bar, 0); + if (!ndev->ctrl_reg) { + ret = -eio; + goto err_dma_mask; + } + + ndev->peer_spad_reg = pci_iomap(pdev, ndev->peer_spad_reg_bar, 0); + if (!ndev->peer_spad_reg) { + ret = -eio; + goto err_dma_mask; + } + + ndev->db_reg = pci_iomap(pdev, ndev->db_reg_bar, 0); + if (!ndev->db_reg) { + ret = -eio; + goto err_dma_mask; + } + + return 0; + +err_dma_mask: + pci_clear_master(pdev); + +err_pci_regions: + pci_disable_device(pdev); + +err_pci_enable: + pci_set_drvdata(pdev, null); + + return ret; +} + +static void ntb_epf_deinit_pci(struct ntb_epf_dev *ndev) +{ + struct pci_dev *pdev = ndev->ntb.pdev; + + pci_iounmap(pdev, ndev->ctrl_reg); + pci_iounmap(pdev, ndev->peer_spad_reg); + pci_iounmap(pdev, ndev->db_reg); + + pci_clear_master(pdev); + pci_release_regions(pdev); + pci_disable_device(pdev); + pci_set_drvdata(pdev, null); +} + +static void ntb_epf_cleanup_isr(struct ntb_epf_dev *ndev) +{ + struct pci_dev *pdev = ndev->ntb.pdev; + int i; + + ntb_epf_send_command(ndev, cmd_teardown_doorbell, ndev->db_count + 1); + + for (i = 0; i < ndev->db_count + 1; i++) + free_irq(pci_irq_vector(pdev, i), ndev); + pci_free_irq_vectors(pdev); +} + +static int ntb_epf_pci_probe(struct pci_dev *pdev, + const struct pci_device_id *id) +{ 
+ enum pci_barno peer_spad_reg_bar = bar_1; + enum pci_barno ctrl_reg_bar = bar_0; + enum pci_barno db_reg_bar = bar_2; + struct device *dev = &pdev->dev; + struct ntb_epf_data *data; + struct ntb_epf_dev *ndev; + int ret; + + if (pci_is_bridge(pdev)) + return -enodev; + + ndev = devm_kzalloc(dev, sizeof(*ndev), gfp_kernel); + if (!ndev) + return -enomem; + + data = (struct ntb_epf_data *)id->driver_data; + if (data) { + if (data->peer_spad_reg_bar) + peer_spad_reg_bar = data->peer_spad_reg_bar; + if (data->ctrl_reg_bar) + ctrl_reg_bar = data->ctrl_reg_bar; + if (data->db_reg_bar) + db_reg_bar = data->db_reg_bar; + } + + ndev->peer_spad_reg_bar = peer_spad_reg_bar; + ndev->ctrl_reg_bar = ctrl_reg_bar; + ndev->db_reg_bar = db_reg_bar; + ndev->dev = dev; + + ntb_epf_init_struct(ndev, pdev); + mutex_init(&ndev->cmd_lock); + + ret = ntb_epf_init_pci(ndev, pdev); + if (ret) { + dev_err(dev, "failed to init pci "); + return ret; + } + + ret = ntb_epf_init_dev(ndev); + if (ret) { + dev_err(dev, "failed to init device "); + goto err_init_dev; + } + + ret = ntb_register_device(&ndev->ntb); + if (ret) { + dev_err(dev, "failed to register ntb device "); + goto err_register_dev; + } + + return 0; + +err_register_dev: + ntb_epf_cleanup_isr(ndev); + +err_init_dev: + ntb_epf_deinit_pci(ndev); + + return ret; +} + +static void ntb_epf_pci_remove(struct pci_dev *pdev) +{ + struct ntb_epf_dev *ndev = pci_get_drvdata(pdev); + + ntb_unregister_device(&ndev->ntb); + ntb_epf_cleanup_isr(ndev); + ntb_epf_deinit_pci(ndev); +} + +static const struct ntb_epf_data j721e_data = { + .ctrl_reg_bar = bar_0, + .peer_spad_reg_bar = bar_1, + .db_reg_bar = bar_2, +}; + +static const struct pci_device_id ntb_epf_pci_tbl[] = { + { + pci_device(pci_vendor_id_ti, pci_device_id_ti_j721e), + .class = pci_class_memory_ram << 8, .class_mask = 0xffff00, + .driver_data = (kernel_ulong_t)&j721e_data, + }, + { }, +}; + +static struct pci_driver ntb_epf_pci_driver = { + .name = kbuild_modname, + .id_table = 
ntb_epf_pci_tbl, + .probe = ntb_epf_pci_probe, + .remove = ntb_epf_pci_remove, +}; +module_pci_driver(ntb_epf_pci_driver); + +module_description("pci endpoint ntb host driver"); +module_author("kishon vijay abraham i <kishon@ti.com>"); +module_license("gpl v2");
|
Non-Transparent Bridge (NTB)
|
812ce2f8d14ea791edd88c36ebcc9017bf4c88cb
|
kishon vijay abraham i, dave jiang <dave.jiang@intel.com>
|
drivers
|
ntb
|
epf, hw
|
thunderbolt: add support for de-authorizing devices
|
in some cases it is useful to be able to de-authorize devices. for example, if the user logs out, userspace can have a policy that disconnects pcie devices until the user logs in again. this is only possible with the software-based connection manager, as it directly controls the tunnels.
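the patch's `disapprove_switch()` walks the device tree children-first (via `device_for_each_child_reverse()`) so that a switch's descendants lose their pcie tunnels before the switch itself does. a minimal toy model of that ordering, outside the kernel — the struct, helper names, and fixed two-child fan-out are illustrative, not the driver's real data structures:

```c
#include <stddef.h>

/* toy stand-in for struct tb_switch: just an authorized flag and children */
struct sw {
	int authorized;
	struct sw *child[2];
};

static int deauth_order[8]; /* records the order switches were de-authorized */
static int deauth_count;

static void disapprove(struct sw *base, struct sw *s)
{
	if (!s || !s->authorized)
		return;
	/* children first, in reverse, mirroring device_for_each_child_reverse():
	 * no authorized switch may be left behind an already torn-down tunnel */
	for (int i = 1; i >= 0; i--)
		disapprove(base, s->child[i]);
	s->authorized = 0;
	deauth_order[deauth_count++] = (int)(s - base);
}
```

running it on a root with two children de-authorizes the second child, then the first, then the root — the same bottom-up order the commit enforces before tearing down the parent's pcie tunnel.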
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add support for de-authorizing devices
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['sysfs-bus-thunderbolt', 'h', 'c', 'rst']
| 6
| 118
| 7
|
--- diff --git a/documentation/abi/testing/sysfs-bus-thunderbolt b/documentation/abi/testing/sysfs-bus-thunderbolt --- a/documentation/abi/testing/sysfs-bus-thunderbolt +++ b/documentation/abi/testing/sysfs-bus-thunderbolt +what: /sys/bus/thunderbolt/devices/.../domainx/deauthorization +date: may 2021 +kernelversion: 5.12 +contact: mika westerberg <mika.westerberg@linux.intel.com> +description: this attribute tells whether the system supports + de-authorization of devices. value of 1 means user can + de-authorize pcie tunnel by writing 0 to authorized + attribute under each device. + - authorized, no devices such as pcie and display port are - available to the system. + authorized, no pcie devices are available to the system. - == =========================================== + == =================================================== + 0 the device will be de-authorized (only supported if + deauthorization attribute under domain contains 1) - == =========================================== + == =================================================== + 0 the device will be de-authorized (only supported if + deauthorization attribute under domain contains 1) diff --git a/documentation/admin-guide/thunderbolt.rst b/documentation/admin-guide/thunderbolt.rst --- a/documentation/admin-guide/thunderbolt.rst +++ b/documentation/admin-guide/thunderbolt.rst +de-authorizing devices +---------------------- +it is possible to de-authorize devices by writing ''0'' to their +''authorized'' attribute. this requires support from the connection +manager implementation and can be checked by reading domain +''deauthorization'' attribute. if it reads ''1'' then the feature is +supported. + +when a device is de-authorized the pcie tunnel from the parent device +pcie downstream (or root) port to the device pcie upstream port is torn +down. this is essentially the same thing as pcie hot-remove and the pcie +toplogy in question will not be accessible anymore until the device is +authorized again. 
if there is storage such as nvme or similar involved, +there is a risk for data loss if the filesystem on that storage is not +properly shut down. you have been warned! + ------------------------------ diff --git a/drivers/thunderbolt/domain.c b/drivers/thunderbolt/domain.c --- a/drivers/thunderbolt/domain.c +++ b/drivers/thunderbolt/domain.c +static ssize_t deauthorization_show(struct device *dev, + struct device_attribute *attr, + char *buf) +{ + const struct tb *tb = container_of(dev, struct tb, dev); + + return sprintf(buf, "%d ", !!tb->cm_ops->disapprove_switch); +} +static device_attr_ro(deauthorization); + + &dev_attr_deauthorization.attr, +/** + * tb_domain_disapprove_switch() - disapprove switch + * @tb: domain the switch belongs to + * @sw: switch to disapprove + * + * this will disconnect pcie tunnel from parent to this @sw. + * + * return: %0 on success and negative errno in case of failure. + */ +int tb_domain_disapprove_switch(struct tb *tb, struct tb_switch *sw) +{ + if (!tb->cm_ops->disapprove_switch) + return -eperm; + + return tb->cm_ops->disapprove_switch(tb, sw); +} + - * case of success the connection manager will create tunnels for all - * supported protocols. + * case of success the connection manager will create pcie tunnel from + * parent to @sw. 
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c --- a/drivers/thunderbolt/switch.c +++ b/drivers/thunderbolt/switch.c +static int disapprove_switch(struct device *dev, void *not_used) +{ + struct tb_switch *sw; + + sw = tb_to_switch(dev); + if (sw && sw->authorized) { + int ret; + + /* first children */ + ret = device_for_each_child_reverse(&sw->dev, null, disapprove_switch); + if (ret) + return ret; + + ret = tb_domain_disapprove_switch(sw->tb, sw); + if (ret) + return ret; + + sw->authorized = 0; + kobject_uevent(&sw->dev.kobj, kobj_change); + } + + return 0; +} + - if (sw->authorized) + if (!!sw->authorized == !!val) + /* disapprove switch */ + case 0: + if (tb_route(sw)) { + ret = disapprove_switch(&sw->dev, null); + goto unlock; + } + break; + diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c +static int tb_disconnect_pci(struct tb *tb, struct tb_switch *sw) +{ + struct tb_tunnel *tunnel; + struct tb_port *up; + + up = tb_switch_find_port(sw, tb_type_pcie_up); + if (warn_on(!up)) + return -enodev; + + tunnel = tb_find_tunnel(tb, tb_tunnel_pci, null, up); + if (warn_on(!tunnel)) + return -enodev; + + tb_tunnel_deactivate(tunnel); + list_del(&tunnel->list); + tb_tunnel_free(tunnel); + return 0; +} + + .disapprove_switch = tb_disconnect_pci, diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h + * @disapprove_switch: disapprove switch (disconnect pcie tunnel) + int (*disapprove_switch)(struct tb *tb, struct tb_switch *sw); +int tb_domain_disapprove_switch(struct tb *tb, struct tb_switch *sw);
|
Thunderbolt
|
3da88be249973f7b74e7b24ed559e6abc2fc5af4
|
mika westerberg
|
drivers
|
thunderbolt
|
testing
|
thunderbolt: add support for native usb4 _osc
|
acpi 6.4 introduced a new _osc capability used to negotiate whether the os is supposed to use the software (native) or firmware-based connection manager. if native support is granted, there is a set of bits that enable/disable the different tunnel types the software connection manager is allowed to create.
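the diff's `tb_acpi_may_tunnel_*()` helpers all follow one pattern: when _osc granted native control, test the corresponding capability bit in the granted control word; otherwise (firmware cm, or pre-usb4 hardware) allow everything, since the platform polices tunnels itself. a standalone sketch of that predicate — the `osc_usb_*` bit values here are placeholders, not the real definitions from `include/linux/acpi.h`:

```c
#include <stdbool.h>
#include <stdint.h>

/* illustrative bit positions only; the kernel's OSC_USB_* values may differ */
#define OSC_USB_USB3_TUNNELING	(1u << 0)
#define OSC_USB_DP_TUNNELING	(1u << 1)
#define OSC_USB_PCIE_TUNNELING	(1u << 2)
#define OSC_USB_XDOMAIN		(1u << 3)

/* is the given tunnel type allowed, given the _OSC negotiation result? */
static bool may_tunnel(bool native, uint32_t granted_control, uint32_t cap_bit)
{
	if (native)
		return (granted_control & cap_bit) != 0;
	/* firmware CM or pre-USB4: the platform decides, so don't restrict */
	return true;
}
```

this is also why the diff lowers the default security level to `tb_security_nopcie` when the pcie bit was withheld: the software cm must never create a tunnel type the platform did not grant.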
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add support for native usb4 _osc
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['h', 'c']
| 7
| 134
| 12
|
--- diff --git a/drivers/thunderbolt/acpi.c b/drivers/thunderbolt/acpi.c --- a/drivers/thunderbolt/acpi.c +++ b/drivers/thunderbolt/acpi.c + +/** + * tb_acpi_is_native() - did the platform grant native tbt/usb4 control + * + * returns %true if the platform granted os native control over + * tbt/usb4. in this case software based connection manager can be used, + * otherwise there is firmware based connection manager running. + */ +bool tb_acpi_is_native(void) +{ + return osc_sb_native_usb4_support_confirmed && + osc_sb_native_usb4_control; +} + +/** + * tb_acpi_may_tunnel_usb3() - is usb3 tunneling allowed by the platform + * + * when software based connection manager is used, this function + * returns %true if platform allows native usb3 tunneling. + */ +bool tb_acpi_may_tunnel_usb3(void) +{ + if (tb_acpi_is_native()) + return osc_sb_native_usb4_control & osc_usb_usb3_tunneling; + return true; +} + +/** + * tb_acpi_may_tunnel_dp() - is displayport tunneling allowed by the platform + * + * when software based connection manager is used, this function + * returns %true if platform allows native dp tunneling. + */ +bool tb_acpi_may_tunnel_dp(void) +{ + if (tb_acpi_is_native()) + return osc_sb_native_usb4_control & osc_usb_dp_tunneling; + return true; +} + +/** + * tb_acpi_may_tunnel_pcie() - is pcie tunneling allowed by the platform + * + * when software based connection manager is used, this function + * returns %true if platform allows native pcie tunneling. + */ +bool tb_acpi_may_tunnel_pcie(void) +{ + if (tb_acpi_is_native()) + return osc_sb_native_usb4_control & osc_usb_pcie_tunneling; + return true; +} + +/** + * tb_acpi_is_xdomain_allowed() - are xdomain connections allowed + * + * when software based connection manager is used, this function + * returns %true if platform allows xdomain connections. 
+ */ +bool tb_acpi_is_xdomain_allowed(void) +{ + if (tb_acpi_is_native()) + return osc_sb_native_usb4_control & osc_usb_xdomain; + return true; +} diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c --- a/drivers/thunderbolt/nhi.c +++ b/drivers/thunderbolt/nhi.c +static struct tb *nhi_select_cm(struct tb_nhi *nhi) +{ + struct tb *tb; + + /* + * usb4 case is simple. if we got control of any of the + * capabilities, we use software cm. + */ + if (tb_acpi_is_native()) + return tb_probe(nhi); + + /* + * either firmware based cm is running (we did not get control + * from the firmware) or this is pre-usb4 pc so try first + * firmware cm and then fallback to software cm. + */ + tb = icm_probe(nhi); + if (!tb) + tb = tb_probe(nhi); + + return tb; +} + - tb = icm_probe(nhi); - if (!tb) - tb = tb_probe(nhi); + tb = nhi_select_cm(nhi); diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c + if (!tb_acpi_may_tunnel_usb3()) { + tb_dbg(tb, "usb3 tunneling disabled, not creating tunnel "); + return 0; + } + + if (!tb_acpi_may_tunnel_usb3()) + return 0; + + if (!tb_acpi_may_tunnel_dp()) { + tb_dbg(tb, "dp tunneling disabled, not creating tunnel "); + return; + } + - tb->security_level = tb_security_user; + if (tb_acpi_may_tunnel_pcie()) + tb->security_level = tb_security_user; + else + tb->security_level = tb_security_nopcie; + diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h + +bool tb_acpi_is_native(void); +bool tb_acpi_may_tunnel_usb3(void); +bool tb_acpi_may_tunnel_dp(void); +bool tb_acpi_may_tunnel_pcie(void); +bool tb_acpi_is_xdomain_allowed(void); + +static inline bool tb_acpi_is_native(void) { return true; } +static inline bool tb_acpi_may_tunnel_usb3(void) { return true; } +static inline bool tb_acpi_may_tunnel_dp(void) { return true; } +static inline bool tb_acpi_may_tunnel_pcie(void) { return true; } 
+static inline bool tb_acpi_is_xdomain_allowed(void) { return true; } diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c --- a/drivers/thunderbolt/tunnel.c +++ b/drivers/thunderbolt/tunnel.c + int pcie_enabled = tb_acpi_may_tunnel_pcie(); + - * pcie tunneling affects the usb3 bandwidth so take that it - * into account here. + * pcie tunneling, if enabled, affects the usb3 bandwidth so + * take that it into account here. - *consumed_up = tunnel->allocated_up * (3 + 1) / 3; - *consumed_down = tunnel->allocated_down * (3 + 1) / 3; + *consumed_up = tunnel->allocated_up * (3 + pcie_enabled) / 3; + *consumed_down = tunnel->allocated_down * (3 + pcie_enabled) / 3; diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c --- a/drivers/thunderbolt/usb4.c +++ b/drivers/thunderbolt/usb4.c - if (sw->link_usb4 && tb_switch_find_port(parent, tb_type_usb3_down)) { + if (tb_acpi_may_tunnel_usb3() && sw->link_usb4 && + tb_switch_find_port(parent, tb_type_usb3_down)) { - /* only enable pcie tunneling if the parent router supports it */ - if (tb_switch_find_port(parent, tb_type_pcie_down)) { + /* + * only enable pcie tunneling if the parent router supports it + * and it is not disabled. + */ + if (tb_acpi_may_tunnel_pcie() && + tb_switch_find_port(parent, tb_type_pcie_down)) { diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c --- a/drivers/thunderbolt/xdomain.c +++ b/drivers/thunderbolt/xdomain.c - return tb_xdomain_enabled; + return tb_xdomain_enabled && tb_acpi_is_xdomain_allowed();
|
Thunderbolt
|
c6da62a219d028de10f2e22e93a34c7ee2b88d03
|
mika westerberg
|
drivers
|
thunderbolt
| |
clk: add risc-v canaan kendryte k210 clock driver
|
add a clock provider driver for the canaan kendryte k210 risc-v soc. this new driver, with the compatible string "canaan,k210-clk", implements support for the full clock structure of the k210 soc. since it is required for correct operation of the soc, the driver is selected by default when the soc_canaan option is enabled.
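the driver's `k210_pll_get_rate()` derives the pll output from three register fields (clkr, clkf, clkod), each encoding its divider or multiplier minus one: rate = parent * (f+1) / ((r+1) * (od+1)). a self-contained sketch of that arithmetic — the 26 mhz in0 oscillator frequency is an assumption about the board, not stated in the diff, but it reproduces the 780/390/299 mhz comments in `k210_plls_cfg[]`:

```c
#include <stdint.h>

/* pll rate computation as in k210_pll_get_rate(): the CLKR [3:0],
 * CLKF [9:4] and CLKOD [13:10] register fields are "value minus one" */
static uint64_t k210_pll_rate(uint64_t parent_rate, uint32_t r_field,
			      uint32_t f_field, uint32_t od_field)
{
	uint64_t r  = r_field + 1;   /* reference divider */
	uint64_t f  = f_field + 1;   /* feedback multiplier */
	uint64_t od = od_field + 1;  /* output divider */

	return parent_rate * f / (r * od);
}
```

plugging in the table's { r=0, f=59, od=1 } for pll0 with a 26 mhz parent gives 26 mhz * 60 / (1 * 2) = 780 mhz, matching the "/* 780 mhz */" comment; pll1's od=3 halves that to 390 mhz, which is why the driver notes the third sram bank then runs at the same rate as the first two (pll0 / 2 via aclk).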
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add risc-v canaan kendryte k210 clock driver
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['h', 'kconfig', 'c', 'makefile']
| 7
| 1,064
| 178
|
--- diff --git a/drivers/clk/kconfig b/drivers/clk/kconfig --- a/drivers/clk/kconfig +++ b/drivers/clk/kconfig +config common_clk_k210 + bool "clock driver for the canaan kendryte k210 soc" + depends on of && riscv && soc_canaan + default soc_canaan + help + support for the canaan kendryte k210 risc-v soc clocks. + diff --git a/drivers/clk/makefile b/drivers/clk/makefile --- a/drivers/clk/makefile +++ b/drivers/clk/makefile +obj-$(config_common_clk_k210) += clk-k210.o diff --git a/drivers/clk/clk-k210.c b/drivers/clk/clk-k210.c --- /dev/null +++ b/drivers/clk/clk-k210.c +// spdx-license-identifier: gpl-2.0-or-later +/* + * copyright (c) 2019-20 sean anderson <seanga2@gmail.com> + * copyright (c) 2019 western digital corporation or its affiliates. + */ +#define pr_fmt(fmt) "k210-clk: " fmt + +#include <linux/io.h> +#include <linux/slab.h> +#include <linux/spinlock.h> +#include <linux/platform_device.h> +#include <linux/of.h> +#include <linux/of_clk.h> +#include <linux/of_platform.h> +#include <linux/of_address.h> +#include <linux/clk-provider.h> +#include <linux/bitfield.h> +#include <linux/delay.h> +#include <soc/canaan/k210-sysctl.h> + +#include <dt-bindings/clock/k210-clk.h> + +struct k210_sysclk; + +struct k210_clk { + int id; + struct k210_sysclk *ksc; + struct clk_hw hw; +}; + +struct k210_clk_cfg { + const char *name; + u8 gate_reg; + u8 gate_bit; + u8 div_reg; + u8 div_shift; + u8 div_width; + u8 div_type; + u8 mux_reg; + u8 mux_bit; +}; + +enum k210_clk_div_type { + k210_div_none, + k210_div_one_based, + k210_div_double_one_based, + k210_div_power_of_two, +}; + +#define k210_gate(_reg, _bit) \ + .gate_reg = (_reg), \ + .gate_bit = (_bit) + +#define k210_div(_reg, _shift, _width, _type) \ + .div_reg = (_reg), \ + .div_shift = (_shift), \ + .div_width = (_width), \ + .div_type = (_type) + +#define k210_mux(_reg, _bit) \ + .mux_reg = (_reg), \ + .mux_bit = (_bit) + +static struct k210_clk_cfg k210_clk_cfgs[k210_num_clks] = { + /* gated clocks, no mux, no 
divider */ + [k210_clk_cpu] = { + .name = "cpu", + k210_gate(k210_sysctl_en_cent, 0) + }, + [k210_clk_dma] = { + .name = "dma", + k210_gate(k210_sysctl_en_peri, 1) + }, + [k210_clk_fft] = { + .name = "fft", + k210_gate(k210_sysctl_en_peri, 4) + }, + [k210_clk_gpio] = { + .name = "gpio", + k210_gate(k210_sysctl_en_peri, 5) + }, + [k210_clk_uart1] = { + .name = "uart1", + k210_gate(k210_sysctl_en_peri, 16) + }, + [k210_clk_uart2] = { + .name = "uart2", + k210_gate(k210_sysctl_en_peri, 17) + }, + [k210_clk_uart3] = { + .name = "uart3", + k210_gate(k210_sysctl_en_peri, 18) + }, + [k210_clk_fpioa] = { + .name = "fpioa", + k210_gate(k210_sysctl_en_peri, 20) + }, + [k210_clk_sha] = { + .name = "sha", + k210_gate(k210_sysctl_en_peri, 26) + }, + [k210_clk_aes] = { + .name = "aes", + k210_gate(k210_sysctl_en_peri, 19) + }, + [k210_clk_otp] = { + .name = "otp", + k210_gate(k210_sysctl_en_peri, 27) + }, + [k210_clk_rtc] = { + .name = "rtc", + k210_gate(k210_sysctl_en_peri, 29) + }, + + /* gated divider clocks */ + [k210_clk_sram0] = { + .name = "sram0", + k210_gate(k210_sysctl_en_cent, 1), + k210_div(k210_sysctl_thr0, 0, 4, k210_div_one_based) + }, + [k210_clk_sram1] = { + .name = "sram1", + k210_gate(k210_sysctl_en_cent, 2), + k210_div(k210_sysctl_thr0, 4, 4, k210_div_one_based) + }, + [k210_clk_rom] = { + .name = "rom", + k210_gate(k210_sysctl_en_peri, 0), + k210_div(k210_sysctl_thr0, 16, 4, k210_div_one_based) + }, + [k210_clk_dvp] = { + .name = "dvp", + k210_gate(k210_sysctl_en_peri, 3), + k210_div(k210_sysctl_thr0, 12, 4, k210_div_one_based) + }, + [k210_clk_apb0] = { + .name = "apb0", + k210_gate(k210_sysctl_en_cent, 3), + k210_div(k210_sysctl_sel0, 3, 3, k210_div_one_based) + }, + [k210_clk_apb1] = { + .name = "apb1", + k210_gate(k210_sysctl_en_cent, 4), + k210_div(k210_sysctl_sel0, 6, 3, k210_div_one_based) + }, + [k210_clk_apb2] = { + .name = "apb2", + k210_gate(k210_sysctl_en_cent, 5), + k210_div(k210_sysctl_sel0, 9, 3, k210_div_one_based) + }, + [k210_clk_ai] = { + 
.name = "ai", + k210_gate(k210_sysctl_en_peri, 2), + k210_div(k210_sysctl_thr0, 8, 4, k210_div_one_based) + }, + [k210_clk_spi0] = { + .name = "spi0", + k210_gate(k210_sysctl_en_peri, 6), + k210_div(k210_sysctl_thr1, 0, 8, k210_div_double_one_based) + }, + [k210_clk_spi1] = { + .name = "spi1", + k210_gate(k210_sysctl_en_peri, 7), + k210_div(k210_sysctl_thr1, 8, 8, k210_div_double_one_based) + }, + [k210_clk_spi2] = { + .name = "spi2", + k210_gate(k210_sysctl_en_peri, 8), + k210_div(k210_sysctl_thr1, 16, 8, k210_div_double_one_based) + }, + [k210_clk_i2c0] = { + .name = "i2c0", + k210_gate(k210_sysctl_en_peri, 13), + k210_div(k210_sysctl_thr5, 8, 8, k210_div_double_one_based) + }, + [k210_clk_i2c1] = { + .name = "i2c1", + k210_gate(k210_sysctl_en_peri, 14), + k210_div(k210_sysctl_thr5, 16, 8, k210_div_double_one_based) + }, + [k210_clk_i2c2] = { + .name = "i2c2", + k210_gate(k210_sysctl_en_peri, 15), + k210_div(k210_sysctl_thr5, 24, 8, k210_div_double_one_based) + }, + [k210_clk_wdt0] = { + .name = "wdt0", + k210_gate(k210_sysctl_en_peri, 24), + k210_div(k210_sysctl_thr6, 0, 8, k210_div_double_one_based) + }, + [k210_clk_wdt1] = { + .name = "wdt1", + k210_gate(k210_sysctl_en_peri, 25), + k210_div(k210_sysctl_thr6, 8, 8, k210_div_double_one_based) + }, + [k210_clk_i2s0] = { + .name = "i2s0", + k210_gate(k210_sysctl_en_peri, 10), + k210_div(k210_sysctl_thr3, 0, 16, k210_div_double_one_based) + }, + [k210_clk_i2s1] = { + .name = "i2s1", + k210_gate(k210_sysctl_en_peri, 11), + k210_div(k210_sysctl_thr3, 16, 16, k210_div_double_one_based) + }, + [k210_clk_i2s2] = { + .name = "i2s2", + k210_gate(k210_sysctl_en_peri, 12), + k210_div(k210_sysctl_thr4, 0, 16, k210_div_double_one_based) + }, + + /* divider clocks, no gate, no mux */ + [k210_clk_i2s0_m] = { + .name = "i2s0_m", + k210_div(k210_sysctl_thr4, 16, 8, k210_div_double_one_based) + }, + [k210_clk_i2s1_m] = { + .name = "i2s1_m", + k210_div(k210_sysctl_thr4, 24, 8, k210_div_double_one_based) + }, + [k210_clk_i2s2_m] = { 
+ .name = "i2s2_m", + k210_div(k210_sysctl_thr4, 0, 8, k210_div_double_one_based) + }, + + /* muxed gated divider clocks */ + [k210_clk_spi3] = { + .name = "spi3", + k210_gate(k210_sysctl_en_peri, 9), + k210_div(k210_sysctl_thr1, 24, 8, k210_div_double_one_based), + k210_mux(k210_sysctl_sel0, 12) + }, + [k210_clk_timer0] = { + .name = "timer0", + k210_gate(k210_sysctl_en_peri, 21), + k210_div(k210_sysctl_thr2, 0, 8, k210_div_double_one_based), + k210_mux(k210_sysctl_sel0, 13) + }, + [k210_clk_timer1] = { + .name = "timer1", + k210_gate(k210_sysctl_en_peri, 22), + k210_div(k210_sysctl_thr2, 8, 8, k210_div_double_one_based), + k210_mux(k210_sysctl_sel0, 14) + }, + [k210_clk_timer2] = { + .name = "timer2", + k210_gate(k210_sysctl_en_peri, 23), + k210_div(k210_sysctl_thr2, 16, 8, k210_div_double_one_based), + k210_mux(k210_sysctl_sel0, 15) + }, +}; + +/* + * pll control register bits. + */ +#define k210_pll_clkr genmask(3, 0) +#define k210_pll_clkf genmask(9, 4) +#define k210_pll_clkod genmask(13, 10) +#define k210_pll_bwadj genmask(19, 14) +#define k210_pll_reset (1 << 20) +#define k210_pll_pwrd (1 << 21) +#define k210_pll_intfb (1 << 22) +#define k210_pll_bypass (1 << 23) +#define k210_pll_test (1 << 24) +#define k210_pll_en (1 << 25) +#define k210_pll_sel genmask(27, 26) /* pll2 only */ + +/* + * pll lock register bits. + */ +#define k210_pll_lock 0 +#define k210_pll_clear_slip 2 +#define k210_pll_test_out 3 + +/* + * clock selector register bits. + */ +#define k210_aclk_sel bit(0) +#define k210_aclk_div genmask(2, 1) + +/* + * plls. + */ +enum k210_pll_id { + k210_pll0, k210_pll1, k210_pll2, k210_pll_num +}; + +struct k210_pll { + enum k210_pll_id id; + struct k210_sysclk *ksc; + void __iomem *base; + void __iomem *reg; + void __iomem *lock; + u8 lock_shift; + u8 lock_width; + struct clk_hw hw; +}; +#define to_k210_pll(_hw) container_of(_hw, struct k210_pll, hw) + +/* + * plls configuration: by default pll0 runs at 780 mhz and pll1 at 299 mhz. 
+ * the first 2 sram banks depend on aclk/cpu clock which is by default pll0 + * rate divided by 2. set pll1 to 390 mhz so that the third sram bank has the + * same clock as the first 2. + */ +struct k210_pll_cfg { + u32 reg; + u8 lock_shift; + u8 lock_width; + u32 r; + u32 f; + u32 od; + u32 bwadj; +}; + +static struct k210_pll_cfg k210_plls_cfg[] = { + { k210_sysctl_pll0, 0, 2, 0, 59, 1, 59 }, /* 780 mhz */ + { k210_sysctl_pll1, 8, 1, 0, 59, 3, 59 }, /* 390 mhz */ + { k210_sysctl_pll2, 16, 1, 0, 22, 1, 22 }, /* 299 mhz */ +}; + +/** + * struct k210_sysclk - sysclk driver data + * @regs: system controller registers start address + * @clk_lock: clock setting spinlock + * @plls: soc plls descriptors + * @aclk: aclk clock + * @clks: all other clocks + */ +struct k210_sysclk { + void __iomem *regs; + spinlock_t clk_lock; + struct k210_pll plls[k210_pll_num]; + struct clk_hw aclk; + struct k210_clk clks[k210_num_clks]; +}; + +#define to_k210_sysclk(_hw) container_of(_hw, struct k210_sysclk, aclk) + +/* + * set aclk parent selector: 0 for in0, 1 for pll0. 
+ */ +static void k210_aclk_set_selector(void __iomem *regs, u8 sel) +{ + u32 reg = readl(regs + k210_sysctl_sel0); + + if (sel) + reg |= k210_aclk_sel; + else + reg &= k210_aclk_sel; + writel(reg, regs + k210_sysctl_sel0); +} + +static void k210_init_pll(void __iomem *regs, enum k210_pll_id pllid, + struct k210_pll *pll) +{ + pll->id = pllid; + pll->reg = regs + k210_plls_cfg[pllid].reg; + pll->lock = regs + k210_sysctl_pll_lock; + pll->lock_shift = k210_plls_cfg[pllid].lock_shift; + pll->lock_width = k210_plls_cfg[pllid].lock_width; +} + +static void k210_pll_wait_for_lock(struct k210_pll *pll) +{ + u32 reg, mask = genmask(pll->lock_shift + pll->lock_width - 1, + pll->lock_shift); + + while (true) { + reg = readl(pll->lock); + if ((reg & mask) == mask) + break; + + reg |= bit(pll->lock_shift + k210_pll_clear_slip); + writel(reg, pll->lock); + } +} + +static bool k210_pll_hw_is_enabled(struct k210_pll *pll) +{ + u32 reg = readl(pll->reg); + u32 mask = k210_pll_pwrd | k210_pll_en; + + if (reg & k210_pll_reset) + return false; + + return (reg & mask) == mask; +} + +static void k210_pll_enable_hw(void __iomem *regs, struct k210_pll *pll) +{ + struct k210_pll_cfg *pll_cfg = &k210_plls_cfg[pll->id]; + u32 reg; + + if (k210_pll_hw_is_enabled(pll)) + return; + + /* + * for pll0, we need to re-parent aclk to in0 to keep the cpu cores and + * sram running. + */ + if (pll->id == k210_pll0) + k210_aclk_set_selector(regs, 0); + + /* set pll factors */ + reg = readl(pll->reg); + reg &= ~genmask(19, 0); + reg |= field_prep(k210_pll_clkr, pll_cfg->r); + reg |= field_prep(k210_pll_clkf, pll_cfg->f); + reg |= field_prep(k210_pll_clkod, pll_cfg->od); + reg |= field_prep(k210_pll_bwadj, pll_cfg->bwadj); + reg |= k210_pll_pwrd; + writel(reg, pll->reg); + + /* + * reset the pll: ensure reset is low before asserting it. + * the magic nops come from the kendryte reference sdk. 
+ */ + reg &= ~k210_pll_reset; + writel(reg, pll->reg); + reg |= k210_pll_reset; + writel(reg, pll->reg); + nop(); + nop(); + reg &= ~k210_pll_reset; + writel(reg, pll->reg); + + k210_pll_wait_for_lock(pll); + + reg &= ~k210_pll_bypass; + reg |= k210_pll_en; + writel(reg, pll->reg); + + if (pll->id == k210_pll0) + k210_aclk_set_selector(regs, 1); +} + +static int k210_pll_enable(struct clk_hw *hw) +{ + struct k210_pll *pll = to_k210_pll(hw); + struct k210_sysclk *ksc = pll->ksc; + unsigned long flags; + + spin_lock_irqsave(&ksc->clk_lock, flags); + + k210_pll_enable_hw(ksc->regs, pll); + + spin_unlock_irqrestore(&ksc->clk_lock, flags); + + return 0; +} + +static void k210_pll_disable(struct clk_hw *hw) +{ + struct k210_pll *pll = to_k210_pll(hw); + struct k210_sysclk *ksc = pll->ksc; + unsigned long flags; + u32 reg; + + /* + * bypassing before powering off is important so child clocks do not + * stop working. this is especially important for pll0, the indirect + * parent of the cpu clock. 
+ */ + spin_lock_irqsave(&ksc->clk_lock, flags); + reg = readl(pll->reg); + reg |= k210_pll_bypass; + writel(reg, pll->reg); + + reg &= ~k210_pll_pwrd; + reg &= ~k210_pll_en; + writel(reg, pll->reg); + spin_unlock_irqrestore(&ksc->clk_lock, flags); +} + +static int k210_pll_is_enabled(struct clk_hw *hw) +{ + return k210_pll_hw_is_enabled(to_k210_pll(hw)); +} + +static unsigned long k210_pll_get_rate(struct clk_hw *hw, + unsigned long parent_rate) +{ + struct k210_pll *pll = to_k210_pll(hw); + u32 reg = readl(pll->reg); + u32 r, f, od; + + if (reg & k210_pll_bypass) + return parent_rate; + + if (!(reg & k210_pll_pwrd)) + return 0; + + r = field_get(k210_pll_clkr, reg) + 1; + f = field_get(k210_pll_clkf, reg) + 1; + od = field_get(k210_pll_clkod, reg) + 1; + + return (u64)parent_rate * f / (r * od); +} + +static const struct clk_ops k210_pll_ops = { + .enable = k210_pll_enable, + .disable = k210_pll_disable, + .is_enabled = k210_pll_is_enabled, + .recalc_rate = k210_pll_get_rate, +}; + +static int k210_pll2_set_parent(struct clk_hw *hw, u8 index) +{ + struct k210_pll *pll = to_k210_pll(hw); + struct k210_sysclk *ksc = pll->ksc; + unsigned long flags; + u32 reg; + + spin_lock_irqsave(&ksc->clk_lock, flags); + + reg = readl(pll->reg); + reg &= ~k210_pll_sel; + reg |= field_prep(k210_pll_sel, index); + writel(reg, pll->reg); + + spin_unlock_irqrestore(&ksc->clk_lock, flags); + + return 0; +} + +static u8 k210_pll2_get_parent(struct clk_hw *hw) +{ + struct k210_pll *pll = to_k210_pll(hw); + u32 reg = readl(pll->reg); + + return field_get(k210_pll_sel, reg); +} + +static const struct clk_ops k210_pll2_ops = { + .enable = k210_pll_enable, + .disable = k210_pll_disable, + .is_enabled = k210_pll_is_enabled, + .recalc_rate = k210_pll_get_rate, + .set_parent = k210_pll2_set_parent, + .get_parent = k210_pll2_get_parent, +}; + +static int __init k210_register_pll(struct device_node *np, + struct k210_sysclk *ksc, + enum k210_pll_id pllid, const char *name, + int num_parents, 
const struct clk_ops *ops) +{ + struct k210_pll *pll = &ksc->plls[pllid]; + struct clk_init_data init = {}; + const struct clk_parent_data parent_data[] = { + { /* .index = 0 for in0 */ }, + { .hw = &ksc->plls[k210_pll0].hw }, + { .hw = &ksc->plls[k210_pll1].hw }, + }; + + init.name = name; + init.parent_data = parent_data; + init.num_parents = num_parents; + init.ops = ops; + + pll->hw.init = &init; + pll->ksc = ksc; + + return of_clk_hw_register(np, &pll->hw); +} + +static int __init k210_register_plls(struct device_node *np, + struct k210_sysclk *ksc) +{ + int i, ret; + + for (i = 0; i < k210_pll_num; i++) + k210_init_pll(ksc->regs, i, &ksc->plls[i]); + + /* pll0 and pll1 only have in0 as parent */ + ret = k210_register_pll(np, ksc, k210_pll0, "pll0", 1, &k210_pll_ops); + if (ret) { + pr_err("%pofp: register pll0 failed ", np); + return ret; + } + ret = k210_register_pll(np, ksc, k210_pll1, "pll1", 1, &k210_pll_ops); + if (ret) { + pr_err("%pofp: register pll1 failed ", np); + return ret; + } + + /* pll2 has in0, pll0 and pll1 as parents */ + ret = k210_register_pll(np, ksc, k210_pll2, "pll2", 3, &k210_pll2_ops); + if (ret) { + pr_err("%pofp: register pll2 failed ", np); + return ret; + } + + return 0; +} + +static int k210_aclk_set_parent(struct clk_hw *hw, u8 index) +{ + struct k210_sysclk *ksc = to_k210_sysclk(hw); + unsigned long flags; + + spin_lock_irqsave(&ksc->clk_lock, flags); + + k210_aclk_set_selector(ksc->regs, index); + + spin_unlock_irqrestore(&ksc->clk_lock, flags); + + return 0; +} + +static u8 k210_aclk_get_parent(struct clk_hw *hw) +{ + struct k210_sysclk *ksc = to_k210_sysclk(hw); + u32 sel; + + sel = readl(ksc->regs + k210_sysctl_sel0) & k210_aclk_sel; + + return sel ? 
1 : 0; +} + +static unsigned long k210_aclk_get_rate(struct clk_hw *hw, + unsigned long parent_rate) +{ + struct k210_sysclk *ksc = to_k210_sysclk(hw); + u32 reg = readl(ksc->regs + k210_sysctl_sel0); + unsigned int shift; + + if (!(reg & 0x1)) + return parent_rate; + + shift = field_get(k210_aclk_div, reg); + + return parent_rate / (2ul << shift); +} + +static const struct clk_ops k210_aclk_ops = { + .set_parent = k210_aclk_set_parent, + .get_parent = k210_aclk_get_parent, + .recalc_rate = k210_aclk_get_rate, +}; + +/* + * aclk has in0 and pll0 as parents. + */ +static int __init k210_register_aclk(struct device_node *np, + struct k210_sysclk *ksc) +{ + struct clk_init_data init = {}; + const struct clk_parent_data parent_data[] = { + { /* .index = 0 for in0 */ }, + { .hw = &ksc->plls[k210_pll0].hw }, + }; + int ret; + + init.name = "aclk"; + init.parent_data = parent_data; + init.num_parents = 2; + init.ops = &k210_aclk_ops; + ksc->aclk.init = &init; + + ret = of_clk_hw_register(np, &ksc->aclk); + if (ret) { + pr_err("%pofp: register aclk failed ", np); + return ret; + } + + return 0; +} + +#define to_k210_clk(_hw) container_of(_hw, struct k210_clk, hw) + +static int k210_clk_enable(struct clk_hw *hw) +{ + struct k210_clk *kclk = to_k210_clk(hw); + struct k210_sysclk *ksc = kclk->ksc; + struct k210_clk_cfg *cfg = &k210_clk_cfgs[kclk->id]; + unsigned long flags; + u32 reg; + + if (!cfg->gate_reg) + return 0; + + spin_lock_irqsave(&ksc->clk_lock, flags); + reg = readl(ksc->regs + cfg->gate_reg); + reg |= bit(cfg->gate_bit); + writel(reg, ksc->regs + cfg->gate_reg); + spin_unlock_irqrestore(&ksc->clk_lock, flags); + + return 0; +} + +static void k210_clk_disable(struct clk_hw *hw) +{ + struct k210_clk *kclk = to_k210_clk(hw); + struct k210_sysclk *ksc = kclk->ksc; + struct k210_clk_cfg *cfg = &k210_clk_cfgs[kclk->id]; + unsigned long flags; + u32 reg; + + if (!cfg->gate_reg) + return; + + spin_lock_irqsave(&ksc->clk_lock, flags); + reg = readl(ksc->regs + 
cfg->gate_reg); + reg &= ~bit(cfg->gate_bit); + writel(reg, ksc->regs + cfg->gate_reg); + spin_unlock_irqrestore(&ksc->clk_lock, flags); +} + +static int k210_clk_set_parent(struct clk_hw *hw, u8 index) +{ + struct k210_clk *kclk = to_k210_clk(hw); + struct k210_sysclk *ksc = kclk->ksc; + struct k210_clk_cfg *cfg = &k210_clk_cfgs[kclk->id]; + unsigned long flags; + u32 reg; + + spin_lock_irqsave(&ksc->clk_lock, flags); + reg = readl(ksc->regs + cfg->mux_reg); + if (index) + reg |= bit(cfg->mux_bit); + else + reg &= ~bit(cfg->mux_bit); + writel(reg, ksc->regs + cfg->mux_reg); + spin_unlock_irqrestore(&ksc->clk_lock, flags); + + return 0; +} + +static u8 k210_clk_get_parent(struct clk_hw *hw) +{ + struct k210_clk *kclk = to_k210_clk(hw); + struct k210_sysclk *ksc = kclk->ksc; + struct k210_clk_cfg *cfg = &k210_clk_cfgs[kclk->id]; + unsigned long flags; + u32 reg, idx; + + spin_lock_irqsave(&ksc->clk_lock, flags); + reg = readl(ksc->regs + cfg->mux_reg); + idx = (reg & bit(cfg->mux_bit)) ? 1 : 0; + spin_unlock_irqrestore(&ksc->clk_lock, flags); + + return idx; +} + +static unsigned long k210_clk_get_rate(struct clk_hw *hw, + unsigned long parent_rate) +{ + struct k210_clk *kclk = to_k210_clk(hw); + struct k210_sysclk *ksc = kclk->ksc; + struct k210_clk_cfg *cfg = &k210_clk_cfgs[kclk->id]; + u32 reg, div_val; + + if (!cfg->div_reg) + return parent_rate; + + reg = readl(ksc->regs + cfg->div_reg); + div_val = (reg >> cfg->div_shift) & genmask(cfg->div_width - 1, 0); + + switch (cfg->div_type) { + case k210_div_one_based: + return parent_rate / (div_val + 1); + case k210_div_double_one_based: + return parent_rate / ((div_val + 1) * 2); + case k210_div_power_of_two: + return parent_rate / (2ul << div_val); + case k210_div_none: + default: + return 0; + } +} + +static const struct clk_ops k210_clk_mux_ops = { + .enable = k210_clk_enable, + .disable = k210_clk_disable, + .set_parent = k210_clk_set_parent, + .get_parent = k210_clk_get_parent, + .recalc_rate = k210_clk_get_rate, +}; + +static const struct clk_ops
k210_clk_ops = { + .enable = k210_clk_enable, + .disable = k210_clk_disable, + .recalc_rate = k210_clk_get_rate, +}; + +static void __init k210_register_clk(struct device_node *np, + struct k210_sysclk *ksc, int id, + const struct clk_parent_data *parent_data, + int num_parents, unsigned long flags) +{ + struct k210_clk *kclk = &ksc->clks[id]; + struct clk_init_data init = {}; + int ret; + + init.name = k210_clk_cfgs[id].name; + init.flags = flags; + init.parent_data = parent_data; + init.num_parents = num_parents; + if (num_parents > 1) + init.ops = &k210_clk_mux_ops; + else + init.ops = &k210_clk_ops; + + kclk->id = id; + kclk->ksc = ksc; + kclk->hw.init = &init; + + ret = of_clk_hw_register(np, &kclk->hw); + if (ret) { + pr_err("%pofp: register clock %s failed ", + np, k210_clk_cfgs[id].name); + kclk->id = -1; + } +} + +/* + * all muxed clocks have in0 and pll0 as parents. + */ +static inline void __init k210_register_mux_clk(struct device_node *np, + struct k210_sysclk *ksc, int id) +{ + const struct clk_parent_data parent_data[2] = { + { /* .index = 0 for in0 */ }, + { .hw = &ksc->plls[k210_pll0].hw } + }; + + k210_register_clk(np, ksc, id, parent_data, 2, 0); +} + +static inline void __init k210_register_in0_child(struct device_node *np, + struct k210_sysclk *ksc, int id) +{ + const struct clk_parent_data parent_data = { + /* .index = 0 for in0 */ + }; + + k210_register_clk(np, ksc, id, &parent_data, 1, 0); +} + +static inline void __init k210_register_pll_child(struct device_node *np, + struct k210_sysclk *ksc, int id, + enum k210_pll_id pllid, + unsigned long flags) +{ + const struct clk_parent_data parent_data = { + .hw = &ksc->plls[pllid].hw, + }; + + k210_register_clk(np, ksc, id, &parent_data, 1, flags); +} + +static inline void __init k210_register_aclk_child(struct device_node *np, + struct k210_sysclk *ksc, int id, + unsigned long flags) +{ + const struct clk_parent_data parent_data = { + .hw = &ksc->aclk, + }; + + k210_register_clk(np, ksc, id, 
&parent_data, 1, flags); +} + +static inline void __init k210_register_clk_child(struct device_node *np, + struct k210_sysclk *ksc, int id, + int parent_id) +{ + const struct clk_parent_data parent_data = { + .hw = &ksc->clks[parent_id].hw, + }; + + k210_register_clk(np, ksc, id, &parent_data, 1, 0); +} + +static struct clk_hw *k210_clk_hw_onecell_get(struct of_phandle_args *clkspec, + void *data) +{ + struct k210_sysclk *ksc = data; + unsigned int idx = clkspec->args[0]; + + if (idx >= k210_num_clks) + return err_ptr(-einval); + + return &ksc->clks[idx].hw; +} + +static void __init k210_clk_init(struct device_node *np) +{ + struct device_node *sysctl_np; + struct k210_sysclk *ksc; + int i, ret; + + ksc = kzalloc(sizeof(*ksc), gfp_kernel); + if (!ksc) + return; + + spin_lock_init(&ksc->clk_lock); + sysctl_np = of_get_parent(np); + ksc->regs = of_iomap(sysctl_np, 0); + of_node_put(sysctl_np); + if (!ksc->regs) { + pr_err("%pofp: failed to map registers ", np); + return; + } + + ret = k210_register_plls(np, ksc); + if (ret) + return; + + ret = k210_register_aclk(np, ksc); + if (ret) + return; + + /* + * critical clocks: there are no consumers of the sram clocks, + * including the ai clock for the third sram bank. the cpu clock + * is only referenced by the uarths serial device and so would be + * disabled if the serial console is disabled to switch to another + * console. mark all these clocks as critical so that they are never + * disabled by the core clock management. 
+ */ + k210_register_aclk_child(np, ksc, k210_clk_cpu, clk_is_critical); + k210_register_aclk_child(np, ksc, k210_clk_sram0, clk_is_critical); + k210_register_aclk_child(np, ksc, k210_clk_sram1, clk_is_critical); + k210_register_pll_child(np, ksc, k210_clk_ai, k210_pll1, + clk_is_critical); + + /* clocks with aclk as source */ + k210_register_aclk_child(np, ksc, k210_clk_dma, 0); + k210_register_aclk_child(np, ksc, k210_clk_fft, 0); + k210_register_aclk_child(np, ksc, k210_clk_rom, 0); + k210_register_aclk_child(np, ksc, k210_clk_dvp, 0); + k210_register_aclk_child(np, ksc, k210_clk_apb0, 0); + k210_register_aclk_child(np, ksc, k210_clk_apb1, 0); + k210_register_aclk_child(np, ksc, k210_clk_apb2, 0); + + /* clocks with pll0 as source */ + k210_register_pll_child(np, ksc, k210_clk_spi0, k210_pll0, 0); + k210_register_pll_child(np, ksc, k210_clk_spi1, k210_pll0, 0); + k210_register_pll_child(np, ksc, k210_clk_spi2, k210_pll0, 0); + k210_register_pll_child(np, ksc, k210_clk_i2c0, k210_pll0, 0); + k210_register_pll_child(np, ksc, k210_clk_i2c1, k210_pll0, 0); + k210_register_pll_child(np, ksc, k210_clk_i2c2, k210_pll0, 0); + + /* clocks with pll2 as source */ + k210_register_pll_child(np, ksc, k210_clk_i2s0, k210_pll2, 0); + k210_register_pll_child(np, ksc, k210_clk_i2s1, k210_pll2, 0); + k210_register_pll_child(np, ksc, k210_clk_i2s2, k210_pll2, 0); + k210_register_pll_child(np, ksc, k210_clk_i2s0_m, k210_pll2, 0); + k210_register_pll_child(np, ksc, k210_clk_i2s1_m, k210_pll2, 0); + k210_register_pll_child(np, ksc, k210_clk_i2s2_m, k210_pll2, 0); + + /* clocks with in0 as source */ + k210_register_in0_child(np, ksc, k210_clk_wdt0); + k210_register_in0_child(np, ksc, k210_clk_wdt1); + k210_register_in0_child(np, ksc, k210_clk_rtc); + + /* clocks with apb0 as source */ + k210_register_clk_child(np, ksc, k210_clk_gpio, k210_clk_apb0); + k210_register_clk_child(np, ksc, k210_clk_uart1, k210_clk_apb0); + k210_register_clk_child(np, ksc, k210_clk_uart2, k210_clk_apb0); + 
k210_register_clk_child(np, ksc, k210_clk_uart3, k210_clk_apb0); + k210_register_clk_child(np, ksc, k210_clk_fpioa, k210_clk_apb0); + k210_register_clk_child(np, ksc, k210_clk_sha, k210_clk_apb0); + + /* clocks with apb1 as source */ + k210_register_clk_child(np, ksc, k210_clk_aes, k210_clk_apb1); + k210_register_clk_child(np, ksc, k210_clk_otp, k210_clk_apb1); + + /* mux clocks with in0 or pll0 as source */ + k210_register_mux_clk(np, ksc, k210_clk_spi3); + k210_register_mux_clk(np, ksc, k210_clk_timer0); + k210_register_mux_clk(np, ksc, k210_clk_timer1); + k210_register_mux_clk(np, ksc, k210_clk_timer2); + + /* check for registration errors */ + for (i = 0; i < k210_num_clks; i++) { + if (ksc->clks[i].id != i) + return; + } + + ret = of_clk_add_hw_provider(np, k210_clk_hw_onecell_get, ksc); + if (ret) { + pr_err("%pofp: add clock provider failed %d ", np, ret); + return; + } + + pr_info("%pofp: cpu running at %lu mhz ", + np, clk_hw_get_rate(&ksc->clks[k210_clk_cpu].hw) / 1000000); +} + +clk_of_declare(k210_clk, "canaan,k210-clk", k210_clk_init); + +/* + * enable pll1 to be able to use the ai sram. + */ +void __init k210_clk_early_init(void __iomem *regs) +{ + struct k210_pll pll1; + + /* make sure aclk selector is set to pll0 */ + k210_aclk_set_selector(regs, 1); + + /* startup pll1 to enable the aisram bank for general memory use */ + k210_init_pll(regs, k210_pll1, &pll1); + k210_pll_enable_hw(regs, &pll1); +} diff --git a/drivers/soc/canaan/kconfig b/drivers/soc/canaan/kconfig --- a/drivers/soc/canaan/kconfig +++ b/drivers/soc/canaan/kconfig -if soc_canaan - -config k210_sysctl +config soc_k210_sysctl - default y - depends on riscv + depends on riscv && soc_canaan && of + default soc_canaan + select pm + select simple_pm_bus + select syscon + select mfd_syscon - enables controlling the k210 various clocks and to enable - general purpose use of the extra 2mb of sram normally - reserved for the ai engine. 
- -endif + canaan kendryte k210 soc system controller driver. diff --git a/drivers/soc/canaan/makefile b/drivers/soc/canaan/makefile --- a/drivers/soc/canaan/makefile +++ b/drivers/soc/canaan/makefile -obj-$(config_k210_sysctl) += k210-sysctl.o +obj-$(config_soc_k210_sysctl) += k210-sysctl.o diff --git a/drivers/soc/canaan/k210-sysctl.c b/drivers/soc/canaan/k210-sysctl.c --- a/drivers/soc/canaan/k210-sysctl.c +++ b/drivers/soc/canaan/k210-sysctl.c -#include <linux/types.h> -#include <linux/of.h> -#include <linux/clk-provider.h> -#include <linux/clkdev.h> -#include <linux/bitfield.h> +#include <linux/of_platform.h> +#include <linux/clk.h> -#define k210_sysctl_clk0_freq 26000000ul - -/* registers base address */ -#define k210_sysctl_sysctl_base_addr 0x50440000ull - -/* register bits */ -/* k210_sysctl_pll1: clkr: 4bits, clkf1: 6bits, clkod: 4bits, bwadj: 4bits */ -#define pll_reset (1 << 20) -#define pll_pwr (1 << 21) -#define pll_bypass (1 << 23) -#define pll_out_en (1 << 25) -/* k210_sysctl_pll_lock */ -#define pll1_lock1 (1 << 8) -#define pll1_lock2 (1 << 9) -#define pll1_slip_clear (1 << 10) -/* k210_sysctl_sel0 */ -#define clksel_aclk (1 << 0) -/* k210_sysctl_clken_cent */ -#define clken_cpu (1 << 0) -#define clken_sram0 (1 << 1) -#define clken_sram1 (1 << 2) -/* k210_sysctl_en_peri */ -#define clken_rom (1 << 0) -#define clken_timer0 (1 << 21) -#define clken_rtc (1 << 29) - -struct k210_sysctl { - void __iomem *regs; - struct clk_hw hw; -}; - -static void k210_set_bits(u32 val, void __iomem *reg) -{ - writel(readl(reg) | val, reg); -} - -static void k210_clear_bits(u32 val, void __iomem *reg) -{ - writel(readl(reg) & ~val, reg); -} - -static void k210_pll1_enable(void __iomem *regs) -{ - u32 val; - - val = readl(regs + k210_sysctl_pll1); - val &= ~genmask(19, 0); /* clkr1 = 0 */ - val |= field_prep(genmask(9, 4), 0x3b); /* clkf1 = 59 */ - val |= field_prep(genmask(13, 10), 0x3); /* clkod1 = 3 */ - val |= field_prep(genmask(19, 14), 0x3b); /* bwadj1 = 59 */ - 
writel(val, regs + k210_sysctl_pll1); - - k210_clear_bits(pll_bypass, regs + k210_sysctl_pll1); - k210_set_bits(pll_pwr, regs + k210_sysctl_pll1); - - /* - * reset the pll. the magic nops come from the kendryte reference sdk. - */ - k210_clear_bits(pll_reset, regs + k210_sysctl_pll1); - k210_set_bits(pll_reset, regs + k210_sysctl_pll1); - nop(); - nop(); - k210_clear_bits(pll_reset, regs + k210_sysctl_pll1); - - for (;;) { - val = readl(regs + k210_sysctl_pll_lock); - if (val & pll1_lock2) - break; - writel(val | pll1_slip_clear, regs + k210_sysctl_pll_lock); - } - - k210_set_bits(pll_out_en, regs + k210_sysctl_pll1); -} - -static unsigned long k210_sysctl_clk_recalc_rate(struct clk_hw *hw, - unsigned long parent_rate) -{ - struct k210_sysctl *s = container_of(hw, struct k210_sysctl, hw); - u32 clksel0, pll0; - u64 pll0_freq, clkr0, clkf0, clkod0; - - /* - * if the clock selector is not set, use the base frequency. - * otherwise, use pll0 frequency with a frequency divisor. - */ - clksel0 = readl(s->regs + k210_sysctl_sel0); - if (!(clksel0 & clksel_aclk)) - return k210_sysctl_clk0_freq; - - /* - * get pll0 frequency: - * freq = base frequency * clkf0 / (clkr0 * clkod0) - */ - pll0 = readl(s->regs + k210_sysctl_pll0); - clkr0 = 1 + field_get(genmask(3, 0), pll0); - clkf0 = 1 + field_get(genmask(9, 4), pll0); - clkod0 = 1 + field_get(genmask(13, 10), pll0); - pll0_freq = clkf0 * k210_sysctl_clk0_freq / (clkr0 * clkod0); - - /* get the frequency divisor from the clock selector */ - return pll0_freq / (2ull << field_get(0x00000006, clksel0)); -} - -static const struct clk_ops k210_sysctl_clk_ops = { - .recalc_rate = k210_sysctl_clk_recalc_rate, -}; - -static const struct clk_init_data k210_clk_init_data = { - .name = "k210-sysctl-pll1", - .ops = &k210_sysctl_clk_ops, -}; - - struct k210_sysctl *s; - int error; - - pr_info("kendryte k210 soc sysctl "); - - s = devm_kzalloc(&pdev->dev, sizeof(*s), gfp_kernel); - if (!s) - return -enomem; - - s->regs = 
devm_ioremap_resource(&pdev->dev, - platform_get_resource(pdev, ioresource_mem, 0)); - if (is_err(s->regs)) - return ptr_err(s->regs); - - s->hw.init = &k210_clk_init_data; - error = devm_clk_hw_register(&pdev->dev, &s->hw); - if (error) { - dev_err(&pdev->dev, "failed to register clk"); - return error; + struct device *dev = &pdev->dev; + struct clk *pclk; + int ret; + + dev_info(dev, "k210 system controller "); + + /* get power bus clock */ + pclk = devm_clk_get(dev, null); + if (is_err(pclk)) + return dev_err_probe(dev, ptr_err(pclk), + "get bus clock failed "); + + ret = clk_prepare_enable(pclk); + if (ret) { + dev_err(dev, "enable bus clock failed "); + return ret; - error = devm_of_clk_add_hw_provider(&pdev->dev, of_clk_hw_simple_get, - &s->hw); - if (error) { - dev_err(&pdev->dev, "adding clk provider failed "); - return error; - } + /* populate children */ + ret = devm_of_platform_populate(dev); + if (ret) + dev_err(dev, "populate platform failed %d ", ret); - return 0; + return ret; - { .compatible = "kendryte,k210-sysctl", }, - {} + { .compatible = "canaan,k210-sysctl", }, + { /* sentinel */ }, +builtin_platform_driver(k210_sysctl_driver); -static int __init k210_sysctl_init(void) -{ - return platform_driver_register(&k210_sysctl_driver); -} -core_initcall(k210_sysctl_init); +/* + * system controller registers base address and size. 
+ */ +#define k210_sysctl_base_addr 0x50440000ull +#define k210_sysctl_base_size 0x1000 - void __iomem *regs; - - regs = ioremap(k210_sysctl_sysctl_base_addr, 0x1000); - if (!regs) - panic("k210 sysctl ioremap"); - - /* enable pll1 to make the kpu sram useable */ - k210_pll1_enable(regs); - - k210_set_bits(pll_out_en, regs + k210_sysctl_pll0); + void __iomem *sysctl_base; - k210_set_bits(clken_cpu | clken_sram0 | clken_sram1, - regs + k210_sysctl_en_cent); - k210_set_bits(clken_rom | clken_timer0 | clken_rtc, - regs + k210_sysctl_en_peri); + sysctl_base = ioremap(k210_sysctl_base_addr, k210_sysctl_base_size); + if (!sysctl_base) + panic("k210-sysctl: ioremap failed"); - k210_set_bits(clksel_aclk, regs + k210_sysctl_sel0); + k210_clk_early_init(sysctl_base); - iounmap(regs); + iounmap(sysctl_base); -soc_early_init_declare(generic_k210, "kendryte,k210", k210_soc_early_init); +soc_early_init_declare(k210_soc, "canaan,kendryte-k210", k210_soc_early_init); diff --git a/include/soc/canaan/k210-sysctl.h b/include/soc/canaan/k210-sysctl.h --- a/include/soc/canaan/k210-sysctl.h +++ b/include/soc/canaan/k210-sysctl.h +void k210_clk_early_init(void __iomem *regs); +
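the pll rate computation in k210_pll_get_rate() above can be checked in isolation. a minimal user-space sketch, using the clkr/clkf/clkod field positions shown in the diff (bits 3:0, 9:4 and 13:10) and the fact that each zero-based field encodes (value + 1):

```c
#include <assert.h>
#include <stdint.h>

/* field layout as used in the diff: clkr bits 3:0, clkf bits 9:4,
 * clkod bits 13:10; each zero-based field encodes (value + 1) */
static uint64_t k210_pll_rate(uint64_t parent_rate, uint32_t reg)
{
	uint32_t r  = ((reg >> 0)  & 0xf)  + 1;
	uint32_t f  = ((reg >> 4)  & 0x3f) + 1;
	uint32_t od = ((reg >> 10) & 0xf)  + 1;

	/* same formula as k210_pll_get_rate(): f / (r * od) */
	return parent_rate * f / (r * od);
}
```

with the 26 mhz in0 oscillator and the pll1 settings programmed by the old k210_pll1_enable() (clkr = 0, clkf = 0x3b, clkod = 3), this gives 26 mhz * 60 / 4 = 390 mhz.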
|
Clock
|
c6ca7616f7d5c2ce166280107ba74db1d528fcb7
|
damien le moal
|
drivers
|
soc
|
canaan
|
clk: clk-axiclkgen: add zynqmp pfd and vco limits
|
for zynqmp (ultrascale) the pfd and vco limits are different. in order to support these, this change adds a compatible string (i.e. 'adi,zynqmp-axi-clkgen-2.00.a') which will take these limits into account and apply them.
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add zynqmp pfd and vco limits
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['clk-axiclkgen']
|
['c']
| 1
| 11
| 0
|
--- diff --git a/drivers/clk/clk-axi-clkgen.c b/drivers/clk/clk-axi-clkgen.c --- a/drivers/clk/clk-axi-clkgen.c +++ b/drivers/clk/clk-axi-clkgen.c +static const struct axi_clkgen_limits axi_clkgen_zynqmp_default_limits = { + .fpfd_min = 10000, + .fpfd_max = 450000, + .fvco_min = 800000, + .fvco_max = 1600000, +}; + + { + .compatible = "adi,zynqmp-axi-clkgen-2.00.a", + .data = &axi_clkgen_zynqmp_default_limits, + },
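the new zynqmp table only changes the limits; the parameter search itself is unchanged. as a sketch (not the driver's actual search loop), a candidate input divider d and multiplier m are valid when the pfd input (fin / d) and the vco output (fin / d * m) stay inside the limits, with all values in khz to match the table above:

```c
#include <assert.h>
#include <stdbool.h>

/* limits in khz, matching the zynqmp table added in the diff */
struct axi_clkgen_limits {
	unsigned long fpfd_min, fpfd_max;
	unsigned long fvco_min, fvco_max;
};

static const struct axi_clkgen_limits zynqmp_limits = {
	.fpfd_min = 10000,  .fpfd_max = 450000,
	.fvco_min = 800000, .fvco_max = 1600000,
};

/* illustrative validity check for a (d, m) candidate, not the
 * driver's real algorithm */
static bool mmcm_params_valid(const struct axi_clkgen_limits *lim,
			      unsigned long fin, unsigned int d,
			      unsigned int m)
{
	unsigned long fpfd = fin / d;       /* pfd input frequency */
	unsigned long fvco = fin / d * m;   /* vco output frequency */

	return fpfd >= lim->fpfd_min && fpfd <= lim->fpfd_max &&
	       fvco >= lim->fvco_min && fvco <= lim->fvco_max;
}
```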
|
Clock
|
da68c30963c04d7badbda53021418df1f043c985
|
alexandru ardelean moritz fischer mdf kernel org
|
drivers
|
clk
| |
clk: imx8mm: add clkout1/2 support
|
clkout1 and clkout2 allow supplying clocks from the soc to the board; some board designs use them to provide reference clocks.
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add clkout1/2 support
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['imx8mm']
|
['h', 'c']
| 2
| 21
| 1
|
--- diff --git a/drivers/clk/imx/clk-imx8mm.c b/drivers/clk/imx/clk-imx8mm.c --- a/drivers/clk/imx/clk-imx8mm.c +++ b/drivers/clk/imx/clk-imx8mm.c +static const char * const clkout_sels[] = {"audio_pll1_out", "audio_pll2_out", "video_pll1_out", + "dummy", "dummy", "gpu_pll_out", "vpu_pll_out", + "arm_pll_out", "sys_pll1", "sys_pll2", "sys_pll3", + "dummy", "dummy", "osc_24m", "dummy", "osc_32k"}; + + hws[imx8mm_clk_clkout1_sel] = imx_clk_hw_mux("clkout1_sel", base + 0x128, 4, 4, clkout_sels, array_size(clkout_sels)); + hws[imx8mm_clk_clkout1_div] = imx_clk_hw_divider("clkout1_div", "clkout1_sel", base + 0x128, 0, 4); + hws[imx8mm_clk_clkout1] = imx_clk_hw_gate("clkout1", "clkout1_div", base + 0x128, 8); + hws[imx8mm_clk_clkout2_sel] = imx_clk_hw_mux("clkout2_sel", base + 0x128, 20, 4, clkout_sels, array_size(clkout_sels)); + hws[imx8mm_clk_clkout2_div] = imx_clk_hw_divider("clkout2_div", "clkout2_sel", base + 0x128, 16, 4); + hws[imx8mm_clk_clkout2] = imx_clk_hw_gate("clkout2", "clkout2_div", base + 0x128, 24); + diff --git a/include/dt-bindings/clock/imx8mm-clock.h b/include/dt-bindings/clock/imx8mm-clock.h --- a/include/dt-bindings/clock/imx8mm-clock.h +++ b/include/dt-bindings/clock/imx8mm-clock.h -#define imx8mm_clk_end 252 +#define imx8mm_clk_clkout1_sel 252 +#define imx8mm_clk_clkout1_div 253 +#define imx8mm_clk_clkout1 254 +#define imx8mm_clk_clkout2_sel 255 +#define imx8mm_clk_clkout2_div 256 +#define imx8mm_clk_clkout2 257 + + +#define imx8mm_clk_end 258
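each clkout is a small composite: a 4-bit mux field selects one of the clkout_sels parents, a 4-bit divider scales it, and a gate bit enables the output. assuming the generic clk divider semantics (divisor = field value + 1) behind imx_clk_hw_divider, the divider stage reduces to:

```c
#include <assert.h>
#include <stdint.h>

/* sketch of the clkout divider stage; assumes the generic clk
 * divider convention where the divisor is the 4-bit field + 1 */
static uint64_t clkout_div_rate(uint64_t parent_rate,
				unsigned int div_field)
{
	return parent_rate / (div_field + 1);
}
```

so with osc_24m selected (mux index 13 in clkout_sels) and a divider field of 3, the pad sees 24 mhz / 4 = 6 mhz.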
|
Clock
|
c1ae5c6f789acde2ad32226cb5461cc1bc60cdf3
|
lucas stach abel vesa abel vesa nxp com
|
include
|
dt-bindings
|
clock, imx
|
clk: imx8mn: add clkout1/2 support
|
clkout1 and clkout2 allow supplying clocks from the soc to the board; some board designs use them to provide reference clocks.
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add clkout1/2 support
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['imx8mn']
|
['h', 'c']
| 2
| 20
| 1
|
--- diff --git a/drivers/clk/imx/clk-imx8mn.c b/drivers/clk/imx/clk-imx8mn.c --- a/drivers/clk/imx/clk-imx8mn.c +++ b/drivers/clk/imx/clk-imx8mn.c +static const char * const clkout_sels[] = {"audio_pll1_out", "audio_pll2_out", "video_pll1_out", + "dummy", "dummy", "gpu_pll_out", "dummy", + "arm_pll_out", "sys_pll1", "sys_pll2", "sys_pll3", + "dummy", "dummy", "osc_24m", "dummy", "osc_32k"}; + + hws[imx8mn_clk_clkout1_sel] = imx_clk_hw_mux("clkout1_sel", base + 0x128, 4, 4, clkout_sels, array_size(clkout_sels)); + hws[imx8mn_clk_clkout1_div] = imx_clk_hw_divider("clkout1_div", "clkout1_sel", base + 0x128, 0, 4); + hws[imx8mn_clk_clkout1] = imx_clk_hw_gate("clkout1", "clkout1_div", base + 0x128, 8); + hws[imx8mn_clk_clkout2_sel] = imx_clk_hw_mux("clkout2_sel", base + 0x128, 20, 4, clkout_sels, array_size(clkout_sels)); + hws[imx8mn_clk_clkout2_div] = imx_clk_hw_divider("clkout2_div", "clkout2_sel", base + 0x128, 16, 4); + hws[imx8mn_clk_clkout2] = imx_clk_hw_gate("clkout2", "clkout2_div", base + 0x128, 24); + diff --git a/include/dt-bindings/clock/imx8mn-clock.h b/include/dt-bindings/clock/imx8mn-clock.h --- a/include/dt-bindings/clock/imx8mn-clock.h +++ b/include/dt-bindings/clock/imx8mn-clock.h -#define imx8mn_clk_end 215 +#define imx8mn_clk_clkout1_sel 215 +#define imx8mn_clk_clkout1_div 216 +#define imx8mn_clk_clkout1 217 +#define imx8mn_clk_clkout2_sel 218 +#define imx8mn_clk_clkout2_div 219 +#define imx8mn_clk_clkout2 220 + +#define imx8mn_clk_end 221
|
Clock
|
3af4df65504088e9a7d20c0251e1016e521ad4fc
|
lucas stach abel vesa abel vesa nxp com
|
include
|
dt-bindings
|
clock, imx
|
clk: imx8mq: add pll monitor output
|
the pll monitor is mentioned as a debug feature in the reference manual, but some boards use this clock output as a reference clock for board-level components. add support for those clocks in the clock driver, so this clock output can be used properly.
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add pll monitor output
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['imx8mq']
|
['h', 'c']
| 2
| 37
| 1
|
--- diff --git a/drivers/clk/imx/clk-imx8mq.c b/drivers/clk/imx/clk-imx8mq.c --- a/drivers/clk/imx/clk-imx8mq.c +++ b/drivers/clk/imx/clk-imx8mq.c +static const char * const pllout_monitor_sels[] = {"osc_25m", "osc_27m", "dummy", "dummy", "ckil", + "audio_pll1_out_monitor", "audio_pll2_out_monitor", + "video_pll1_out_monitor", "gpu_pll_out_monitor", + "vpu_pll_out_monitor", "arm_pll_out_monitor", + "sys_pll1_out_monitor", "sys_pll2_out_monitor", + "sys_pll3_out_monitor", "dram_pll_out_monitor", + "video_pll2_out_monitor", }; + + hws[imx8mq_clk_mon_audio_pll1_div] = imx_clk_hw_divider("audio_pll1_out_monitor", "audio_pll1_bypass", base + 0x78, 0, 3); + hws[imx8mq_clk_mon_audio_pll2_div] = imx_clk_hw_divider("audio_pll2_out_monitor", "audio_pll2_bypass", base + 0x78, 4, 3); + hws[imx8mq_clk_mon_video_pll1_div] = imx_clk_hw_divider("video_pll1_out_monitor", "video_pll1_bypass", base + 0x78, 8, 3); + hws[imx8mq_clk_mon_gpu_pll_div] = imx_clk_hw_divider("gpu_pll_out_monitor", "gpu_pll_bypass", base + 0x78, 12, 3); + hws[imx8mq_clk_mon_vpu_pll_div] = imx_clk_hw_divider("vpu_pll_out_monitor", "vpu_pll_bypass", base + 0x78, 16, 3); + hws[imx8mq_clk_mon_arm_pll_div] = imx_clk_hw_divider("arm_pll_out_monitor", "arm_pll_bypass", base + 0x78, 20, 3); + hws[imx8mq_clk_mon_sys_pll1_div] = imx_clk_hw_divider("sys_pll1_out_monitor", "sys1_pll_out", base + 0x7c, 0, 3); + hws[imx8mq_clk_mon_sys_pll2_div] = imx_clk_hw_divider("sys_pll2_out_monitor", "sys2_pll_out", base + 0x7c, 4, 3); + hws[imx8mq_clk_mon_sys_pll3_div] = imx_clk_hw_divider("sys_pll3_out_monitor", "sys3_pll_out", base + 0x7c, 8, 3); + hws[imx8mq_clk_mon_dram_pll_div] = imx_clk_hw_divider("dram_pll_out_monitor", "dram_pll_out", base + 0x7c, 12, 3); + hws[imx8mq_clk_mon_video_pll2_div] = imx_clk_hw_divider("video_pll2_out_monitor", "video2_pll_out", base + 0x7c, 16, 3); + hws[imx8mq_clk_mon_sel] = imx_clk_hw_mux("pllout_monitor_sel", base + 0x74, 0, 4, pllout_monitor_sels, array_size(pllout_monitor_sels)); + 
hws[imx8mq_clk_mon_clk2_out] = imx_clk_hw_gate("pllout_monitor_clk2", "pllout_monitor_sel", base + 0x74, 4); + diff --git a/include/dt-bindings/clock/imx8mq-clock.h b/include/dt-bindings/clock/imx8mq-clock.h --- a/include/dt-bindings/clock/imx8mq-clock.h +++ b/include/dt-bindings/clock/imx8mq-clock.h -#define imx8mq_clk_end 290 +#define imx8mq_clk_mon_audio_pll1_div 290 +#define imx8mq_clk_mon_audio_pll2_div 291 +#define imx8mq_clk_mon_video_pll1_div 292 +#define imx8mq_clk_mon_gpu_pll_div 293 +#define imx8mq_clk_mon_vpu_pll_div 294 +#define imx8mq_clk_mon_arm_pll_div 295 +#define imx8mq_clk_mon_sys_pll1_div 296 +#define imx8mq_clk_mon_sys_pll2_div 297 +#define imx8mq_clk_mon_sys_pll3_div 298 +#define imx8mq_clk_mon_dram_pll_div 299 +#define imx8mq_clk_mon_video_pll2_div 300 +#define imx8mq_clk_mon_sel 301 +#define imx8mq_clk_mon_clk2_out 302 + +#define imx8mq_clk_end 303
|
Clock
|
75a352bc6611e79227328e39d42086b0eebf24f3
|
lucas stach
|
include
|
dt-bindings
|
clock, imx
|
clk: mstar: mstar/sigmastar mpll driver
|
this adds a basic driver for the mpll block found in mstar/sigmastar armv7 socs.
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
mstar/sigmastar mpll driver
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['mstar']
|
['kconfig', 'maintainers', 'c', 'makefile']
| 6
| 169
| 0
|
--- diff --git a/maintainers b/maintainers --- a/maintainers +++ b/maintainers +f: drivers/clk/mstar/ diff --git a/drivers/clk/kconfig b/drivers/clk/kconfig --- a/drivers/clk/kconfig +++ b/drivers/clk/kconfig +source "drivers/clk/mstar/kconfig" diff --git a/drivers/clk/makefile b/drivers/clk/makefile --- a/drivers/clk/makefile +++ b/drivers/clk/makefile +obj-$(config_arch_mstarv7) += mstar/ diff --git a/drivers/clk/mstar/kconfig b/drivers/clk/mstar/kconfig --- /dev/null +++ b/drivers/clk/mstar/kconfig +# spdx-license-identifier: gpl-2.0-only +config mstar_msc313_mpll + bool + select regmap + select regmap_mmio diff --git a/drivers/clk/mstar/makefile b/drivers/clk/mstar/makefile --- /dev/null +++ b/drivers/clk/mstar/makefile +# spdx-license-identifier: gpl-2.0 +# +# makefile for mstar specific clk +# + +obj-$(config_mstar_msc313_mpll) += clk-msc313-mpll.o diff --git a/drivers/clk/mstar/clk-msc313-mpll.c b/drivers/clk/mstar/clk-msc313-mpll.c --- /dev/null +++ b/drivers/clk/mstar/clk-msc313-mpll.c +// spdx-license-identifier: gpl-2.0 +/* + * mstar msc313 mpll driver + * + * copyright (c) 2020 daniel palmer <daniel@thingy.jp> + */ + +#include <linux/platform_device.h> +#include <linux/of_address.h> +#include <linux/clk-provider.h> +#include <linux/regmap.h> + +#define reg_config1 0x8 +#define reg_config2 0xc + +static const struct regmap_config msc313_mpll_regmap_config = { + .reg_bits = 16, + .val_bits = 16, + .reg_stride = 4, +}; + +static const struct reg_field config1_loop_div_first = reg_field(reg_config1, 8, 9); +static const struct reg_field config1_input_div_first = reg_field(reg_config1, 4, 5); +static const struct reg_field config2_output_div_first = reg_field(reg_config2, 12, 13); +static const struct reg_field config2_loop_div_second = reg_field(reg_config2, 0, 7); + +static const unsigned int output_dividers[] = { + 2, 3, 4, 5, 6, 7, 10 +}; + +#define numoutputs (array_size(output_dividers) + 1) + +struct msc313_mpll { + struct clk_hw clk_hw; + struct 
regmap_field *input_div; + struct regmap_field *loop_div_first; + struct regmap_field *loop_div_second; + struct regmap_field *output_div; + struct clk_hw_onecell_data *clk_data; +}; + +#define to_mpll(_hw) container_of(_hw, struct msc313_mpll, clk_hw) + +static unsigned long msc313_mpll_recalc_rate(struct clk_hw *hw, + unsigned long parent_rate) +{ + struct msc313_mpll *mpll = to_mpll(hw); + unsigned int input_div, output_div, loop_first, loop_second; + unsigned long output_rate; + + regmap_field_read(mpll->input_div, &input_div); + regmap_field_read(mpll->output_div, &output_div); + regmap_field_read(mpll->loop_div_first, &loop_first); + regmap_field_read(mpll->loop_div_second, &loop_second); + + output_rate = parent_rate / (1 << input_div); + output_rate *= (1 << loop_first) * max(loop_second, 1u); + output_rate /= max(output_div, 1u); + + return output_rate; +} + +static const struct clk_ops msc313_mpll_ops = { + .recalc_rate = msc313_mpll_recalc_rate, +}; + +static const struct clk_parent_data mpll_parent = { + .index = 0, +}; + +static int msc313_mpll_probe(struct platform_device *pdev) +{ + void __iomem *base; + struct msc313_mpll *mpll; + struct clk_init_data clk_init = { }; + struct device *dev = &pdev->dev; + struct regmap *regmap; + char *outputname; + struct clk_hw *divhw; + int ret, i; + + mpll = devm_kzalloc(dev, sizeof(*mpll), gfp_kernel); + if (!mpll) + return -enomem; + + base = devm_platform_ioremap_resource(pdev, 0); + if (is_err(base)) + return ptr_err(base); + + regmap = devm_regmap_init_mmio(dev, base, &msc313_mpll_regmap_config); + if (is_err(regmap)) + return ptr_err(regmap); + + mpll->input_div = devm_regmap_field_alloc(dev, regmap, config1_input_div_first); + if (is_err(mpll->input_div)) + return ptr_err(mpll->input_div); + mpll->output_div = devm_regmap_field_alloc(dev, regmap, config2_output_div_first); + if (is_err(mpll->output_div)) + return ptr_err(mpll->output_div); + mpll->loop_div_first = devm_regmap_field_alloc(dev, regmap, 
config1_loop_div_first); + if (is_err(mpll->loop_div_first)) + return ptr_err(mpll->loop_div_first); + mpll->loop_div_second = devm_regmap_field_alloc(dev, regmap, config2_loop_div_second); + if (is_err(mpll->loop_div_second)) + return ptr_err(mpll->loop_div_second); + + mpll->clk_data = devm_kzalloc(dev, struct_size(mpll->clk_data, hws, + array_size(output_dividers)), gfp_kernel); + if (!mpll->clk_data) + return -enomem; + + clk_init.name = dev_name(dev); + clk_init.ops = &msc313_mpll_ops; + clk_init.parent_data = &mpll_parent; + clk_init.num_parents = 1; + mpll->clk_hw.init = &clk_init; + + ret = devm_clk_hw_register(dev, &mpll->clk_hw); + if (ret) + return ret; + + mpll->clk_data->num = numoutputs; + mpll->clk_data->hws[0] = &mpll->clk_hw; + + for (i = 0; i < array_size(output_dividers); i++) { + outputname = devm_kasprintf(dev, gfp_kernel, "%s_div_%d", + clk_init.name, output_dividers[i]); + if (!outputname) + return -enomem; + divhw = devm_clk_hw_register_fixed_factor(dev, outputname, + clk_init.name, 0, 1, output_dividers[i]); + if (is_err(divhw)) + return ptr_err(divhw); + mpll->clk_data->hws[i + 1] = divhw; + } + + platform_set_drvdata(pdev, mpll); + + return devm_of_clk_add_hw_provider(&pdev->dev, of_clk_hw_onecell_get, + mpll->clk_data); +} + +static const struct of_device_id msc313_mpll_of_match[] = { + { .compatible = "mstar,msc313-mpll", }, + {} +}; + +static struct platform_driver msc313_mpll_driver = { + .driver = { + .name = "mstar-msc313-mpll", + .of_match_table = msc313_mpll_of_match, + }, + .probe = msc313_mpll_probe, +}; +builtin_platform_driver(msc313_mpll_driver);
|
Clock
|
bef7a78da71687838a6bb5b316c4f5dfd31582f5
|
daniel palmer
|
drivers
|
clk
|
mstar
|
clk: qcom: add a7 pll support
|
add support for the pll found in qualcomm sdx55 platforms, which is used to provide the clock to the cortex a7 cpu via a mux. this pll can supply a high-frequency clock to the cpu (above 1 ghz), unlike other sources such as gpll0.
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add a7 pll support
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['qcom']
|
['kconfig', 'c', 'makefile']
| 3
| 109
| 0
|
--- diff --git a/drivers/clk/qcom/kconfig b/drivers/clk/qcom/kconfig --- a/drivers/clk/qcom/kconfig +++ b/drivers/clk/qcom/kconfig +config qcom_a7pll + tristate "sdx55 a7 pll" + help + support for the a7 pll on sdx55 devices. it provides the cpu with + frequencies above 1ghz. + say y if you want to support higher cpu frequencies on sdx55 + devices. + diff --git a/drivers/clk/qcom/makefile b/drivers/clk/qcom/makefile --- a/drivers/clk/qcom/makefile +++ b/drivers/clk/qcom/makefile +obj-$(config_qcom_a7pll) += a7-pll.o diff --git a/drivers/clk/qcom/a7-pll.c b/drivers/clk/qcom/a7-pll.c --- /dev/null +++ b/drivers/clk/qcom/a7-pll.c +// spdx-license-identifier: gpl-2.0 +/* + * qualcomm a7 pll driver + * + * copyright (c) 2020, linaro limited + * author: manivannan sadhasivam <manivannan.sadhasivam@linaro.org> + */ + +#include <linux/clk-provider.h> +#include <linux/module.h> +#include <linux/platform_device.h> +#include <linux/regmap.h> + +#include "clk-alpha-pll.h" + +#define lucid_pll_off_l_val 0x04 + +static const struct pll_vco lucid_vco[] = { + { 249600000, 2000000000, 0 }, +}; + +static struct clk_alpha_pll a7pll = { + .offset = 0x100, + .vco_table = lucid_vco, + .num_vco = array_size(lucid_vco), + .regs = clk_alpha_pll_regs[clk_alpha_pll_type_lucid], + .clkr = { + .hw.init = &(struct clk_init_data){ + .name = "a7pll", + .parent_data = &(const struct clk_parent_data){ + .fw_name = "bi_tcxo", + }, + .num_parents = 1, + .ops = &clk_alpha_pll_lucid_ops, + }, + }, +}; + +static const struct alpha_pll_config a7pll_config = { + .l = 0x39, + .config_ctl_val = 0x20485699, + .config_ctl_hi_val = 0x2261, + .config_ctl_hi1_val = 0x029a699c, + .user_ctl_val = 0x1, + .user_ctl_hi_val = 0x805, +}; + +static const struct regmap_config a7pll_regmap_config = { + .reg_bits = 32, + .reg_stride = 4, + .val_bits = 32, + .max_register = 0x1000, + .fast_io = true, +}; + +static int qcom_a7pll_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct regmap 
*regmap; + void __iomem *base; + u32 l_val; + int ret; + + base = devm_platform_ioremap_resource(pdev, 0); + if (is_err(base)) + return ptr_err(base); + + regmap = devm_regmap_init_mmio(dev, base, &a7pll_regmap_config); + if (is_err(regmap)) + return ptr_err(regmap); + + /* configure pll only if the l_val is zero */ + regmap_read(regmap, a7pll.offset + lucid_pll_off_l_val, &l_val); + if (!l_val) + clk_lucid_pll_configure(&a7pll, regmap, &a7pll_config); + + ret = devm_clk_register_regmap(dev, &a7pll.clkr); + if (ret) + return ret; + + return devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get, + &a7pll.clkr.hw); +} + +static const struct of_device_id qcom_a7pll_match_table[] = { + { .compatible = "qcom,sdx55-a7pll" }, + { } +}; + +static struct platform_driver qcom_a7pll_driver = { + .probe = qcom_a7pll_probe, + .driver = { + .name = "qcom-a7pll", + .of_match_table = qcom_a7pll_match_table, + }, +}; +module_platform_driver(qcom_a7pll_driver); + +module_description("qualcomm a7 pll driver"); +module_license("gpl v2");
|
Clock
|
5a5223ffd7ef721b59be38e2ce83e0a73dbb8942
|
manivannan sadhasivam
|
drivers
|
clk
|
qcom
|
clk: qcom: add global clock controller (gcc) driver for sc7280
|
add support for the global clock controller found on sc7280 based devices. this should allow most non-multimedia device drivers to probe and control their clocks.
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add global clock controller (gcc) driver for sc7280
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['qcom']
|
['kconfig', 'c', 'makefile']
| 3
| 3,613
| 0
|
--- diff --git a/drivers/clk/qcom/kconfig b/drivers/clk/qcom/kconfig --- a/drivers/clk/qcom/kconfig +++ b/drivers/clk/qcom/kconfig +config sc_gcc_7280 + tristate "sc7280 global clock controller" + select qcom_gdsc + depends on common_clk_qcom + help + support for the global clock controller on sc7280 devices. + say y if you want to use peripheral devices such as uart, spi, + i2c, usb, ufs, sdcc, pcie etc. + diff --git a/drivers/clk/qcom/makefile b/drivers/clk/qcom/makefile --- a/drivers/clk/qcom/makefile +++ b/drivers/clk/qcom/makefile +obj-$(config_sc_gcc_7280) += gcc-sc7280.o diff --git a/drivers/clk/qcom/gcc-sc7280.c b/drivers/clk/qcom/gcc-sc7280.c --- /dev/null +++ b/drivers/clk/qcom/gcc-sc7280.c +// spdx-license-identifier: gpl-2.0-only +/* + * copyright (c) 2020-2021, the linux foundation. all rights reserved. + */ + +#include <linux/clk-provider.h> +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/of_device.h> +#include <linux/of.h> +#include <linux/regmap.h> + +#include <dt-bindings/clock/qcom,gcc-sc7280.h> + +#include "clk-alpha-pll.h" +#include "clk-branch.h" +#include "clk-rcg.h" +#include "clk-regmap-divider.h" +#include "clk-regmap-mux.h" +#include "common.h" +#include "gdsc.h" +#include "reset.h" + +enum { + p_bi_tcxo, + p_gcc_gpll0_out_even, + p_gcc_gpll0_out_main, + p_gcc_gpll0_out_odd, + p_gcc_gpll10_out_main, + p_gcc_gpll4_out_main, + p_gcc_gpll9_out_main, + p_pcie_0_pipe_clk, + p_pcie_1_pipe_clk, + p_sleep_clk, + p_ufs_phy_rx_symbol_0_clk, + p_ufs_phy_rx_symbol_1_clk, + p_ufs_phy_tx_symbol_0_clk, + p_usb3_phy_wrapper_gcc_usb30_pipe_clk, + p_gcc_mss_gpll0_main_div_clk, +}; + +static struct clk_alpha_pll gcc_gpll0 = { + .offset = 0x0, + .regs = clk_alpha_pll_regs[clk_alpha_pll_type_lucid], + .clkr = { + .enable_reg = 0x52010, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_gpll0", + .parent_data = &(const struct clk_parent_data){ + .fw_name = "bi_tcxo", + }, + .num_parents = 1, + .ops = 
&clk_alpha_pll_fixed_lucid_ops, + }, + }, +}; + +static const struct clk_div_table post_div_table_gcc_gpll0_out_even[] = { + { 0x1, 2 }, + { } +}; + +static struct clk_alpha_pll_postdiv gcc_gpll0_out_even = { + .offset = 0x0, + .post_div_shift = 8, + .post_div_table = post_div_table_gcc_gpll0_out_even, + .num_post_div = array_size(post_div_table_gcc_gpll0_out_even), + .width = 4, + .regs = clk_alpha_pll_regs[clk_alpha_pll_type_lucid], + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_gpll0_out_even", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_gpll0.clkr.hw, + }, + .num_parents = 1, + .ops = &clk_alpha_pll_postdiv_lucid_ops, + }, +}; + +static const struct clk_div_table post_div_table_gcc_gpll0_out_odd[] = { + { 0x3, 3 }, + { } +}; + +static struct clk_alpha_pll_postdiv gcc_gpll0_out_odd = { + .offset = 0x0, + .post_div_shift = 12, + .post_div_table = post_div_table_gcc_gpll0_out_odd, + .num_post_div = array_size(post_div_table_gcc_gpll0_out_odd), + .width = 4, + .regs = clk_alpha_pll_regs[clk_alpha_pll_type_lucid], + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_gpll0_out_odd", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_gpll0.clkr.hw, + }, + .num_parents = 1, + .ops = &clk_alpha_pll_postdiv_lucid_ops, + }, +}; + +static struct clk_alpha_pll gcc_gpll1 = { + .offset = 0x1000, + .regs = clk_alpha_pll_regs[clk_alpha_pll_type_lucid], + .clkr = { + .enable_reg = 0x52010, + .enable_mask = bit(1), + .hw.init = &(struct clk_init_data){ + .name = "gcc_gpll1", + .parent_data = &(const struct clk_parent_data){ + .fw_name = "bi_tcxo", + }, + .num_parents = 1, + .ops = &clk_alpha_pll_fixed_lucid_ops, + }, + }, +}; + +static struct clk_alpha_pll gcc_gpll10 = { + .offset = 0x1e000, + .regs = clk_alpha_pll_regs[clk_alpha_pll_type_lucid], + .clkr = { + .enable_reg = 0x52010, + .enable_mask = bit(9), + .hw.init = &(struct clk_init_data){ + .name = "gcc_gpll10", + .parent_data = &(const struct clk_parent_data){ + .fw_name = 
"bi_tcxo", + }, + .num_parents = 1, + .ops = &clk_alpha_pll_fixed_lucid_ops, + }, + }, +}; + +static struct clk_alpha_pll gcc_gpll4 = { + .offset = 0x76000, + .regs = clk_alpha_pll_regs[clk_alpha_pll_type_lucid], + .clkr = { + .enable_reg = 0x52010, + .enable_mask = bit(4), + .hw.init = &(struct clk_init_data){ + .name = "gcc_gpll4", + .parent_data = &(const struct clk_parent_data){ + .fw_name = "bi_tcxo", + }, + .num_parents = 1, + .ops = &clk_alpha_pll_fixed_lucid_ops, + }, + }, +}; + +static struct clk_alpha_pll gcc_gpll9 = { + .offset = 0x1c000, + .regs = clk_alpha_pll_regs[clk_alpha_pll_type_lucid], + .clkr = { + .enable_reg = 0x52010, + .enable_mask = bit(8), + .hw.init = &(struct clk_init_data){ + .name = "gcc_gpll9", + .parent_data = &(const struct clk_parent_data){ + .fw_name = "bi_tcxo", + }, + .num_parents = 1, + .ops = &clk_alpha_pll_fixed_lucid_ops, + }, + }, +}; + +static struct clk_branch gcc_mss_gpll0_main_div_clk_src = { + .halt_check = branch_halt_delay, + .clkr = { + .enable_reg = 0x52000, + .enable_mask = bit(17), + .hw.init = &(struct clk_init_data){ + .name = "gcc_mss_gpll0_main_div_clk_src", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_gpll0_out_even.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static const struct parent_map gcc_parent_map_0[] = { + { p_bi_tcxo, 0 }, + { p_gcc_gpll0_out_main, 1 }, + { p_gcc_gpll0_out_even, 6 }, +}; + +static const struct clk_parent_data gcc_parent_data_0[] = { + { .fw_name = "bi_tcxo" }, + { .hw = &gcc_gpll0.clkr.hw }, + { .hw = &gcc_gpll0_out_even.clkr.hw }, +}; + +static const struct clk_parent_data gcc_parent_data_0_ao[] = { + { .fw_name = "bi_tcxo_ao" }, + { .hw = &gcc_gpll0.clkr.hw }, + { .hw = &gcc_gpll0_out_even.clkr.hw }, +}; + +static const struct parent_map gcc_parent_map_1[] = { + { p_bi_tcxo, 0 }, + { p_gcc_gpll0_out_main, 1 }, + { p_gcc_gpll0_out_odd, 3 }, + { p_gcc_gpll0_out_even, 6 }, +}; + +static const struct 
clk_parent_data gcc_parent_data_1[] = { + { .fw_name = "bi_tcxo" }, + { .hw = &gcc_gpll0.clkr.hw }, + { .hw = &gcc_gpll0_out_odd.clkr.hw }, + { .hw = &gcc_gpll0_out_even.clkr.hw }, +}; + +static const struct parent_map gcc_parent_map_2[] = { + { p_bi_tcxo, 0 }, + { p_sleep_clk, 5 }, +}; + +static const struct clk_parent_data gcc_parent_data_2[] = { + { .fw_name = "bi_tcxo" }, + { .fw_name = "sleep_clk" }, +}; + +static const struct parent_map gcc_parent_map_3[] = { + { p_bi_tcxo, 0 }, +}; + +static const struct clk_parent_data gcc_parent_data_3[] = { + { .fw_name = "bi_tcxo" }, +}; + +static const struct parent_map gcc_parent_map_4[] = { + { p_bi_tcxo, 0 }, + { p_gcc_gpll0_out_main, 1 }, + { p_gcc_gpll0_out_odd, 3 }, + { p_sleep_clk, 5 }, + { p_gcc_gpll0_out_even, 6 }, +}; + +static const struct clk_parent_data gcc_parent_data_4[] = { + { .fw_name = "bi_tcxo" }, + { .hw = &gcc_gpll0.clkr.hw }, + { .hw = &gcc_gpll0_out_odd.clkr.hw }, + { .fw_name = "sleep_clk" }, + { .hw = &gcc_gpll0_out_even.clkr.hw }, +}; + +static const struct parent_map gcc_parent_map_5[] = { + { p_bi_tcxo, 0 }, + { p_gcc_gpll0_out_even, 6 }, +}; + +static const struct clk_parent_data gcc_parent_data_5[] = { + { .fw_name = "bi_tcxo" }, + { .hw = &gcc_gpll0_out_even.clkr.hw }, +}; + +static const struct parent_map gcc_parent_map_6[] = { + { p_pcie_0_pipe_clk, 0 }, + { p_bi_tcxo, 2 }, +}; + +static const struct clk_parent_data gcc_parent_data_6[] = { + { .fw_name = "pcie_0_pipe_clk", .name = "pcie_0_pipe_clk" }, + { .fw_name = "bi_tcxo" }, +}; + +static const struct parent_map gcc_parent_map_7[] = { + { p_pcie_1_pipe_clk, 0 }, + { p_bi_tcxo, 2 }, +}; + +static const struct clk_parent_data gcc_parent_data_7[] = { + { .fw_name = "pcie_1_pipe_clk", .name = "pcie_1_pipe_clk" }, + { .fw_name = "bi_tcxo" }, +}; + +static const struct parent_map gcc_parent_map_8[] = { + { p_bi_tcxo, 0 }, + { p_gcc_gpll0_out_main, 1 }, + { p_gcc_gpll0_out_odd, 3 }, + { p_gcc_gpll10_out_main, 5 }, + { p_gcc_gpll0_out_even, 
6 }, +}; + +static const struct clk_parent_data gcc_parent_data_8[] = { + { .fw_name = "bi_tcxo" }, + { .hw = &gcc_gpll0.clkr.hw }, + { .hw = &gcc_gpll0_out_odd.clkr.hw }, + { .hw = &gcc_gpll10.clkr.hw }, + { .hw = &gcc_gpll0_out_even.clkr.hw }, +}; + +static const struct parent_map gcc_parent_map_9[] = { + { p_bi_tcxo, 0 }, + { p_gcc_gpll0_out_main, 1 }, + { p_gcc_gpll9_out_main, 2 }, + { p_gcc_gpll0_out_odd, 3 }, + { p_gcc_gpll4_out_main, 5 }, + { p_gcc_gpll0_out_even, 6 }, +}; + +static const struct clk_parent_data gcc_parent_data_9[] = { + { .fw_name = "bi_tcxo" }, + { .hw = &gcc_gpll0.clkr.hw }, + { .hw = &gcc_gpll9.clkr.hw }, + { .hw = &gcc_gpll0_out_odd.clkr.hw }, + { .hw = &gcc_gpll4.clkr.hw }, + { .hw = &gcc_gpll0_out_even.clkr.hw }, +}; + +static const struct parent_map gcc_parent_map_10[] = { + { p_ufs_phy_rx_symbol_0_clk, 0 }, + { p_bi_tcxo, 2 }, +}; + +static const struct clk_parent_data gcc_parent_data_10[] = { + { .fw_name = "ufs_phy_rx_symbol_0_clk" }, + { .fw_name = "bi_tcxo" }, +}; + +static const struct parent_map gcc_parent_map_11[] = { + { p_ufs_phy_rx_symbol_1_clk, 0 }, + { p_bi_tcxo, 2 }, +}; + +static const struct clk_parent_data gcc_parent_data_11[] = { + { .fw_name = "ufs_phy_rx_symbol_1_clk" }, + { .fw_name = "bi_tcxo" }, +}; + +static const struct parent_map gcc_parent_map_12[] = { + { p_ufs_phy_tx_symbol_0_clk, 0 }, + { p_bi_tcxo, 2 }, +}; + +static const struct clk_parent_data gcc_parent_data_12[] = { + { .fw_name = "ufs_phy_tx_symbol_0_clk" }, + { .fw_name = "bi_tcxo" }, +}; + +static const struct parent_map gcc_parent_map_13[] = { + { p_usb3_phy_wrapper_gcc_usb30_pipe_clk, 0 }, + { p_bi_tcxo, 2 }, +}; + +static const struct clk_parent_data gcc_parent_data_13[] = { + { .fw_name = "usb3_phy_wrapper_gcc_usb30_pipe_clk" }, + { .fw_name = "bi_tcxo" }, +}; + +static const struct parent_map gcc_parent_map_14[] = { + { p_usb3_phy_wrapper_gcc_usb30_pipe_clk, 0 }, + { p_bi_tcxo, 2 }, +}; + +static const struct clk_parent_data 
gcc_parent_data_14[] = { + { .fw_name = "usb3_phy_wrapper_gcc_usb30_pipe_clk" }, + { .fw_name = "bi_tcxo" }, +}; + +static const struct parent_map gcc_parent_map_15[] = { + { p_bi_tcxo, 0 }, + { p_gcc_mss_gpll0_main_div_clk, 1 }, +}; + +static const struct clk_parent_data gcc_parent_data_15[] = { + { .fw_name = "bi_tcxo" }, + { .hw = &gcc_mss_gpll0_main_div_clk_src.clkr.hw }, +}; + +static struct clk_regmap_mux gcc_pcie_0_pipe_clk_src = { + .reg = 0x6b054, + .shift = 0, + .width = 2, + .parent_map = gcc_parent_map_6, + .clkr = { + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_0_pipe_clk_src", + .parent_data = gcc_parent_data_6, + .num_parents = array_size(gcc_parent_data_6), + .ops = &clk_regmap_mux_closest_ops, + }, + }, +}; + +static struct clk_regmap_mux gcc_pcie_1_pipe_clk_src = { + .reg = 0x8d054, + .shift = 0, + .width = 2, + .parent_map = gcc_parent_map_7, + .clkr = { + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_1_pipe_clk_src", + .parent_data = gcc_parent_data_7, + .num_parents = array_size(gcc_parent_data_7), + .ops = &clk_regmap_mux_closest_ops, + }, + }, +}; + +static struct clk_regmap_mux gcc_ufs_phy_rx_symbol_0_clk_src = { + .reg = 0x77058, + .shift = 0, + .width = 2, + .parent_map = gcc_parent_map_10, + .clkr = { + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_phy_rx_symbol_0_clk_src", + .parent_data = gcc_parent_data_10, + .num_parents = array_size(gcc_parent_data_10), + .ops = &clk_regmap_mux_closest_ops, + }, + }, +}; + +static struct clk_regmap_mux gcc_ufs_phy_rx_symbol_1_clk_src = { + .reg = 0x770c8, + .shift = 0, + .width = 2, + .parent_map = gcc_parent_map_11, + .clkr = { + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_phy_rx_symbol_1_clk_src", + .parent_data = gcc_parent_data_11, + .num_parents = array_size(gcc_parent_data_11), + .ops = &clk_regmap_mux_closest_ops, + }, + }, +}; + +static struct clk_regmap_mux gcc_ufs_phy_tx_symbol_0_clk_src = { + .reg = 0x77048, + .shift = 0, + .width = 2, + 
.parent_map = gcc_parent_map_12, + .clkr = { + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_phy_tx_symbol_0_clk_src", + .parent_data = gcc_parent_data_12, + .num_parents = array_size(gcc_parent_data_12), + .ops = &clk_regmap_mux_closest_ops, + }, + }, +}; + +static struct clk_regmap_mux gcc_usb3_prim_phy_pipe_clk_src = { + .reg = 0xf060, + .shift = 0, + .width = 2, + .parent_map = gcc_parent_map_13, + .clkr = { + .hw.init = &(struct clk_init_data){ + .name = "gcc_usb3_prim_phy_pipe_clk_src", + .parent_data = gcc_parent_data_13, + .num_parents = array_size(gcc_parent_data_13), + .ops = &clk_regmap_mux_closest_ops, + }, + }, +}; + +static struct clk_regmap_mux gcc_usb3_sec_phy_pipe_clk_src = { + .reg = 0x9e060, + .shift = 0, + .width = 2, + .parent_map = gcc_parent_map_14, + .clkr = { + .hw.init = &(struct clk_init_data){ + .name = "gcc_usb3_sec_phy_pipe_clk_src", + .parent_data = gcc_parent_data_14, + .num_parents = array_size(gcc_parent_data_14), + .ops = &clk_regmap_mux_closest_ops, + }, + }, +}; +static const struct freq_tbl ftbl_gcc_cpuss_ahb_clk_src[] = { + f(19200000, p_bi_tcxo, 1, 0, 0), + { } +}; + +static struct clk_rcg2 gcc_cpuss_ahb_clk_src = { + .cmd_rcgr = 0x4800c, + .mnd_width = 0, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_cpuss_ahb_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_cpuss_ahb_clk_src", + .parent_data = gcc_parent_data_0_ao, + .num_parents = array_size(gcc_parent_data_0_ao), + .ops = &clk_rcg2_ops, + }, +}; + +static const struct freq_tbl ftbl_gcc_gp1_clk_src[] = { + f(50000000, p_gcc_gpll0_out_even, 6, 0, 0), + f(100000000, p_gcc_gpll0_out_even, 3, 0, 0), + f(200000000, p_gcc_gpll0_out_odd, 1, 0, 0), + { } +}; + +static struct clk_rcg2 gcc_gp1_clk_src = { + .cmd_rcgr = 0x64004, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_4, + .freq_tbl = ftbl_gcc_gp1_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_gp1_clk_src", + .parent_data = 
gcc_parent_data_4, + .num_parents = array_size(gcc_parent_data_4), + .ops = &clk_rcg2_ops, + }, +}; + +static struct clk_rcg2 gcc_gp2_clk_src = { + .cmd_rcgr = 0x65004, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_4, + .freq_tbl = ftbl_gcc_gp1_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_gp2_clk_src", + .parent_data = gcc_parent_data_4, + .num_parents = array_size(gcc_parent_data_4), + .ops = &clk_rcg2_ops, + }, +}; + +static struct clk_rcg2 gcc_gp3_clk_src = { + .cmd_rcgr = 0x66004, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_4, + .freq_tbl = ftbl_gcc_gp1_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_gp3_clk_src", + .parent_data = gcc_parent_data_4, + .num_parents = array_size(gcc_parent_data_4), + .ops = &clk_rcg2_ops, + }, +}; + +static const struct freq_tbl ftbl_gcc_pcie_0_aux_clk_src[] = { + f(9600000, p_bi_tcxo, 2, 0, 0), + f(19200000, p_bi_tcxo, 1, 0, 0), + { } +}; + +static struct clk_rcg2 gcc_pcie_0_aux_clk_src = { + .cmd_rcgr = 0x6b058, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_2, + .freq_tbl = ftbl_gcc_pcie_0_aux_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_0_aux_clk_src", + .parent_data = gcc_parent_data_2, + .num_parents = array_size(gcc_parent_data_2), + .ops = &clk_rcg2_ops, + }, +}; + +static const struct freq_tbl ftbl_gcc_pcie_0_phy_rchng_clk_src[] = { + f(19200000, p_bi_tcxo, 1, 0, 0), + f(100000000, p_gcc_gpll0_out_even, 3, 0, 0), + { } +}; + +static struct clk_rcg2 gcc_pcie_0_phy_rchng_clk_src = { + .cmd_rcgr = 0x6b03c, + .mnd_width = 0, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_pcie_0_phy_rchng_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_0_phy_rchng_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = array_size(gcc_parent_data_0), + .ops = &clk_rcg2_ops, + }, +}; + +static struct clk_rcg2 gcc_pcie_1_aux_clk_src = { + .cmd_rcgr = 
0x8d058, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_2, + .freq_tbl = ftbl_gcc_pcie_0_aux_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_1_aux_clk_src", + .parent_data = gcc_parent_data_2, + .num_parents = array_size(gcc_parent_data_2), + .ops = &clk_rcg2_ops, + }, +}; + +static struct clk_rcg2 gcc_pcie_1_phy_rchng_clk_src = { + .cmd_rcgr = 0x8d03c, + .mnd_width = 0, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_pcie_0_phy_rchng_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_1_phy_rchng_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = array_size(gcc_parent_data_0), + .flags = clk_set_rate_parent, + .ops = &clk_rcg2_ops, + }, +}; + +static const struct freq_tbl ftbl_gcc_pdm2_clk_src[] = { + f(60000000, p_gcc_gpll0_out_even, 5, 0, 0), + { } +}; + +static struct clk_rcg2 gcc_pdm2_clk_src = { + .cmd_rcgr = 0x33010, + .mnd_width = 0, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_pdm2_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_pdm2_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = array_size(gcc_parent_data_0), + .flags = clk_set_rate_parent, + .ops = &clk_rcg2_ops, + }, +}; + +static const struct freq_tbl ftbl_gcc_qspi_core_clk_src[] = { + f(100000000, p_gcc_gpll0_out_main, 6, 0, 0), + f(150000000, p_gcc_gpll0_out_main, 4, 0, 0), + f(200000000, p_gcc_gpll0_out_main, 3, 0, 0), + f(300000000, p_gcc_gpll0_out_main, 2, 0, 0), + { } +}; + +static struct clk_rcg2 gcc_qspi_core_clk_src = { + .cmd_rcgr = 0x4b00c, + .mnd_width = 0, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_qspi_core_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_qspi_core_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = array_size(gcc_parent_data_0), + .ops = &clk_rcg2_floor_ops, + }, +}; + +static const struct freq_tbl ftbl_gcc_qupv3_wrap0_s0_clk_src[] = { + 
f(7372800, p_gcc_gpll0_out_even, 1, 384, 15625), + f(14745600, p_gcc_gpll0_out_even, 1, 768, 15625), + f(19200000, p_bi_tcxo, 1, 0, 0), + f(29491200, p_gcc_gpll0_out_even, 1, 1536, 15625), + f(32000000, p_gcc_gpll0_out_even, 1, 8, 75), + f(48000000, p_gcc_gpll0_out_even, 1, 4, 25), + f(64000000, p_gcc_gpll0_out_even, 1, 16, 75), + f(75000000, p_gcc_gpll0_out_even, 4, 0, 0), + f(80000000, p_gcc_gpll0_out_even, 1, 4, 15), + f(96000000, p_gcc_gpll0_out_even, 1, 8, 25), + f(100000000, p_gcc_gpll0_out_main, 6, 0, 0), + f(102400000, p_gcc_gpll0_out_even, 1, 128, 375), + f(112000000, p_gcc_gpll0_out_even, 1, 28, 75), + f(117964800, p_gcc_gpll0_out_even, 1, 6144, 15625), + f(120000000, p_gcc_gpll0_out_even, 2.5, 0, 0), + { } +}; + +static struct clk_init_data gcc_qupv3_wrap0_s0_clk_src_init = { + .name = "gcc_qupv3_wrap0_s0_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = array_size(gcc_parent_data_0), + .ops = &clk_rcg2_ops, +}; + +static struct clk_rcg2 gcc_qupv3_wrap0_s0_clk_src = { + .cmd_rcgr = 0x17010, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src, + .clkr.hw.init = &gcc_qupv3_wrap0_s0_clk_src_init, +}; + +static struct clk_init_data gcc_qupv3_wrap0_s1_clk_src_init = { + .name = "gcc_qupv3_wrap0_s1_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = array_size(gcc_parent_data_0), + .ops = &clk_rcg2_ops, +}; + +static struct clk_rcg2 gcc_qupv3_wrap0_s1_clk_src = { + .cmd_rcgr = 0x17140, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src, + .clkr.hw.init = &gcc_qupv3_wrap0_s1_clk_src_init, +}; + +static const struct freq_tbl ftbl_gcc_qupv3_wrap0_s2_clk_src[] = { + f(7372800, p_gcc_gpll0_out_even, 1, 384, 15625), + f(14745600, p_gcc_gpll0_out_even, 1, 768, 15625), + f(19200000, p_bi_tcxo, 1, 0, 0), + f(29491200, p_gcc_gpll0_out_even, 1, 1536, 15625), + f(32000000, p_gcc_gpll0_out_even, 1, 8, 75), + f(48000000, 
p_gcc_gpll0_out_even, 1, 4, 25), + f(64000000, p_gcc_gpll0_out_even, 1, 16, 75), + f(75000000, p_gcc_gpll0_out_even, 4, 0, 0), + f(80000000, p_gcc_gpll0_out_even, 1, 4, 15), + f(96000000, p_gcc_gpll0_out_even, 1, 8, 25), + f(100000000, p_gcc_gpll0_out_main, 6, 0, 0), + { } +}; + +static struct clk_init_data gcc_qupv3_wrap0_s2_clk_src_init = { + .name = "gcc_qupv3_wrap0_s2_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = array_size(gcc_parent_data_0), + .ops = &clk_rcg2_ops, +}; + +static struct clk_rcg2 gcc_qupv3_wrap0_s2_clk_src = { + .cmd_rcgr = 0x17270, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_qupv3_wrap0_s2_clk_src, + .clkr.hw.init = &gcc_qupv3_wrap0_s2_clk_src_init, +}; + +static struct clk_init_data gcc_qupv3_wrap0_s3_clk_src_init = { + .name = "gcc_qupv3_wrap0_s3_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = array_size(gcc_parent_data_0), + .ops = &clk_rcg2_ops, +}; + +static struct clk_rcg2 gcc_qupv3_wrap0_s3_clk_src = { + .cmd_rcgr = 0x173a0, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_qupv3_wrap0_s2_clk_src, + .clkr.hw.init = &gcc_qupv3_wrap0_s3_clk_src_init, +}; + +static struct clk_init_data gcc_qupv3_wrap0_s4_clk_src_init = { + .name = "gcc_qupv3_wrap0_s4_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = array_size(gcc_parent_data_0), + .ops = &clk_rcg2_ops, +}; + +static struct clk_rcg2 gcc_qupv3_wrap0_s4_clk_src = { + .cmd_rcgr = 0x174d0, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_qupv3_wrap0_s2_clk_src, + .clkr.hw.init = &gcc_qupv3_wrap0_s4_clk_src_init, +}; + +static struct clk_init_data gcc_qupv3_wrap0_s5_clk_src_init = { + .name = "gcc_qupv3_wrap0_s5_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = array_size(gcc_parent_data_0), + .ops = &clk_rcg2_ops, +}; + +static struct clk_rcg2 gcc_qupv3_wrap0_s5_clk_src = { + .cmd_rcgr = 0x17600, + 
.mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_qupv3_wrap0_s2_clk_src, + .clkr.hw.init = &gcc_qupv3_wrap0_s5_clk_src_init, +}; + +static struct clk_init_data gcc_qupv3_wrap0_s6_clk_src_init = { + .name = "gcc_qupv3_wrap0_s6_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = array_size(gcc_parent_data_0), + .ops = &clk_rcg2_ops, +}; + +static struct clk_rcg2 gcc_qupv3_wrap0_s6_clk_src = { + .cmd_rcgr = 0x17730, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_qupv3_wrap0_s2_clk_src, + .clkr.hw.init = &gcc_qupv3_wrap0_s6_clk_src_init, +}; + +static struct clk_init_data gcc_qupv3_wrap0_s7_clk_src_init = { + .name = "gcc_qupv3_wrap0_s7_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = array_size(gcc_parent_data_0), + .ops = &clk_rcg2_ops, +}; + +static struct clk_rcg2 gcc_qupv3_wrap0_s7_clk_src = { + .cmd_rcgr = 0x17860, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_qupv3_wrap0_s2_clk_src, + .clkr.hw.init = &gcc_qupv3_wrap0_s7_clk_src_init, +}; + +static struct clk_init_data gcc_qupv3_wrap1_s0_clk_src_init = { + .name = "gcc_qupv3_wrap1_s0_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = array_size(gcc_parent_data_0), + .ops = &clk_rcg2_ops, +}; + +static struct clk_rcg2 gcc_qupv3_wrap1_s0_clk_src = { + .cmd_rcgr = 0x18010, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src, + .clkr.hw.init = &gcc_qupv3_wrap1_s0_clk_src_init, +}; + +static struct clk_init_data gcc_qupv3_wrap1_s1_clk_src_init = { + .name = "gcc_qupv3_wrap1_s1_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = array_size(gcc_parent_data_0), + .ops = &clk_rcg2_ops, +}; + +static struct clk_rcg2 gcc_qupv3_wrap1_s1_clk_src = { + .cmd_rcgr = 0x18140, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = 
ftbl_gcc_qupv3_wrap0_s0_clk_src, + .clkr.hw.init = &gcc_qupv3_wrap1_s1_clk_src_init, +}; + +static struct clk_init_data gcc_qupv3_wrap1_s2_clk_src_init = { + .name = "gcc_qupv3_wrap1_s2_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = array_size(gcc_parent_data_0), + .ops = &clk_rcg2_ops, +}; + +static struct clk_rcg2 gcc_qupv3_wrap1_s2_clk_src = { + .cmd_rcgr = 0x18270, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_qupv3_wrap0_s2_clk_src, + .clkr.hw.init = &gcc_qupv3_wrap1_s2_clk_src_init, +}; + +static struct clk_init_data gcc_qupv3_wrap1_s3_clk_src_init = { + .name = "gcc_qupv3_wrap1_s3_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = array_size(gcc_parent_data_0), + .ops = &clk_rcg2_ops, +}; + +static struct clk_rcg2 gcc_qupv3_wrap1_s3_clk_src = { + .cmd_rcgr = 0x183a0, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_qupv3_wrap0_s2_clk_src, + .clkr.hw.init = &gcc_qupv3_wrap1_s3_clk_src_init, +}; + +static struct clk_init_data gcc_qupv3_wrap1_s4_clk_src_init = { + .name = "gcc_qupv3_wrap1_s4_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = array_size(gcc_parent_data_0), + .ops = &clk_rcg2_ops, +}; + +static struct clk_rcg2 gcc_qupv3_wrap1_s4_clk_src = { + .cmd_rcgr = 0x184d0, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_qupv3_wrap0_s2_clk_src, + .clkr.hw.init = &gcc_qupv3_wrap1_s4_clk_src_init, +}; + +static struct clk_init_data gcc_qupv3_wrap1_s5_clk_src_init = { + .name = "gcc_qupv3_wrap1_s5_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = array_size(gcc_parent_data_0), + .ops = &clk_rcg2_ops, +}; + +static struct clk_rcg2 gcc_qupv3_wrap1_s5_clk_src = { + .cmd_rcgr = 0x18600, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_qupv3_wrap0_s2_clk_src, + .clkr.hw.init = &gcc_qupv3_wrap1_s5_clk_src_init, +}; + +static struct 
clk_init_data gcc_qupv3_wrap1_s6_clk_src_init = { + .name = "gcc_qupv3_wrap1_s6_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = array_size(gcc_parent_data_0), + .ops = &clk_rcg2_ops, +}; + +static struct clk_rcg2 gcc_qupv3_wrap1_s6_clk_src = { + .cmd_rcgr = 0x18730, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_qupv3_wrap0_s2_clk_src, + .clkr.hw.init = &gcc_qupv3_wrap1_s6_clk_src_init, +}; + +static struct clk_init_data gcc_qupv3_wrap1_s7_clk_src_init = { + .name = "gcc_qupv3_wrap1_s7_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = array_size(gcc_parent_data_0), + .ops = &clk_rcg2_ops, +}; + +static struct clk_rcg2 gcc_qupv3_wrap1_s7_clk_src = { + .cmd_rcgr = 0x18860, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_qupv3_wrap0_s2_clk_src, + .clkr.hw.init = &gcc_qupv3_wrap1_s7_clk_src_init, +}; + +static const struct freq_tbl ftbl_gcc_sdcc1_apps_clk_src[] = { + f(144000, p_bi_tcxo, 16, 3, 25), + f(400000, p_bi_tcxo, 12, 1, 4), + f(19200000, p_bi_tcxo, 1, 0, 0), + f(20000000, p_gcc_gpll0_out_even, 5, 1, 3), + f(25000000, p_gcc_gpll0_out_even, 12, 0, 0), + f(50000000, p_gcc_gpll0_out_even, 6, 0, 0), + f(100000000, p_gcc_gpll0_out_even, 3, 0, 0), + f(192000000, p_gcc_gpll10_out_main, 2, 0, 0), + f(384000000, p_gcc_gpll10_out_main, 1, 0, 0), + { } +}; + +static struct clk_rcg2 gcc_sdcc1_apps_clk_src = { + .cmd_rcgr = 0x7500c, + .mnd_width = 8, + .hid_width = 5, + .parent_map = gcc_parent_map_8, + .freq_tbl = ftbl_gcc_sdcc1_apps_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_sdcc1_apps_clk_src", + .parent_data = gcc_parent_data_8, + .num_parents = array_size(gcc_parent_data_8), + .ops = &clk_rcg2_floor_ops, + }, +}; + +static const struct freq_tbl ftbl_gcc_sdcc1_ice_core_clk_src[] = { + f(100000000, p_gcc_gpll0_out_even, 3, 0, 0), + f(150000000, p_gcc_gpll0_out_even, 2, 0, 0), + f(300000000, p_gcc_gpll0_out_even, 1, 0, 0), + { } 
+}; + +static struct clk_rcg2 gcc_sdcc1_ice_core_clk_src = { + .cmd_rcgr = 0x7502c, + .mnd_width = 0, + .hid_width = 5, + .parent_map = gcc_parent_map_1, + .freq_tbl = ftbl_gcc_sdcc1_ice_core_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_sdcc1_ice_core_clk_src", + .parent_data = gcc_parent_data_1, + .num_parents = array_size(gcc_parent_data_1), + .ops = &clk_rcg2_floor_ops, + }, +}; + +static const struct freq_tbl ftbl_gcc_sdcc2_apps_clk_src[] = { + f(400000, p_bi_tcxo, 12, 1, 4), + f(19200000, p_bi_tcxo, 1, 0, 0), + f(25000000, p_gcc_gpll0_out_even, 12, 0, 0), + f(50000000, p_gcc_gpll0_out_even, 6, 0, 0), + f(100000000, p_gcc_gpll0_out_even, 3, 0, 0), + f(202000000, p_gcc_gpll9_out_main, 4, 0, 0), + { } +}; + +static struct clk_rcg2 gcc_sdcc2_apps_clk_src = { + .cmd_rcgr = 0x1400c, + .mnd_width = 8, + .hid_width = 5, + .parent_map = gcc_parent_map_9, + .freq_tbl = ftbl_gcc_sdcc2_apps_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_sdcc2_apps_clk_src", + .parent_data = gcc_parent_data_9, + .num_parents = array_size(gcc_parent_data_9), + .flags = clk_ops_parent_enable, + .ops = &clk_rcg2_floor_ops, + }, +}; + +static const struct freq_tbl ftbl_gcc_sdcc4_apps_clk_src[] = { + f(400000, p_bi_tcxo, 12, 1, 4), + f(19200000, p_bi_tcxo, 1, 0, 0), + f(25000000, p_gcc_gpll0_out_even, 12, 0, 0), + f(50000000, p_gcc_gpll0_out_even, 6, 0, 0), + f(100000000, p_gcc_gpll0_out_even, 3, 0, 0), + { } +}; + +static struct clk_rcg2 gcc_sdcc4_apps_clk_src = { + .cmd_rcgr = 0x1600c, + .mnd_width = 8, + .hid_width = 5, + .parent_map = gcc_parent_map_1, + .freq_tbl = ftbl_gcc_sdcc4_apps_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_sdcc4_apps_clk_src", + .parent_data = gcc_parent_data_1, + .num_parents = array_size(gcc_parent_data_1), + .ops = &clk_rcg2_floor_ops, + }, +}; + +static const struct freq_tbl ftbl_gcc_ufs_phy_axi_clk_src[] = { + f(25000000, p_gcc_gpll0_out_even, 12, 0, 0), + f(75000000, p_gcc_gpll0_out_even, 4, 0, 0), 
+ f(150000000, p_gcc_gpll0_out_even, 2, 0, 0), + f(300000000, p_gcc_gpll0_out_even, 1, 0, 0), + { } +}; + +static struct clk_rcg2 gcc_ufs_phy_axi_clk_src = { + .cmd_rcgr = 0x77024, + .mnd_width = 8, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_ufs_phy_axi_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_phy_axi_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = array_size(gcc_parent_data_0), + .ops = &clk_rcg2_ops, + }, +}; + +static const struct freq_tbl ftbl_gcc_ufs_phy_ice_core_clk_src[] = { + f(75000000, p_gcc_gpll0_out_even, 4, 0, 0), + f(150000000, p_gcc_gpll0_out_even, 2, 0, 0), + f(300000000, p_gcc_gpll0_out_even, 1, 0, 0), + { } +}; + +static struct clk_rcg2 gcc_ufs_phy_ice_core_clk_src = { + .cmd_rcgr = 0x7706c, + .mnd_width = 0, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_ufs_phy_ice_core_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_phy_ice_core_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = array_size(gcc_parent_data_0), + .ops = &clk_rcg2_ops, + }, +}; + +static struct clk_rcg2 gcc_ufs_phy_phy_aux_clk_src = { + .cmd_rcgr = 0x770a0, + .mnd_width = 0, + .hid_width = 5, + .parent_map = gcc_parent_map_3, + .freq_tbl = ftbl_gcc_pcie_0_aux_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_phy_phy_aux_clk_src", + .parent_data = gcc_parent_data_3, + .num_parents = array_size(gcc_parent_data_3), + .ops = &clk_rcg2_ops, + }, +}; + +static struct clk_rcg2 gcc_ufs_phy_unipro_core_clk_src = { + .cmd_rcgr = 0x77084, + .mnd_width = 0, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_ufs_phy_ice_core_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_phy_unipro_core_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = array_size(gcc_parent_data_0), + .ops = &clk_rcg2_ops, + }, +}; + +static const struct freq_tbl ftbl_gcc_usb30_prim_master_clk_src[] = { + 
f(66666667, p_gcc_gpll0_out_even, 4.5, 0, 0), + f(133333333, p_gcc_gpll0_out_main, 4.5, 0, 0), + f(200000000, p_gcc_gpll0_out_odd, 1, 0, 0), + f(240000000, p_gcc_gpll0_out_main, 2.5, 0, 0), + { } +}; + +static struct clk_rcg2 gcc_usb30_prim_master_clk_src = { + .cmd_rcgr = 0xf020, + .mnd_width = 8, + .hid_width = 5, + .parent_map = gcc_parent_map_1, + .freq_tbl = ftbl_gcc_usb30_prim_master_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_usb30_prim_master_clk_src", + .parent_data = gcc_parent_data_1, + .num_parents = array_size(gcc_parent_data_1), + .ops = &clk_rcg2_ops, + }, +}; + +static const struct freq_tbl ftbl_gcc_usb30_prim_mock_utmi_clk_src[] = { + f(19200000, p_bi_tcxo, 1, 0, 0), + { } +}; + +static struct clk_rcg2 gcc_usb30_prim_mock_utmi_clk_src = { + .cmd_rcgr = 0xf038, + .mnd_width = 0, + .hid_width = 5, + .parent_map = gcc_parent_map_3, + .freq_tbl = ftbl_gcc_usb30_prim_mock_utmi_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_usb30_prim_mock_utmi_clk_src", + .parent_data = gcc_parent_data_3, + .num_parents = array_size(gcc_parent_data_3), + .ops = &clk_rcg2_ops, + }, +}; + +static const struct freq_tbl ftbl_gcc_usb30_sec_master_clk_src[] = { + f(60000000, p_gcc_gpll0_out_even, 5, 0, 0), + f(120000000, p_gcc_gpll0_out_even, 2.5, 0, 0), + { } +}; + +static struct clk_rcg2 gcc_usb30_sec_master_clk_src = { + .cmd_rcgr = 0x9e020, + .mnd_width = 8, + .hid_width = 5, + .parent_map = gcc_parent_map_5, + .freq_tbl = ftbl_gcc_usb30_sec_master_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_usb30_sec_master_clk_src", + .parent_data = gcc_parent_data_5, + .num_parents = array_size(gcc_parent_data_5), + .ops = &clk_rcg2_ops, + }, +}; + +static struct clk_rcg2 gcc_usb30_sec_mock_utmi_clk_src = { + .cmd_rcgr = 0x9e038, + .mnd_width = 0, + .hid_width = 5, + .parent_map = gcc_parent_map_3, + .freq_tbl = ftbl_gcc_usb30_prim_mock_utmi_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = 
"gcc_usb30_sec_mock_utmi_clk_src", + .parent_data = gcc_parent_data_3, + .num_parents = array_size(gcc_parent_data_3), + .ops = &clk_rcg2_ops, + }, +}; + +static struct clk_rcg2 gcc_usb3_prim_phy_aux_clk_src = { + .cmd_rcgr = 0xf064, + .mnd_width = 0, + .hid_width = 5, + .parent_map = gcc_parent_map_2, + .freq_tbl = ftbl_gcc_usb30_prim_mock_utmi_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_usb3_prim_phy_aux_clk_src", + .parent_data = gcc_parent_data_2, + .num_parents = array_size(gcc_parent_data_2), + .ops = &clk_rcg2_ops, + }, +}; + +static struct clk_rcg2 gcc_usb3_sec_phy_aux_clk_src = { + .cmd_rcgr = 0x9e064, + .mnd_width = 0, + .hid_width = 5, + .parent_map = gcc_parent_map_2, + .freq_tbl = ftbl_gcc_usb30_prim_mock_utmi_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_usb3_sec_phy_aux_clk_src", + .parent_data = gcc_parent_data_2, + .num_parents = array_size(gcc_parent_data_2), + .ops = &clk_rcg2_ops, + }, +}; + +static const struct freq_tbl ftbl_gcc_sec_ctrl_clk_src[] = { + f(4800000, p_bi_tcxo, 4, 0, 0), + f(19200000, p_bi_tcxo, 1, 0, 0), + { } +}; + +static struct clk_rcg2 gcc_sec_ctrl_clk_src = { + .cmd_rcgr = 0x3d02c, + .mnd_width = 0, + .hid_width = 5, + .parent_map = gcc_parent_map_3, + .freq_tbl = ftbl_gcc_sec_ctrl_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_sec_ctrl_clk_src", + .parent_data = gcc_parent_data_3, + .num_parents = array_size(gcc_parent_data_3), + .ops = &clk_rcg2_ops, + }, +}; + +static struct clk_regmap_div gcc_cpuss_ahb_postdiv_clk_src = { + .reg = 0x48024, + .shift = 0, + .width = 4, + .clkr.hw.init = &(struct clk_init_data) { + .name = "gcc_cpuss_ahb_postdiv_clk_src", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_cpuss_ahb_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_regmap_div_ro_ops, + }, +}; + +static struct clk_regmap_div gcc_usb30_prim_mock_utmi_postdiv_clk_src = { + .reg = 0xf050, + .shift = 0, + .width = 4, 
+ .clkr.hw.init = &(struct clk_init_data) { + .name = "gcc_usb30_prim_mock_utmi_postdiv_clk_src", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_usb30_prim_mock_utmi_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_regmap_div_ro_ops, + }, +}; + +static struct clk_regmap_div gcc_usb30_sec_mock_utmi_postdiv_clk_src = { + .reg = 0x9e050, + .shift = 0, + .width = 4, + .clkr.hw.init = &(struct clk_init_data) { + .name = "gcc_usb30_sec_mock_utmi_postdiv_clk_src", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_usb30_sec_mock_utmi_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_regmap_div_ro_ops, + }, +}; + +static struct clk_branch gcc_pcie_clkref_en = { + .halt_reg = 0x8c004, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x8c004, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_clkref_en", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_edp_clkref_en = { + .halt_reg = 0x8c008, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x8c008, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_edp_clkref_en", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_aggre_noc_pcie_0_axi_clk = { + .halt_reg = 0x6b080, + .halt_check = branch_halt_skip, + .hwcg_reg = 0x6b080, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x52000, + .enable_mask = bit(12), + .hw.init = &(struct clk_init_data){ + .name = "gcc_aggre_noc_pcie_0_axi_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_aggre_noc_pcie_1_axi_clk = { + .halt_reg = 0x8d084, + .halt_check = branch_halt_skip, + .hwcg_reg = 0x8d084, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x52000, + .enable_mask = bit(11), + .hw.init = &(struct clk_init_data){ + .name = "gcc_aggre_noc_pcie_1_axi_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch 
gcc_aggre_noc_pcie_tbu_clk = { + .halt_reg = 0x90010, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x90010, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x52000, + .enable_mask = bit(18), + .hw.init = &(struct clk_init_data){ + .name = "gcc_aggre_noc_pcie_tbu_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_aggre_noc_pcie_center_sf_axi_clk = { + .halt_reg = 0x8d088, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x8d088, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x52008, + .enable_mask = bit(28), + .hw.init = &(struct clk_init_data){ + .name = "gcc_aggre_noc_pcie_center_sf_axi_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_aggre_ufs_phy_axi_clk = { + .halt_reg = 0x770cc, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x770cc, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x770cc, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_aggre_ufs_phy_axi_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_ufs_phy_axi_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_aggre_usb3_prim_axi_clk = { + .halt_reg = 0xf080, + .halt_check = branch_halt_voted, + .hwcg_reg = 0xf080, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0xf080, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_aggre_usb3_prim_axi_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_usb30_prim_master_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_aggre_usb3_sec_axi_clk = { + .halt_reg = 0x9e080, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x9e080, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x9e080, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_aggre_usb3_sec_axi_clk", + .parent_data = &(const struct clk_parent_data){ + .hw 
= &gcc_usb30_sec_master_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_camera_hf_axi_clk = {
+	.halt_reg = 0x26010,
+	.halt_check = BRANCH_HALT_SKIP,
+	.hwcg_reg = 0x26010,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x26010,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_camera_hf_axi_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_camera_sf_axi_clk = {
+	.halt_reg = 0x2601c,
+	.halt_check = BRANCH_HALT_SKIP,
+	.hwcg_reg = 0x2601c,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x2601c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_camera_sf_axi_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_cfg_noc_usb3_prim_axi_clk = {
+	.halt_reg = 0xf07c,
+	.halt_check = BRANCH_HALT_VOTED,
+	.hwcg_reg = 0xf07c,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0xf07c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_cfg_noc_usb3_prim_axi_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_usb30_prim_master_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_cfg_noc_usb3_sec_axi_clk = {
+	.halt_reg = 0x9e07c,
+	.halt_check = BRANCH_HALT_VOTED,
+	.hwcg_reg = 0x9e07c,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x9e07c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_cfg_noc_usb3_sec_axi_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_usb30_sec_master_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+/* For CPUSS functionality the AHB clock needs to be left enabled */
+static struct clk_branch gcc_cpuss_ahb_clk = {
+	.halt_reg = 0x48000,
+	.halt_check = BRANCH_HALT_VOTED,
+	.hwcg_reg = 0x48000,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x52000,
+		.enable_mask = BIT(21),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_cpuss_ahb_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_cpuss_ahb_postdiv_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_IS_CRITICAL | CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ddrss_gpu_axi_clk = {
+	.halt_reg = 0x71154,
+	.halt_check = BRANCH_HALT_SKIP,
+	.hwcg_reg = 0x71154,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x71154,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ddrss_gpu_axi_clk",
+			.ops = &clk_branch2_aon_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ddrss_pcie_sf_clk = {
+	.halt_reg = 0x8d080,
+	.halt_check = BRANCH_HALT_SKIP,
+	.hwcg_reg = 0x8d080,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x52000,
+		.enable_mask = BIT(19),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ddrss_pcie_sf_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_disp_gpll0_clk_src = {
+	.halt_check = BRANCH_HALT_DELAY,
+	.clkr = {
+		.enable_reg = 0x52000,
+		.enable_mask = BIT(7),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_disp_gpll0_clk_src",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_gpll0.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_disp_hf_axi_clk = {
+	.halt_reg = 0x2700c,
+	.halt_check = BRANCH_HALT_SKIP,
+	.hwcg_reg = 0x2700c,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x2700c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_disp_hf_axi_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_disp_sf_axi_clk = {
+	.halt_reg = 0x27014,
+	.halt_check = BRANCH_HALT_SKIP,
+	.hwcg_reg = 0x27014,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x27014,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name =
"gcc_disp_sf_axi_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_gp1_clk = { + .halt_reg = 0x64000, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x64000, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_gp1_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_gp1_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_gp2_clk = { + .halt_reg = 0x65000, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x65000, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_gp2_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_gp2_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_gp3_clk = { + .halt_reg = 0x66000, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x66000, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_gp3_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_gp3_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_gpu_gpll0_clk_src = { + .halt_check = branch_halt_delay, + .clkr = { + .enable_reg = 0x52000, + .enable_mask = bit(15), + .hw.init = &(struct clk_init_data){ + .name = "gcc_gpu_gpll0_clk_src", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_gpll0.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_gpu_gpll0_div_clk_src = { + .halt_check = branch_halt_delay, + .clkr = { + .enable_reg = 0x52000, + .enable_mask = bit(16), + .hw.init = &(struct clk_init_data){ + .name = "gcc_gpu_gpll0_div_clk_src", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_gpll0_out_even.clkr.hw, + }, 
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_gpu_iref_en = {
+	.halt_reg = 0x8c014,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x8c014,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_gpu_iref_en",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_gpu_memnoc_gfx_clk = {
+	.halt_reg = 0x7100c,
+	.halt_check = BRANCH_HALT_VOTED,
+	.hwcg_reg = 0x7100c,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x7100c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_gpu_memnoc_gfx_clk",
+			.ops = &clk_branch2_aon_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_gpu_snoc_dvm_gfx_clk = {
+	.halt_reg = 0x71018,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x71018,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_gpu_snoc_dvm_gfx_clk",
+			.ops = &clk_branch2_aon_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_pcie0_phy_rchng_clk = {
+	.halt_reg = 0x6b038,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52000,
+		.enable_mask = BIT(22),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_pcie0_phy_rchng_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_pcie_0_phy_rchng_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_pcie1_phy_rchng_clk = {
+	.halt_reg = 0x8d038,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52000,
+		.enable_mask = BIT(23),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_pcie1_phy_rchng_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_pcie_1_phy_rchng_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_pcie_0_aux_clk = {
+	.halt_reg = 0x6b028,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(3),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_pcie_0_aux_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_pcie_0_aux_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_pcie_0_cfg_ahb_clk = {
+	.halt_reg = 0x6b024,
+	.halt_check = BRANCH_HALT_VOTED,
+	.hwcg_reg = 0x6b024,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(2),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_pcie_0_cfg_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_pcie_0_mstr_axi_clk = {
+	.halt_reg = 0x6b01c,
+	.halt_check = BRANCH_HALT_SKIP,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(1),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_pcie_0_mstr_axi_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_pcie_0_pipe_clk = {
+	.halt_reg = 0x6b030,
+	.halt_check = BRANCH_HALT_SKIP,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(4),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_pcie_0_pipe_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_pcie_0_pipe_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_pcie_0_slv_axi_clk = {
+	.halt_reg = 0x6b014,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_pcie_0_slv_axi_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_pcie_0_slv_q2a_axi_clk = {
+	.halt_reg = 0x6b010,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(5),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_pcie_0_slv_q2a_axi_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_pcie_1_aux_clk = {
+	.halt_reg = 0x8d028,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52000,
+		.enable_mask = BIT(29),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_pcie_1_aux_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_pcie_1_aux_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_pcie_1_cfg_ahb_clk = {
+	.halt_reg = 0x8d024,
+	.halt_check = BRANCH_HALT_VOTED,
+	.hwcg_reg = 0x8d024,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x52000,
+		.enable_mask = BIT(28),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_pcie_1_cfg_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_pcie_1_mstr_axi_clk = {
+	.halt_reg = 0x8d01c,
+	.halt_check = BRANCH_HALT_SKIP,
+	.clkr = {
+		.enable_reg = 0x52000,
+		.enable_mask = BIT(27),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_pcie_1_mstr_axi_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_pcie_1_pipe_clk = {
+	.halt_reg = 0x8d030,
+	.halt_check = BRANCH_HALT_SKIP,
+	.clkr = {
+		.enable_reg = 0x52000,
+		.enable_mask = BIT(30),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_pcie_1_pipe_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_pcie_1_pipe_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_pcie_1_slv_axi_clk = {
+	.halt_reg = 0x8d014,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52000,
+		.enable_mask = BIT(26),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_pcie_1_slv_axi_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_pcie_1_slv_q2a_axi_clk = {
+	.halt_reg = 0x8d010,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52000,
+		.enable_mask = BIT(25),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_pcie_1_slv_q2a_axi_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_pcie_throttle_core_clk = {
+	.halt_reg = 0x90018,
+	.halt_check = BRANCH_HALT_SKIP,
+	.hwcg_reg = 0x90018,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x52000,
+		.enable_mask = BIT(20),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_pcie_throttle_core_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_pdm2_clk = {
+	.halt_reg = 0x3300c,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x3300c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_pdm2_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_pdm2_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_pdm_ahb_clk = {
+	.halt_reg = 0x33004,
+	.halt_check = BRANCH_HALT_VOTED,
+	.hwcg_reg = 0x33004,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x33004,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_pdm_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_pdm_xo4_clk = {
+	.halt_reg = 0x33008,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x33008,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_pdm_xo4_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qmip_camera_nrt_ahb_clk = {
+	.halt_reg = 0x26008,
+	.halt_check = BRANCH_HALT_VOTED,
+	.hwcg_reg = 0x26008,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x26008,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qmip_camera_nrt_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qmip_camera_rt_ahb_clk = {
+	.halt_reg = 0x2600c,
+	.halt_check = BRANCH_HALT_VOTED,
+	.hwcg_reg = 0x2600c,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x2600c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qmip_camera_rt_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qmip_disp_ahb_clk = {
+	.halt_reg = 0x27008,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x27008,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qmip_disp_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qmip_video_vcodec_ahb_clk = {
+	.halt_reg = 0x28008,
+	.halt_check = BRANCH_HALT_VOTED,
+	.hwcg_reg = 0x28008,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x28008,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qmip_video_vcodec_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qspi_cnoc_periph_ahb_clk = {
+	.halt_reg = 0x4b004,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0x4b004,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x4b004,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qspi_cnoc_periph_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qspi_core_clk = {
+	.halt_reg = 0x4b008,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x4b008,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qspi_core_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_qspi_core_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap0_core_2x_clk = {
+	.halt_reg = 0x23008,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(9),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap0_core_2x_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap0_core_clk = {
+	.halt_reg = 0x23000,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(8),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap0_core_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap0_s0_clk = {
+	.halt_reg = 0x1700c,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(10),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap0_s0_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_qupv3_wrap0_s0_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap0_s1_clk = {
+	.halt_reg = 0x1713c,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(11),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap0_s1_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_qupv3_wrap0_s1_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap0_s2_clk = {
+	.halt_reg = 0x1726c,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(12),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap0_s2_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_qupv3_wrap0_s2_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap0_s3_clk = {
+	.halt_reg = 0x1739c,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(13),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap0_s3_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_qupv3_wrap0_s3_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap0_s4_clk = {
+	.halt_reg = 0x174cc,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(14),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap0_s4_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_qupv3_wrap0_s4_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap0_s5_clk = {
+	.halt_reg = 0x175fc,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(15),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap0_s5_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_qupv3_wrap0_s5_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap0_s6_clk = {
+	.halt_reg = 0x1772c,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(16),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap0_s6_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_qupv3_wrap0_s6_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap0_s7_clk = {
+	.halt_reg = 0x1785c,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(17),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap0_s7_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_qupv3_wrap0_s7_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap1_core_2x_clk = {
+	.halt_reg = 0x23140,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(18),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap1_core_2x_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap1_core_clk = {
+	.halt_reg = 0x23138,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(19),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap1_core_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap1_s0_clk = {
+	.halt_reg = 0x1800c,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(22),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap1_s0_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_qupv3_wrap1_s0_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap1_s1_clk = {
+	.halt_reg = 0x1813c,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(23),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap1_s1_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_qupv3_wrap1_s1_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap1_s2_clk = {
+	.halt_reg = 0x1826c,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(24),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap1_s2_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_qupv3_wrap1_s2_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap1_s3_clk = {
+	.halt_reg = 0x1839c,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(25),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap1_s3_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_qupv3_wrap1_s3_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap1_s4_clk = {
+	.halt_reg = 0x184cc,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(26),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap1_s4_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_qupv3_wrap1_s4_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap1_s5_clk = {
+	.halt_reg = 0x185fc,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(27),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap1_s5_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_qupv3_wrap1_s5_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap1_s6_clk = {
+	.halt_reg = 0x1872c,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52000,
+		.enable_mask = BIT(13),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap1_s6_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_qupv3_wrap1_s6_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap1_s7_clk = {
+	.halt_reg = 0x1885c,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52000,
+		.enable_mask = BIT(14),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap1_s7_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_qupv3_wrap1_s7_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap_0_m_ahb_clk = {
+	.halt_reg = 0x17004,
+	.halt_check = BRANCH_HALT_VOTED,
+	.hwcg_reg = 0x17004,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(6),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap_0_m_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap_0_s_ahb_clk = {
+	.halt_reg = 0x17008,
+	.halt_check = BRANCH_HALT_VOTED,
+	.hwcg_reg = 0x17008,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(7),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap_0_s_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap_1_m_ahb_clk = {
+	.halt_reg = 0x18004,
+	.halt_check = BRANCH_HALT_VOTED,
+	.hwcg_reg = 0x18004,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(20),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap_1_m_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap_1_s_ahb_clk = {
+	.halt_reg = 0x18008,
+	.halt_check = BRANCH_HALT_VOTED,
+	.hwcg_reg = 0x18008,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x52008,
+		.enable_mask = BIT(21),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap_1_s_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_sdcc1_ahb_clk = {
+	.halt_reg = 0x75004,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x75004,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_sdcc1_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_sdcc1_apps_clk = {
+	.halt_reg = 0x75008,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x75008,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_sdcc1_apps_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_sdcc1_apps_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_sdcc1_ice_core_clk = {
+	.halt_reg = 0x75024,
+	.halt_check = BRANCH_HALT_VOTED,
+	.hwcg_reg = 0x75024,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x75024,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_sdcc1_ice_core_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_sdcc1_ice_core_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_sdcc2_ahb_clk = {
+	.halt_reg = 0x14008,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x14008,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_sdcc2_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_sdcc2_apps_clk = {
+	.halt_reg = 0x14004,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x14004,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_sdcc2_apps_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_sdcc2_apps_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_sdcc4_ahb_clk = {
+	.halt_reg = 0x16008,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x16008,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_sdcc4_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_sdcc4_apps_clk = {
+	.halt_reg = 0x16004,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x16004,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_sdcc4_apps_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_sdcc4_apps_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+/* For CPUSS functionality the AHB clock needs to be left enabled */
+static struct clk_branch gcc_sys_noc_cpuss_ahb_clk = {
+	.halt_reg = 0x48178,
+	.halt_check = BRANCH_HALT_VOTED,
+	.hwcg_reg = 0x48178,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x52000,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_sys_noc_cpuss_ahb_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_cpuss_ahb_postdiv_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_IS_CRITICAL | CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_throttle_pcie_ahb_clk = {
+	.halt_reg = 0x9001c,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x9001c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_throttle_pcie_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_titan_nrt_throttle_core_clk = {
+	.halt_reg = 0x26024,
+	.halt_check = BRANCH_HALT_SKIP,
+	.hwcg_reg = 0x26024,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x26024,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_titan_nrt_throttle_core_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_titan_rt_throttle_core_clk = {
+	.halt_reg = 0x26018,
+	.halt_check = BRANCH_HALT_SKIP,
+	.hwcg_reg = 0x26018,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x26018,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_titan_rt_throttle_core_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_1_clkref_en = {
+	.halt_reg = 0x8c000,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x8c000,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_1_clkref_en",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_phy_ahb_clk = {
+	.halt_reg = 0x77018,
+	.halt_check = BRANCH_HALT_VOTED,
+	.hwcg_reg = 0x77018,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x77018,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_phy_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_phy_axi_clk = {
+	.halt_reg = 0x77010,
+	.halt_check = BRANCH_HALT_VOTED,
+	.hwcg_reg = 0x77010,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x77010,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_phy_axi_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_ufs_phy_axi_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_phy_ice_core_clk = {
+	.halt_reg = 0x77064,
+	.halt_check = BRANCH_HALT_VOTED,
+	.hwcg_reg = 0x77064,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x77064,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_phy_ice_core_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_ufs_phy_ice_core_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_phy_phy_aux_clk = {
+	.halt_reg = 0x7709c,
+	.halt_check = BRANCH_HALT_VOTED,
+	.hwcg_reg = 0x7709c,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x7709c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_phy_phy_aux_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_ufs_phy_phy_aux_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_phy_rx_symbol_0_clk = {
+	.halt_reg = 0x77020,
+	.halt_check = BRANCH_HALT_DELAY,
+	.clkr = {
+		.enable_reg = 0x77020,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_phy_rx_symbol_0_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_ufs_phy_rx_symbol_0_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_phy_rx_symbol_1_clk = {
+	.halt_reg = 0x770b8,
+	.halt_check = BRANCH_HALT_DELAY,
+	.clkr = {
+		.enable_reg = 0x770b8,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_phy_rx_symbol_1_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_ufs_phy_rx_symbol_1_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_phy_tx_symbol_0_clk = {
+	.halt_reg = 0x7701c,
+	.halt_check = BRANCH_HALT_DELAY,
+	.clkr = {
+		.enable_reg = 0x7701c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_phy_tx_symbol_0_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_ufs_phy_tx_symbol_0_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_phy_unipro_core_clk = {
+	.halt_reg = 0x7705c,
+	.halt_check = BRANCH_HALT_VOTED,
+	.hwcg_reg = 0x7705c,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x7705c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_phy_unipro_core_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_ufs_phy_unipro_core_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb30_prim_master_clk = {
+	.halt_reg = 0xf010,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xf010,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb30_prim_master_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_usb30_prim_master_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb30_prim_mock_utmi_clk = {
+	.halt_reg = 0xf01c,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xf01c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb30_prim_mock_utmi_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw =
+				&gcc_usb30_prim_mock_utmi_postdiv_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb30_prim_sleep_clk = {
+	.halt_reg = 0xf018,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xf018,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb30_prim_sleep_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb30_sec_master_clk = {
+	.halt_reg = 0x9e010,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x9e010,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb30_sec_master_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_usb30_sec_master_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb30_sec_mock_utmi_clk = {
+	.halt_reg = 0x9e01c,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x9e01c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb30_sec_mock_utmi_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw =
+				&gcc_usb30_sec_mock_utmi_postdiv_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb30_sec_sleep_clk = {
+	.halt_reg = 0x9e018,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x9e018,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb30_sec_sleep_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb3_prim_phy_aux_clk = {
+	.halt_reg = 0xf054,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xf054,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb3_prim_phy_aux_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_usb3_prim_phy_aux_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb3_prim_phy_com_aux_clk = {
+	.halt_reg = 0xf058,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xf058,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb3_prim_phy_com_aux_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_usb3_prim_phy_aux_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb3_prim_phy_pipe_clk = {
+	.halt_reg = 0xf05c,
+	.halt_check = BRANCH_HALT_DELAY,
+	.hwcg_reg = 0xf05c,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0xf05c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb3_prim_phy_pipe_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_usb3_prim_phy_pipe_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_cfg_noc_lpass_clk = {
+	.halt_reg = 0x47020,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x47020,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_cfg_noc_lpass_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_mss_cfg_ahb_clk = {
+	.halt_reg = 0x8a000,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x8a000,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_mss_cfg_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_mss_offline_axi_clk = {
+	.halt_reg = 0x8a004,
+	.halt_check = BRANCH_HALT_DELAY,
+	.clkr = {
+		.enable_reg = 0x8a004,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_mss_offline_axi_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_mss_snoc_axi_clk = {
+	.halt_reg = 0x8a154,
+	.halt_check = BRANCH_HALT_DELAY,
+	.clkr = {
+		.enable_reg = 0x8a154,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_mss_snoc_axi_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_mss_q6_memnoc_axi_clk = {
+	.halt_reg = 0x8a158,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x8a158,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_mss_q6_memnoc_axi_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_regmap_mux gcc_mss_q6ss_boot_clk_src = {
+	.reg = 0x8a2a4,
+	.shift = 0,
+	.width = 1,
+	.parent_map = gcc_parent_map_15,
+	.clkr = {
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_mss_q6ss_boot_clk_src",
+			.parent_data = gcc_parent_data_15,
+			.num_parents = ARRAY_SIZE(gcc_parent_data_15),
+			.ops = &clk_regmap_mux_closest_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb3_sec_phy_aux_clk = {
+	.halt_reg = 0x9e054,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x9e054,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb3_sec_phy_aux_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_usb3_sec_phy_aux_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb3_sec_phy_com_aux_clk = {
+	.halt_reg = 0x9e058,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x9e058,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb3_sec_phy_com_aux_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_usb3_sec_phy_aux_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb3_sec_phy_pipe_clk = {
+	.halt_reg = 0x9e05c,
+	.halt_check = BRANCH_HALT_SKIP,
+	.hwcg_reg = 0x9e05c,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x9e05c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb3_sec_phy_pipe_clk",
+			.parent_data = &(const struct clk_parent_data){
+				.hw = &gcc_usb3_sec_phy_pipe_clk_src.clkr.hw,
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_video_axi0_clk = {
+	.halt_reg = 0x2800c,
+	.halt_check = BRANCH_HALT_SKIP,
+	.hwcg_reg = 0x2800c,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x2800c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_video_axi0_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_video_mvp_throttle_core_clk = {
+	.halt_reg = 0x28010,
+	.halt_check = BRANCH_HALT_SKIP,
+	.hwcg_reg = 0x28010,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x28010,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_video_mvp_throttle_core_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_wpss_ahb_clk = {
+	.halt_reg = 0x9d154,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x9d154,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_wpss_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_wpss_ahb_bdg_mst_clk = {
+	.halt_reg = 0x9d158,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x9d158,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_wpss_ahb_bdg_mst_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_wpss_rscp_clk = {
+	.halt_reg = 0x9d16c,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x9d16c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_wpss_rscp_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct gdsc gcc_pcie_0_gdsc = {
+	.gdscr = 0x6b004,
+	.pd = {
+		.name = "gcc_pcie_0_gdsc",
+	},
+	.pwrsts = PWRSTS_OFF_ON,
+	.flags = VOTABLE,
+};
+
+static struct gdsc gcc_pcie_1_gdsc = {
+	.gdscr = 0x8d004,
+	.pd = {
+		.name = "gcc_pcie_1_gdsc",
+	},
+	.pwrsts = PWRSTS_OFF_ON,
+	.flags = VOTABLE,
+};
+
+static struct gdsc gcc_ufs_phy_gdsc = {
+	.gdscr = 0x77004,
+	.pd = {
+		.name = "gcc_ufs_phy_gdsc",
+	},
+	.pwrsts = PWRSTS_OFF_ON,
+	.flags = VOTABLE,
+};
+
+static struct gdsc gcc_usb30_prim_gdsc = {
+	.gdscr = 0xf004,
+	.pd = {
+		.name = "gcc_usb30_prim_gdsc",
+	},
+	.pwrsts = PWRSTS_OFF_ON,
+	.flags = VOTABLE,
+};
+
+static struct gdsc gcc_usb30_sec_gdsc = {
+	.gdscr = 0x9e004,
+	.pd = {
+		.name = "gcc_usb30_sec_gdsc",
+	},
+	.pwrsts = PWRSTS_OFF_ON,
+	.flags = VOTABLE,
+};
+
+static struct gdsc hlos1_vote_mmnoc_mmu_tbu_hf0_gdsc = {
+	.gdscr = 0x7d050,
+	.pd = {
+		.name = "hlos1_vote_mmnoc_mmu_tbu_hf0_gdsc",
+	},
+	.pwrsts = PWRSTS_OFF_ON,
+	.flags = VOTABLE,
+};
+
+static struct gdsc hlos1_vote_mmnoc_mmu_tbu_hf1_gdsc = {
+	.gdscr = 0x7d058,
+	.pd = {
+		.name = "hlos1_vote_mmnoc_mmu_tbu_hf1_gdsc",
+	},
+	.pwrsts = PWRSTS_OFF_ON,
+	.flags = VOTABLE,
+};
+
+static struct gdsc hlos1_vote_mmnoc_mmu_tbu_sf0_gdsc = {
+	.gdscr = 0x7d054,
+	.pd = {
+		.name = "hlos1_vote_mmnoc_mmu_tbu_sf0_gdsc",
+	},
+	.pwrsts = PWRSTS_OFF_ON,
+	.flags = VOTABLE,
+};
+
+static struct gdsc hlos1_vote_turing_mmu_tbu0_gdsc = {
+	.gdscr = 0x7d05c,
+	.pd = {
+		.name = "hlos1_vote_turing_mmu_tbu0_gdsc",
+	},
+	.pwrsts = PWRSTS_OFF_ON,
+	.flags = VOTABLE,
+};
+
+static struct gdsc hlos1_vote_turing_mmu_tbu1_gdsc = {
+	.gdscr = 0x7d060,
+	.pd = {
+		.name = "hlos1_vote_turing_mmu_tbu1_gdsc",
+	},
+	.pwrsts = PWRSTS_OFF_ON,
+	.flags = VOTABLE,
+};
+
+static struct clk_regmap *gcc_sc7280_clocks[] = {
+	[GCC_AGGRE_NOC_PCIE_0_AXI_CLK] = &gcc_aggre_noc_pcie_0_axi_clk.clkr,
+	[GCC_AGGRE_NOC_PCIE_1_AXI_CLK] = &gcc_aggre_noc_pcie_1_axi_clk.clkr,
+	[GCC_AGGRE_UFS_PHY_AXI_CLK] = &gcc_aggre_ufs_phy_axi_clk.clkr,
+	[GCC_AGGRE_USB3_PRIM_AXI_CLK] = &gcc_aggre_usb3_prim_axi_clk.clkr,
+	[GCC_AGGRE_USB3_SEC_AXI_CLK] = &gcc_aggre_usb3_sec_axi_clk.clkr,
+	[GCC_CAMERA_HF_AXI_CLK] = &gcc_camera_hf_axi_clk.clkr,
+	[GCC_CAMERA_SF_AXI_CLK] = &gcc_camera_sf_axi_clk.clkr,
+	[GCC_CFG_NOC_USB3_PRIM_AXI_CLK] = &gcc_cfg_noc_usb3_prim_axi_clk.clkr,
+	[GCC_CFG_NOC_USB3_SEC_AXI_CLK] = &gcc_cfg_noc_usb3_sec_axi_clk.clkr,
+	[GCC_CPUSS_AHB_CLK] = &gcc_cpuss_ahb_clk.clkr,
+	[GCC_CPUSS_AHB_CLK_SRC] = &gcc_cpuss_ahb_clk_src.clkr,
+	[GCC_CPUSS_AHB_POSTDIV_CLK_SRC] = &gcc_cpuss_ahb_postdiv_clk_src.clkr,
+	[GCC_DDRSS_GPU_AXI_CLK] = &gcc_ddrss_gpu_axi_clk.clkr,
+	[GCC_DDRSS_PCIE_SF_CLK] = &gcc_ddrss_pcie_sf_clk.clkr,
+	[GCC_DISP_GPLL0_CLK_SRC] = &gcc_disp_gpll0_clk_src.clkr,
+	[GCC_DISP_HF_AXI_CLK] = &gcc_disp_hf_axi_clk.clkr,
+	[GCC_DISP_SF_AXI_CLK] = &gcc_disp_sf_axi_clk.clkr,
+	[GCC_GP1_CLK] = &gcc_gp1_clk.clkr,
+	[GCC_GP1_CLK_SRC] = &gcc_gp1_clk_src.clkr,
+	[GCC_GP2_CLK] = &gcc_gp2_clk.clkr,
+	[GCC_GP2_CLK_SRC] = &gcc_gp2_clk_src.clkr,
+	[GCC_GP3_CLK] = &gcc_gp3_clk.clkr,
+	[GCC_GP3_CLK_SRC] = &gcc_gp3_clk_src.clkr,
+	[GCC_GPLL0] = &gcc_gpll0.clkr,
+	[GCC_GPLL0_OUT_EVEN] = &gcc_gpll0_out_even.clkr,
+	[GCC_GPLL0_OUT_ODD] = &gcc_gpll0_out_odd.clkr,
+	[GCC_GPLL1] = &gcc_gpll1.clkr,
+	[GCC_GPLL10] = &gcc_gpll10.clkr,
+	[GCC_GPLL4] = &gcc_gpll4.clkr,
+	[GCC_GPLL9] = &gcc_gpll9.clkr,
+	[GCC_GPU_GPLL0_CLK_SRC] = &gcc_gpu_gpll0_clk_src.clkr,
+	[GCC_GPU_GPLL0_DIV_CLK_SRC] = &gcc_gpu_gpll0_div_clk_src.clkr,
+	[GCC_GPU_IREF_EN] = &gcc_gpu_iref_en.clkr,
+	[GCC_GPU_MEMNOC_GFX_CLK] = &gcc_gpu_memnoc_gfx_clk.clkr,
+	[GCC_GPU_SNOC_DVM_GFX_CLK] = &gcc_gpu_snoc_dvm_gfx_clk.clkr,
+	[GCC_PCIE0_PHY_RCHNG_CLK] = &gcc_pcie0_phy_rchng_clk.clkr,
+	[GCC_PCIE1_PHY_RCHNG_CLK] = &gcc_pcie1_phy_rchng_clk.clkr,
+	[GCC_PCIE_0_AUX_CLK] = &gcc_pcie_0_aux_clk.clkr,
+	[GCC_PCIE_0_AUX_CLK_SRC] = &gcc_pcie_0_aux_clk_src.clkr,
+	[GCC_PCIE_0_CFG_AHB_CLK] = &gcc_pcie_0_cfg_ahb_clk.clkr,
+	[GCC_PCIE_0_MSTR_AXI_CLK] = &gcc_pcie_0_mstr_axi_clk.clkr,
+	[GCC_PCIE_0_PHY_RCHNG_CLK_SRC] = &gcc_pcie_0_phy_rchng_clk_src.clkr,
+	[GCC_PCIE_0_PIPE_CLK] = &gcc_pcie_0_pipe_clk.clkr,
+	[GCC_PCIE_0_PIPE_CLK_SRC] = &gcc_pcie_0_pipe_clk_src.clkr,
+	[GCC_PCIE_0_SLV_AXI_CLK] = &gcc_pcie_0_slv_axi_clk.clkr,
+	[GCC_PCIE_0_SLV_Q2A_AXI_CLK] = &gcc_pcie_0_slv_q2a_axi_clk.clkr,
+	[GCC_PCIE_1_AUX_CLK] = &gcc_pcie_1_aux_clk.clkr,
+	[GCC_PCIE_1_AUX_CLK_SRC] = &gcc_pcie_1_aux_clk_src.clkr,
+	[GCC_PCIE_1_CFG_AHB_CLK] = &gcc_pcie_1_cfg_ahb_clk.clkr,
+	[GCC_PCIE_1_MSTR_AXI_CLK] = &gcc_pcie_1_mstr_axi_clk.clkr,
+	[GCC_PCIE_1_PHY_RCHNG_CLK_SRC] = &gcc_pcie_1_phy_rchng_clk_src.clkr,
+	[GCC_PCIE_1_PIPE_CLK] = &gcc_pcie_1_pipe_clk.clkr,
+	[GCC_PCIE_1_PIPE_CLK_SRC] = &gcc_pcie_1_pipe_clk_src.clkr,
+	[GCC_PCIE_1_SLV_AXI_CLK] = &gcc_pcie_1_slv_axi_clk.clkr,
+	[GCC_PCIE_1_SLV_Q2A_AXI_CLK] = &gcc_pcie_1_slv_q2a_axi_clk.clkr,
+	[GCC_PCIE_THROTTLE_CORE_CLK] = &gcc_pcie_throttle_core_clk.clkr,
+	[GCC_PDM2_CLK] = &gcc_pdm2_clk.clkr,
+	[GCC_PDM2_CLK_SRC] = &gcc_pdm2_clk_src.clkr,
+	[GCC_PDM_AHB_CLK] = &gcc_pdm_ahb_clk.clkr,
+	[GCC_PDM_XO4_CLK] = &gcc_pdm_xo4_clk.clkr,
+	[GCC_QMIP_CAMERA_NRT_AHB_CLK] = &gcc_qmip_camera_nrt_ahb_clk.clkr,
+	[GCC_QMIP_CAMERA_RT_AHB_CLK] = &gcc_qmip_camera_rt_ahb_clk.clkr,
+	[GCC_QMIP_DISP_AHB_CLK] = &gcc_qmip_disp_ahb_clk.clkr,
+	[GCC_QMIP_VIDEO_VCODEC_AHB_CLK] = &gcc_qmip_video_vcodec_ahb_clk.clkr,
+	[GCC_QSPI_CNOC_PERIPH_AHB_CLK] = &gcc_qspi_cnoc_periph_ahb_clk.clkr,
+	[GCC_QSPI_CORE_CLK] = &gcc_qspi_core_clk.clkr,
+	[GCC_QSPI_CORE_CLK_SRC] = &gcc_qspi_core_clk_src.clkr,
+	[GCC_QUPV3_WRAP0_CORE_2X_CLK] = &gcc_qupv3_wrap0_core_2x_clk.clkr,
+	[GCC_QUPV3_WRAP0_CORE_CLK] = &gcc_qupv3_wrap0_core_clk.clkr,
+	[GCC_QUPV3_WRAP0_S0_CLK] = &gcc_qupv3_wrap0_s0_clk.clkr,
+	[GCC_QUPV3_WRAP0_S0_CLK_SRC] = &gcc_qupv3_wrap0_s0_clk_src.clkr,
+	[GCC_QUPV3_WRAP0_S1_CLK] = &gcc_qupv3_wrap0_s1_clk.clkr,
+	[GCC_QUPV3_WRAP0_S1_CLK_SRC] = &gcc_qupv3_wrap0_s1_clk_src.clkr,
+	[GCC_QUPV3_WRAP0_S2_CLK] = &gcc_qupv3_wrap0_s2_clk.clkr,
+	[GCC_QUPV3_WRAP0_S2_CLK_SRC] = &gcc_qupv3_wrap0_s2_clk_src.clkr,
+	[GCC_QUPV3_WRAP0_S3_CLK] = &gcc_qupv3_wrap0_s3_clk.clkr,
+	[GCC_QUPV3_WRAP0_S3_CLK_SRC] = &gcc_qupv3_wrap0_s3_clk_src.clkr,
+	[GCC_QUPV3_WRAP0_S4_CLK] = &gcc_qupv3_wrap0_s4_clk.clkr,
+	[GCC_QUPV3_WRAP0_S4_CLK_SRC] = &gcc_qupv3_wrap0_s4_clk_src.clkr,
[gcc_qupv3_wrap0_s5_clk] = &gcc_qupv3_wrap0_s5_clk.clkr, + [gcc_qupv3_wrap0_s5_clk_src] = &gcc_qupv3_wrap0_s5_clk_src.clkr, + [gcc_qupv3_wrap0_s6_clk] = &gcc_qupv3_wrap0_s6_clk.clkr, + [gcc_qupv3_wrap0_s6_clk_src] = &gcc_qupv3_wrap0_s6_clk_src.clkr, + [gcc_qupv3_wrap0_s7_clk] = &gcc_qupv3_wrap0_s7_clk.clkr, + [gcc_qupv3_wrap0_s7_clk_src] = &gcc_qupv3_wrap0_s7_clk_src.clkr, + [gcc_qupv3_wrap1_core_2x_clk] = &gcc_qupv3_wrap1_core_2x_clk.clkr, + [gcc_qupv3_wrap1_core_clk] = &gcc_qupv3_wrap1_core_clk.clkr, + [gcc_qupv3_wrap1_s0_clk] = &gcc_qupv3_wrap1_s0_clk.clkr, + [gcc_qupv3_wrap1_s0_clk_src] = &gcc_qupv3_wrap1_s0_clk_src.clkr, + [gcc_qupv3_wrap1_s1_clk] = &gcc_qupv3_wrap1_s1_clk.clkr, + [gcc_qupv3_wrap1_s1_clk_src] = &gcc_qupv3_wrap1_s1_clk_src.clkr, + [gcc_qupv3_wrap1_s2_clk] = &gcc_qupv3_wrap1_s2_clk.clkr, + [gcc_qupv3_wrap1_s2_clk_src] = &gcc_qupv3_wrap1_s2_clk_src.clkr, + [gcc_qupv3_wrap1_s3_clk] = &gcc_qupv3_wrap1_s3_clk.clkr, + [gcc_qupv3_wrap1_s3_clk_src] = &gcc_qupv3_wrap1_s3_clk_src.clkr, + [gcc_qupv3_wrap1_s4_clk] = &gcc_qupv3_wrap1_s4_clk.clkr, + [gcc_qupv3_wrap1_s4_clk_src] = &gcc_qupv3_wrap1_s4_clk_src.clkr, + [gcc_qupv3_wrap1_s5_clk] = &gcc_qupv3_wrap1_s5_clk.clkr, + [gcc_qupv3_wrap1_s5_clk_src] = &gcc_qupv3_wrap1_s5_clk_src.clkr, + [gcc_qupv3_wrap1_s6_clk] = &gcc_qupv3_wrap1_s6_clk.clkr, + [gcc_qupv3_wrap1_s6_clk_src] = &gcc_qupv3_wrap1_s6_clk_src.clkr, + [gcc_qupv3_wrap1_s7_clk] = &gcc_qupv3_wrap1_s7_clk.clkr, + [gcc_qupv3_wrap1_s7_clk_src] = &gcc_qupv3_wrap1_s7_clk_src.clkr, + [gcc_qupv3_wrap_0_m_ahb_clk] = &gcc_qupv3_wrap_0_m_ahb_clk.clkr, + [gcc_qupv3_wrap_0_s_ahb_clk] = &gcc_qupv3_wrap_0_s_ahb_clk.clkr, + [gcc_qupv3_wrap_1_m_ahb_clk] = &gcc_qupv3_wrap_1_m_ahb_clk.clkr, + [gcc_qupv3_wrap_1_s_ahb_clk] = &gcc_qupv3_wrap_1_s_ahb_clk.clkr, + [gcc_sdcc1_ahb_clk] = &gcc_sdcc1_ahb_clk.clkr, + [gcc_sdcc1_apps_clk] = &gcc_sdcc1_apps_clk.clkr, + [gcc_sdcc1_apps_clk_src] = &gcc_sdcc1_apps_clk_src.clkr, + [gcc_sdcc1_ice_core_clk] = 
&gcc_sdcc1_ice_core_clk.clkr, + [gcc_sdcc1_ice_core_clk_src] = &gcc_sdcc1_ice_core_clk_src.clkr, + [gcc_sdcc2_ahb_clk] = &gcc_sdcc2_ahb_clk.clkr, + [gcc_sdcc2_apps_clk] = &gcc_sdcc2_apps_clk.clkr, + [gcc_sdcc2_apps_clk_src] = &gcc_sdcc2_apps_clk_src.clkr, + [gcc_sdcc4_ahb_clk] = &gcc_sdcc4_ahb_clk.clkr, + [gcc_sdcc4_apps_clk] = &gcc_sdcc4_apps_clk.clkr, + [gcc_sdcc4_apps_clk_src] = &gcc_sdcc4_apps_clk_src.clkr, + [gcc_sys_noc_cpuss_ahb_clk] = &gcc_sys_noc_cpuss_ahb_clk.clkr, + [gcc_throttle_pcie_ahb_clk] = &gcc_throttle_pcie_ahb_clk.clkr, + [gcc_titan_nrt_throttle_core_clk] = + &gcc_titan_nrt_throttle_core_clk.clkr, + [gcc_titan_rt_throttle_core_clk] = &gcc_titan_rt_throttle_core_clk.clkr, + [gcc_ufs_1_clkref_en] = &gcc_ufs_1_clkref_en.clkr, + [gcc_ufs_phy_ahb_clk] = &gcc_ufs_phy_ahb_clk.clkr, + [gcc_ufs_phy_axi_clk] = &gcc_ufs_phy_axi_clk.clkr, + [gcc_ufs_phy_axi_clk_src] = &gcc_ufs_phy_axi_clk_src.clkr, + [gcc_ufs_phy_ice_core_clk] = &gcc_ufs_phy_ice_core_clk.clkr, + [gcc_ufs_phy_ice_core_clk_src] = &gcc_ufs_phy_ice_core_clk_src.clkr, + [gcc_ufs_phy_phy_aux_clk] = &gcc_ufs_phy_phy_aux_clk.clkr, + [gcc_ufs_phy_phy_aux_clk_src] = &gcc_ufs_phy_phy_aux_clk_src.clkr, + [gcc_ufs_phy_rx_symbol_0_clk] = &gcc_ufs_phy_rx_symbol_0_clk.clkr, + [gcc_ufs_phy_rx_symbol_0_clk_src] = + &gcc_ufs_phy_rx_symbol_0_clk_src.clkr, + [gcc_ufs_phy_rx_symbol_1_clk] = &gcc_ufs_phy_rx_symbol_1_clk.clkr, + [gcc_ufs_phy_rx_symbol_1_clk_src] = + &gcc_ufs_phy_rx_symbol_1_clk_src.clkr, + [gcc_ufs_phy_tx_symbol_0_clk] = &gcc_ufs_phy_tx_symbol_0_clk.clkr, + [gcc_ufs_phy_tx_symbol_0_clk_src] = + &gcc_ufs_phy_tx_symbol_0_clk_src.clkr, + [gcc_ufs_phy_unipro_core_clk] = &gcc_ufs_phy_unipro_core_clk.clkr, + [gcc_ufs_phy_unipro_core_clk_src] = + &gcc_ufs_phy_unipro_core_clk_src.clkr, + [gcc_usb30_prim_master_clk] = &gcc_usb30_prim_master_clk.clkr, + [gcc_usb30_prim_master_clk_src] = &gcc_usb30_prim_master_clk_src.clkr, + [gcc_usb30_prim_mock_utmi_clk] = &gcc_usb30_prim_mock_utmi_clk.clkr, + 
[gcc_usb30_prim_mock_utmi_clk_src] = + &gcc_usb30_prim_mock_utmi_clk_src.clkr, + [gcc_usb30_prim_mock_utmi_postdiv_clk_src] = + &gcc_usb30_prim_mock_utmi_postdiv_clk_src.clkr, + [gcc_usb30_prim_sleep_clk] = &gcc_usb30_prim_sleep_clk.clkr, + [gcc_usb30_sec_master_clk] = &gcc_usb30_sec_master_clk.clkr, + [gcc_usb30_sec_master_clk_src] = &gcc_usb30_sec_master_clk_src.clkr, + [gcc_usb30_sec_mock_utmi_clk] = &gcc_usb30_sec_mock_utmi_clk.clkr, + [gcc_usb30_sec_mock_utmi_clk_src] = + &gcc_usb30_sec_mock_utmi_clk_src.clkr, + [gcc_usb30_sec_mock_utmi_postdiv_clk_src] = + &gcc_usb30_sec_mock_utmi_postdiv_clk_src.clkr, + [gcc_usb30_sec_sleep_clk] = &gcc_usb30_sec_sleep_clk.clkr, + [gcc_usb3_prim_phy_aux_clk] = &gcc_usb3_prim_phy_aux_clk.clkr, + [gcc_usb3_prim_phy_aux_clk_src] = &gcc_usb3_prim_phy_aux_clk_src.clkr, + [gcc_usb3_prim_phy_com_aux_clk] = &gcc_usb3_prim_phy_com_aux_clk.clkr, + [gcc_usb3_prim_phy_pipe_clk] = &gcc_usb3_prim_phy_pipe_clk.clkr, + [gcc_usb3_prim_phy_pipe_clk_src] = &gcc_usb3_prim_phy_pipe_clk_src.clkr, + [gcc_usb3_sec_phy_aux_clk] = &gcc_usb3_sec_phy_aux_clk.clkr, + [gcc_usb3_sec_phy_aux_clk_src] = &gcc_usb3_sec_phy_aux_clk_src.clkr, + [gcc_usb3_sec_phy_com_aux_clk] = &gcc_usb3_sec_phy_com_aux_clk.clkr, + [gcc_usb3_sec_phy_pipe_clk] = &gcc_usb3_sec_phy_pipe_clk.clkr, + [gcc_usb3_sec_phy_pipe_clk_src] = &gcc_usb3_sec_phy_pipe_clk_src.clkr, + [gcc_video_axi0_clk] = &gcc_video_axi0_clk.clkr, + [gcc_video_mvp_throttle_core_clk] = + &gcc_video_mvp_throttle_core_clk.clkr, + [gcc_cfg_noc_lpass_clk] = &gcc_cfg_noc_lpass_clk.clkr, + [gcc_mss_gpll0_main_div_clk_src] = &gcc_mss_gpll0_main_div_clk_src.clkr, + [gcc_mss_cfg_ahb_clk] = &gcc_mss_cfg_ahb_clk.clkr, + [gcc_mss_offline_axi_clk] = &gcc_mss_offline_axi_clk.clkr, + [gcc_mss_snoc_axi_clk] = &gcc_mss_snoc_axi_clk.clkr, + [gcc_mss_q6_memnoc_axi_clk] = &gcc_mss_q6_memnoc_axi_clk.clkr, + [gcc_mss_q6ss_boot_clk_src] = &gcc_mss_q6ss_boot_clk_src.clkr, + [gcc_aggre_noc_pcie_tbu_clk] = 
&gcc_aggre_noc_pcie_tbu_clk.clkr, + [gcc_aggre_noc_pcie_center_sf_axi_clk] = + &gcc_aggre_noc_pcie_center_sf_axi_clk.clkr, + [gcc_pcie_clkref_en] = &gcc_pcie_clkref_en.clkr, + [gcc_edp_clkref_en] = &gcc_edp_clkref_en.clkr, + [gcc_sec_ctrl_clk_src] = &gcc_sec_ctrl_clk_src.clkr, + [gcc_wpss_ahb_clk] = &gcc_wpss_ahb_clk.clkr, + [gcc_wpss_ahb_bdg_mst_clk] = &gcc_wpss_ahb_bdg_mst_clk.clkr, + [gcc_wpss_rscp_clk] = &gcc_wpss_rscp_clk.clkr, +}; + +static struct gdsc *gcc_sc7280_gdscs[] = { + [gcc_pcie_0_gdsc] = &gcc_pcie_0_gdsc, + [gcc_pcie_1_gdsc] = &gcc_pcie_1_gdsc, + [gcc_ufs_phy_gdsc] = &gcc_ufs_phy_gdsc, + [gcc_usb30_prim_gdsc] = &gcc_usb30_prim_gdsc, + [gcc_usb30_sec_gdsc] = &gcc_usb30_sec_gdsc, + [hlos1_vote_mmnoc_mmu_tbu_hf0_gdsc] = &hlos1_vote_mmnoc_mmu_tbu_hf0_gdsc, + [hlos1_vote_mmnoc_mmu_tbu_hf1_gdsc] = &hlos1_vote_mmnoc_mmu_tbu_hf1_gdsc, + [hlos1_vote_mmnoc_mmu_tbu_sf0_gdsc] = &hlos1_vote_mmnoc_mmu_tbu_sf0_gdsc, + [hlos1_vote_turing_mmu_tbu0_gdsc] = &hlos1_vote_turing_mmu_tbu0_gdsc, + [hlos1_vote_turing_mmu_tbu1_gdsc] = &hlos1_vote_turing_mmu_tbu1_gdsc, +}; + +static const struct qcom_reset_map gcc_sc7280_resets[] = { + [gcc_pcie_0_bcr] = { 0x6b000 }, + [gcc_pcie_0_phy_bcr] = { 0x6c01c }, + [gcc_pcie_1_bcr] = { 0x8d000 }, + [gcc_pcie_1_phy_bcr] = { 0x8e01c }, + [gcc_qusb2phy_prim_bcr] = { 0x12000 }, + [gcc_qusb2phy_sec_bcr] = { 0x12004 }, + [gcc_sdcc1_bcr] = { 0x75000 }, + [gcc_sdcc2_bcr] = { 0x14000 }, + [gcc_sdcc4_bcr] = { 0x16000 }, + [gcc_ufs_phy_bcr] = { 0x77000 }, + [gcc_usb30_prim_bcr] = { 0xf000 }, + [gcc_usb30_sec_bcr] = { 0x9e000 }, + [gcc_usb3_dp_phy_prim_bcr] = { 0x50008 }, + [gcc_usb3_phy_prim_bcr] = { 0x50000 }, + [gcc_usb3phy_phy_prim_bcr] = { 0x50004 }, + [gcc_usb_phy_cfg_ahb2phy_bcr] = { 0x6a000 }, +}; + +static const struct clk_rcg_dfs_data gcc_dfs_clocks[] = { + define_rcg_dfs(gcc_qupv3_wrap0_s0_clk_src), + define_rcg_dfs(gcc_qupv3_wrap0_s1_clk_src), + define_rcg_dfs(gcc_qupv3_wrap0_s2_clk_src), + define_rcg_dfs(gcc_qupv3_wrap0_s3_clk_src), 
+ define_rcg_dfs(gcc_qupv3_wrap0_s4_clk_src), + define_rcg_dfs(gcc_qupv3_wrap0_s5_clk_src), + define_rcg_dfs(gcc_qupv3_wrap0_s6_clk_src), + define_rcg_dfs(gcc_qupv3_wrap0_s7_clk_src), + define_rcg_dfs(gcc_qupv3_wrap1_s0_clk_src), + define_rcg_dfs(gcc_qupv3_wrap1_s1_clk_src), + define_rcg_dfs(gcc_qupv3_wrap1_s2_clk_src), + define_rcg_dfs(gcc_qupv3_wrap1_s3_clk_src), + define_rcg_dfs(gcc_qupv3_wrap1_s4_clk_src), + define_rcg_dfs(gcc_qupv3_wrap1_s5_clk_src), + define_rcg_dfs(gcc_qupv3_wrap1_s6_clk_src), + define_rcg_dfs(gcc_qupv3_wrap1_s7_clk_src), +}; + +static const struct regmap_config gcc_sc7280_regmap_config = { + .reg_bits = 32, + .reg_stride = 4, + .val_bits = 32, + .max_register = 0x9f128, + .fast_io = true, +}; + +static const struct qcom_cc_desc gcc_sc7280_desc = { + .config = &gcc_sc7280_regmap_config, + .clks = gcc_sc7280_clocks, + .num_clks = array_size(gcc_sc7280_clocks), + .resets = gcc_sc7280_resets, + .num_resets = array_size(gcc_sc7280_resets), + .gdscs = gcc_sc7280_gdscs, + .num_gdscs = array_size(gcc_sc7280_gdscs), +}; + +static const struct of_device_id gcc_sc7280_match_table[] = { + { .compatible = "qcom,gcc-sc7280" }, + { } +}; +module_device_table(of, gcc_sc7280_match_table); + +static int gcc_sc7280_probe(struct platform_device *pdev) +{ + struct regmap *regmap; + int ret; + + regmap = qcom_cc_map(pdev, &gcc_sc7280_desc); + if (is_err(regmap)) + return ptr_err(regmap); + + /* + * keep the clocks always-on + * gcc_camera_ahb_clk/xo_clk, gcc_disp_ahb_clk/xo_clk + * gcc_video_ahb_clk/xo_clk, gcc_gpu_cfg_ahb_clk + */ + regmap_update_bits(regmap, 0x26004, bit(0), bit(0)); + regmap_update_bits(regmap, 0x26028, bit(0), bit(0)); + regmap_update_bits(regmap, 0x27004, bit(0), bit(0)); + regmap_update_bits(regmap, 0x2701c, bit(0), bit(0)); + regmap_update_bits(regmap, 0x28004, bit(0), bit(0)); + regmap_update_bits(regmap, 0x28014, bit(0), bit(0)); + regmap_update_bits(regmap, 0x71004, bit(0), bit(0)); + + ret = qcom_cc_register_rcg_dfs(regmap, 
gcc_dfs_clocks, + array_size(gcc_dfs_clocks)); + if (ret) + return ret; + + return qcom_cc_really_probe(pdev, &gcc_sc7280_desc, regmap); +} + +static struct platform_driver gcc_sc7280_driver = { + .probe = gcc_sc7280_probe, + .driver = { + .name = "gcc-sc7280", + .of_match_table = gcc_sc7280_match_table, + }, +}; + +static int __init gcc_sc7280_init(void) +{ + return platform_driver_register(&gcc_sc7280_driver); +} +subsys_initcall(gcc_sc7280_init); + +static void __exit gcc_sc7280_exit(void) +{ + platform_driver_unregister(&gcc_sc7280_driver); +} +module_exit(gcc_sc7280_exit); + +module_description("qti gcc sc7280 driver"); +module_license("gpl v2");
|
Clock
|
a3cc092196ef63570c8744c3ac88c3c6c67ab44b
|
taniya das
|
drivers
|
clk
|
qcom
|
clk: qcom: add sdm660 gpu clock controller (gpucc) driver
|
the gpucc manages the clocks for the adreno gpu found on the sdm630, sdm636, sdm660 socs.
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add sdm660 gpu clock controller (gpucc) driver
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['qcom']
|
['h', 'kconfig', 'c', 'makefile']
| 4
| 387
| 0
|
--- diff --git a/drivers/clk/qcom/kconfig b/drivers/clk/qcom/kconfig --- a/drivers/clk/qcom/kconfig +++ b/drivers/clk/qcom/kconfig +config sdm_gpucc_660 + tristate "sdm660 graphics clock controller" + select sdm_gcc_660 + select qcom_gdsc + help + support for the graphics clock controller on sdm630/636/660 devices. + say y if you want to support graphics controller devices and + functionality such as 3d graphics + diff --git a/drivers/clk/qcom/makefile b/drivers/clk/qcom/makefile --- a/drivers/clk/qcom/makefile +++ b/drivers/clk/qcom/makefile +obj-$(config_sdm_gpucc_660) += gpucc-sdm660.o diff --git a/drivers/clk/qcom/gpucc-sdm660.c b/drivers/clk/qcom/gpucc-sdm660.c --- /dev/null +++ b/drivers/clk/qcom/gpucc-sdm660.c +// spdx-license-identifier: gpl-2.0-only +/* + * copyright (c) 2020, the linux foundation. all rights reserved. + * copyright (c) 2020, angelogioacchino del regno + * <angelogioacchino.delregno@somainline.org> + */ + +#include <linux/bitops.h> +#include <linux/clk.h> +#include <linux/clk-provider.h> +#include <linux/err.h> +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/platform_device.h> +#include <linux/of.h> +#include <linux/of_device.h> +#include <linux/regmap.h> +#include <linux/reset-controller.h> +#include <dt-bindings/clock/qcom,gpucc-sdm660.h> + +#include "clk-alpha-pll.h" +#include "common.h" +#include "clk-regmap.h" +#include "clk-pll.h" +#include "clk-rcg.h" +#include "clk-branch.h" +#include "gdsc.h" +#include "reset.h" + +enum { + p_gpu_xo, + p_core_bi_pll_test_se, + p_gpll0_out_main, + p_gpll0_out_main_div, + p_gpu_pll0_pll_out_main, + p_gpu_pll1_pll_out_main, +}; + +static struct clk_branch gpucc_cxo_clk = { + .halt_reg = 0x1020, + .clkr = { + .enable_reg = 0x1020, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gpucc_cxo_clk", + .parent_data = &(const struct clk_parent_data){ + .fw_name = "xo", + .name = "xo" + }, + .num_parents = 1, + .ops = &clk_branch2_ops, + .flags = 
clk_is_critical, + }, + }, +}; + +static struct pll_vco gpu_vco[] = { + { 1000000000, 2000000000, 0 }, + { 500000000, 1000000000, 2 }, + { 250000000, 500000000, 3 }, +}; + +static struct clk_alpha_pll gpu_pll0_pll_out_main = { + .offset = 0x0, + .regs = clk_alpha_pll_regs[clk_alpha_pll_type_default], + .vco_table = gpu_vco, + .num_vco = array_size(gpu_vco), + .clkr.hw.init = &(struct clk_init_data){ + .name = "gpu_pll0_pll_out_main", + .parent_data = &(const struct clk_parent_data){ + .hw = &gpucc_cxo_clk.clkr.hw, + }, + .num_parents = 1, + .ops = &clk_alpha_pll_ops, + }, +}; + +static struct clk_alpha_pll gpu_pll1_pll_out_main = { + .offset = 0x40, + .regs = clk_alpha_pll_regs[clk_alpha_pll_type_default], + .vco_table = gpu_vco, + .num_vco = array_size(gpu_vco), + .clkr.hw.init = &(struct clk_init_data){ + .name = "gpu_pll1_pll_out_main", + .parent_data = &(const struct clk_parent_data){ + .hw = &gpucc_cxo_clk.clkr.hw, + }, + .num_parents = 1, + .ops = &clk_alpha_pll_ops, + }, +}; + +static const struct parent_map gpucc_parent_map_1[] = { + { p_gpu_xo, 0 }, + { p_gpu_pll0_pll_out_main, 1 }, + { p_gpu_pll1_pll_out_main, 3 }, + { p_gpll0_out_main, 5 }, +}; + +static const struct clk_parent_data gpucc_parent_data_1[] = { + { .hw = &gpucc_cxo_clk.clkr.hw }, + { .hw = &gpu_pll0_pll_out_main.clkr.hw }, + { .hw = &gpu_pll1_pll_out_main.clkr.hw }, + { .fw_name = "gcc_gpu_gpll0_clk", .name = "gcc_gpu_gpll0_clk" }, +}; + +static struct clk_rcg2_gfx3d gfx3d_clk_src = { + .div = 2, + .rcg = { + .cmd_rcgr = 0x1070, + .mnd_width = 0, + .hid_width = 5, + .parent_map = gpucc_parent_map_1, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gfx3d_clk_src", + .parent_data = gpucc_parent_data_1, + .num_parents = 4, + .ops = &clk_gfx3d_ops, + .flags = clk_set_rate_parent | clk_ops_parent_enable, + }, + }, + .hws = (struct clk_hw*[]){ + &gpucc_cxo_clk.clkr.hw, + &gpu_pll0_pll_out_main.clkr.hw, + &gpu_pll1_pll_out_main.clkr.hw, + } +}; + +static struct clk_branch gpucc_gfx3d_clk = { 
+ .halt_reg = 0x1098, + .halt_check = branch_halt, + .hwcg_reg = 0x1098, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x1098, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gpucc_gfx3d_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gfx3d_clk_src.rcg.clkr.hw, + }, + .num_parents = 1, + .ops = &clk_branch2_ops, + .flags = clk_set_rate_parent, + }, + }, +}; + +static const struct parent_map gpucc_parent_map_0[] = { + { p_gpu_xo, 0 }, + { p_gpll0_out_main, 5 }, + { p_gpll0_out_main_div, 6 }, +}; + +static const struct clk_parent_data gpucc_parent_data_0[] = { + { .hw = &gpucc_cxo_clk.clkr.hw }, + { .fw_name = "gcc_gpu_gpll0_clk", .name = "gcc_gpu_gpll0_clk" }, + { .fw_name = "gcc_gpu_gpll0_div_clk", .name = "gcc_gpu_gpll0_div_clk" }, +}; + +static const struct freq_tbl ftbl_rbbmtimer_clk_src[] = { + f(19200000, p_gpu_xo, 1, 0, 0), + { } +}; + +static struct clk_rcg2 rbbmtimer_clk_src = { + .cmd_rcgr = 0x10b0, + .mnd_width = 0, + .hid_width = 5, + .parent_map = gpucc_parent_map_0, + .freq_tbl = ftbl_rbbmtimer_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "rbbmtimer_clk_src", + .parent_data = gpucc_parent_data_0, + .num_parents = 3, + .ops = &clk_rcg2_ops, + }, +}; + +static const struct freq_tbl ftbl_rbcpr_clk_src[] = { + f(19200000, p_gpu_xo, 1, 0, 0), + f(50000000, p_gpll0_out_main_div, 6, 0, 0), + { } +}; + +static struct clk_rcg2 rbcpr_clk_src = { + .cmd_rcgr = 0x1030, + .mnd_width = 0, + .hid_width = 5, + .parent_map = gpucc_parent_map_0, + .freq_tbl = ftbl_rbcpr_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "rbcpr_clk_src", + .parent_data = gpucc_parent_data_0, + .num_parents = 3, + .ops = &clk_rcg2_ops, + }, +}; + +static struct clk_branch gpucc_rbbmtimer_clk = { + .halt_reg = 0x10d0, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x10d0, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gpucc_rbbmtimer_clk", + .parent_names = (const char *[]){ + 
"rbbmtimer_clk_src", + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gpucc_rbcpr_clk = { + .halt_reg = 0x1054, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x1054, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gpucc_rbcpr_clk", + .parent_names = (const char *[]){ + "rbcpr_clk_src", + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct gdsc gpu_cx_gdsc = { + .gdscr = 0x1004, + .gds_hw_ctrl = 0x1008, + .pd = { + .name = "gpu_cx", + }, + .pwrsts = pwrsts_off_on, + .flags = votable, +}; + +static struct gdsc gpu_gx_gdsc = { + .gdscr = 0x1094, + .clamp_io_ctrl = 0x130, + .resets = (unsigned int []){ gpu_gx_bcr }, + .reset_count = 1, + .cxcs = (unsigned int []){ 0x1098 }, + .cxc_count = 1, + .pd = { + .name = "gpu_gx", + }, + .parent = &gpu_cx_gdsc.pd, + .pwrsts = pwrsts_off | pwrsts_on | pwrsts_ret, + .flags = clamp_io | sw_reset | aon_reset | no_ret_periph, +}; + +static struct gdsc *gpucc_sdm660_gdscs[] = { + [gpu_cx_gdsc] = &gpu_cx_gdsc, + [gpu_gx_gdsc] = &gpu_gx_gdsc, +}; + +static const struct qcom_reset_map gpucc_sdm660_resets[] = { + [gpu_cx_bcr] = { 0x1000 }, + [rbcpr_bcr] = { 0x1050 }, + [gpu_gx_bcr] = { 0x1090 }, + [spdm_bcr] = { 0x10e0 }, +}; + +static struct clk_regmap *gpucc_sdm660_clocks[] = { + [gpucc_cxo_clk] = &gpucc_cxo_clk.clkr, + [gpu_pll0_pll] = &gpu_pll0_pll_out_main.clkr, + [gpu_pll1_pll] = &gpu_pll1_pll_out_main.clkr, + [gfx3d_clk_src] = &gfx3d_clk_src.rcg.clkr, + [rbcpr_clk_src] = &rbcpr_clk_src.clkr, + [rbbmtimer_clk_src] = &rbbmtimer_clk_src.clkr, + [gpucc_rbcpr_clk] = &gpucc_rbcpr_clk.clkr, + [gpucc_gfx3d_clk] = &gpucc_gfx3d_clk.clkr, + [gpucc_rbbmtimer_clk] = &gpucc_rbbmtimer_clk.clkr, +}; + +static const struct regmap_config gpucc_660_regmap_config = { + .reg_bits = 32, + .reg_stride = 4, + .val_bits = 32, + .max_register = 0x9034, + .fast_io = true, +}; + +static 
const struct qcom_cc_desc gpucc_sdm660_desc = { + .config = &gpucc_660_regmap_config, + .clks = gpucc_sdm660_clocks, + .num_clks = array_size(gpucc_sdm660_clocks), + .resets = gpucc_sdm660_resets, + .num_resets = array_size(gpucc_sdm660_resets), + .gdscs = gpucc_sdm660_gdscs, + .num_gdscs = array_size(gpucc_sdm660_gdscs), +}; + +static const struct of_device_id gpucc_sdm660_match_table[] = { + { .compatible = "qcom,gpucc-sdm660" }, + { .compatible = "qcom,gpucc-sdm630" }, + { } +}; +module_device_table(of, gpucc_sdm660_match_table); + +static int gpucc_sdm660_probe(struct platform_device *pdev) +{ + struct regmap *regmap; + struct alpha_pll_config gpu_pll_config = { + .config_ctl_val = 0x4001055b, + .alpha = 0xaaaaab00, + .alpha_en_mask = bit(24), + .vco_val = 0x2 << 20, + .vco_mask = 0x3 << 20, + .main_output_mask = 0x1, + }; + + regmap = qcom_cc_map(pdev, &gpucc_sdm660_desc); + if (is_err(regmap)) + return ptr_err(regmap); + + /* 800mhz configuration for gpu pll0 */ + gpu_pll_config.l = 0x29; + gpu_pll_config.alpha_hi = 0xaa; + clk_alpha_pll_configure(&gpu_pll0_pll_out_main, regmap, &gpu_pll_config); + + /* 740mhz configuration for gpu pll1 */ + gpu_pll_config.l = 0x26; + gpu_pll_config.alpha_hi = 0x8a; + clk_alpha_pll_configure(&gpu_pll1_pll_out_main, regmap, &gpu_pll_config); + + return qcom_cc_really_probe(pdev, &gpucc_sdm660_desc, regmap); +} + +static struct platform_driver gpucc_sdm660_driver = { + .probe = gpucc_sdm660_probe, + .driver = { + .name = "gpucc-sdm660", + .of_match_table = gpucc_sdm660_match_table, + }, +}; +module_platform_driver(gpucc_sdm660_driver); + +module_description("qualcomm sdm630/sdm660 gpucc driver"); +module_license("gpl v2"); diff --git a/include/dt-bindings/clock/qcom,gpucc-sdm660.h b/include/dt-bindings/clock/qcom,gpucc-sdm660.h --- /dev/null +++ b/include/dt-bindings/clock/qcom,gpucc-sdm660.h +/* spdx-license-identifier: gpl-2.0 */ +/* + * copyright (c) 2020, the linux foundation. all rights reserved. 
+ * copyright (c) 2020, angelogioacchino del regno <angelogioacchino.delregno@somainline.org> + */ + +#ifndef _dt_bindings_clk_sdm_gpucc_660_h +#define _dt_bindings_clk_sdm_gpucc_660_h + +#define gpucc_cxo_clk 0 +#define gpu_pll0_pll 1 +#define gpu_pll1_pll 2 +#define gfx3d_clk_src 3 +#define rbcpr_clk_src 4 +#define rbbmtimer_clk_src 5 +#define gpucc_rbcpr_clk 6 +#define gpucc_gfx3d_clk 7 +#define gpucc_rbbmtimer_clk 8 + +#define gpu_cx_gdsc 0 +#define gpu_gx_gdsc 1 + +#define gpu_cx_bcr 0 +#define gpu_gx_bcr 1 +#define rbcpr_bcr 2 +#define spdm_bcr 3 + +#endif
|
Clock
|
79b5d1fc93a1f114a0974a076b5a25ca64b37b0f
|
angelogioacchino del regno
|
include
|
dt-bindings
|
clock, qcom
|
clk: qcom: add sdx55 apcs clock controller support
|
add a driver for the sdx55 apcs clock controller. it is part of the apcs hardware block, which among other things also implements combined mux and half-integer divider functionality. the apcs clock controller has 3 parent clocks:
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add sdx55 apcs clock controller support
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['qcom']
|
['kconfig', 'c', 'makefile']
| 3
| 159
| 0
|
--- diff --git a/drivers/clk/qcom/kconfig b/drivers/clk/qcom/kconfig --- a/drivers/clk/qcom/kconfig +++ b/drivers/clk/qcom/kconfig +config qcom_clk_apcs_sdx55 + tristate "sdx55 apcs clock controller" + depends on qcom_apcs_ipc || compile_test + help + support for the apcs clock controller on sdx55 platform. the + apcs is managing the mux and divider which feeds the cpus. + say y if you want to support cpu frequency scaling on devices + such as sdx55. + diff --git a/drivers/clk/qcom/makefile b/drivers/clk/qcom/makefile --- a/drivers/clk/qcom/makefile +++ b/drivers/clk/qcom/makefile +obj-$(config_qcom_clk_apcs_sdx55) += apcs-sdx55.o diff --git a/drivers/clk/qcom/apcs-sdx55.c b/drivers/clk/qcom/apcs-sdx55.c --- /dev/null +++ b/drivers/clk/qcom/apcs-sdx55.c +// spdx-license-identifier: gpl-2.0 +/* + * qualcomm sdx55 apcs clock controller driver + * + * copyright (c) 2020, linaro limited + * author: manivannan sadhasivam <manivannan.sadhasivam@linaro.org> + */ + +#include <linux/clk.h> +#include <linux/clk-provider.h> +#include <linux/cpu.h> +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/platform_device.h> +#include <linux/pm_domain.h> +#include <linux/regmap.h> +#include <linux/slab.h> + +#include "clk-regmap.h" +#include "clk-regmap-mux-div.h" + +static const u32 apcs_mux_clk_parent_map[] = { 0, 1, 5 }; + +static const struct clk_parent_data pdata[] = { + { .fw_name = "ref" }, + { .fw_name = "aux" }, + { .fw_name = "pll" }, +}; + +/* + * we use the notifier function for switching to a temporary safe configuration + * (mux and divider), while the a7 pll is reconfigured. 
+ */ +static int a7cc_notifier_cb(struct notifier_block *nb, unsigned long event, + void *data) +{ + int ret = 0; + struct clk_regmap_mux_div *md = container_of(nb, + struct clk_regmap_mux_div, + clk_nb); + if (event == pre_rate_change) + /* set the mux and divider to safe frequency (400mhz) */ + ret = mux_div_set_src_div(md, 1, 2); + + return notifier_from_errno(ret); +} + +static int qcom_apcs_sdx55_clk_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct device *parent = dev->parent; + struct device *cpu_dev; + struct clk_regmap_mux_div *a7cc; + struct regmap *regmap; + struct clk_init_data init = { }; + int ret; + + regmap = dev_get_regmap(parent, null); + if (!regmap) { + dev_err_probe(dev, -enodev, "failed to get parent regmap "); + return -enodev; + } + + a7cc = devm_kzalloc(dev, sizeof(*a7cc), gfp_kernel); + if (!a7cc) + return -enomem; + + init.name = "a7mux"; + init.parent_data = pdata; + init.num_parents = array_size(pdata); + init.ops = &clk_regmap_mux_div_ops; + + a7cc->clkr.hw.init = &init; + a7cc->clkr.regmap = regmap; + a7cc->reg_offset = 0x8; + a7cc->hid_width = 5; + a7cc->hid_shift = 0; + a7cc->src_width = 3; + a7cc->src_shift = 8; + a7cc->parent_map = apcs_mux_clk_parent_map; + + a7cc->pclk = devm_clk_get(parent, "pll"); + if (is_err(a7cc->pclk)) { + ret = ptr_err(a7cc->pclk); + if (ret != -eprobe_defer) + dev_err_probe(dev, ret, "failed to get pll clk "); + return ret; + } + + a7cc->clk_nb.notifier_call = a7cc_notifier_cb; + ret = clk_notifier_register(a7cc->pclk, &a7cc->clk_nb); + if (ret) { + dev_err_probe(dev, ret, "failed to register clock notifier "); + return ret; + } + + ret = devm_clk_register_regmap(dev, &a7cc->clkr); + if (ret) { + dev_err_probe(dev, ret, "failed to register regmap clock "); + goto err; + } + + ret = devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get, + &a7cc->clkr.hw); + if (ret) { + dev_err_probe(dev, ret, "failed to add clock provider "); + goto err; + } + + 
platform_set_drvdata(pdev, a7cc); + + /* + * attach the power domain to cpudev. since there is no dedicated driver + * for cpus and the sdx55 platform lacks hardware specific cpufreq + * driver, there seems to be no better place to do this. so do it here! + */ + cpu_dev = get_cpu_device(0); + dev_pm_domain_attach(cpu_dev, true); + + return 0; + +err: + clk_notifier_unregister(a7cc->pclk, &a7cc->clk_nb); + return ret; +} + +static int qcom_apcs_sdx55_clk_remove(struct platform_device *pdev) +{ + struct device *cpu_dev = get_cpu_device(0); + struct clk_regmap_mux_div *a7cc = platform_get_drvdata(pdev); + + clk_notifier_unregister(a7cc->pclk, &a7cc->clk_nb); + dev_pm_domain_detach(cpu_dev, true); + + return 0; +} + +static struct platform_driver qcom_apcs_sdx55_clk_driver = { + .probe = qcom_apcs_sdx55_clk_probe, + .remove = qcom_apcs_sdx55_clk_remove, + .driver = { + .name = "qcom-sdx55-acps-clk", + }, +}; +module_platform_driver(qcom_apcs_sdx55_clk_driver); + +module_author("manivannan sadhasivam <manivannan.sadhasivam@linaro.org>"); +module_license("gpl v2"); +module_description("qualcomm sdx55 apcs clock driver");
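The comment in the diff above explains that the notifier callback parks the CPU mux on a safe source/divider before the A7 PLL is reprogrammed. A minimal user-space sketch of that pattern is below; the constants and `errno_to_notify()` are simplified stand-ins for the kernel's notifier plumbing, not the real `<linux/notifier.h>` API, and the register write is stubbed out.

```c
#include <assert.h>

/* Hypothetical stand-ins for the kernel's notifier constants and
 * notifier_from_errno() -- simplified for illustration only. */
#define NOTIFY_DONE      0x0000
#define NOTIFY_STOP_MASK 0x8000
#define PRE_RATE_CHANGE  1UL
#define POST_RATE_CHANGE 2UL

static int parked_src, parked_div;   /* records the last mux/divider written */

/* Stub for mux_div_set_src_div(): pretend the register write always works. */
static int mux_div_set_src_div(int src, int div)
{
	parked_src = src;
	parked_div = div;
	return 0;
}

/* Simplified model: 0 becomes NOTIFY_DONE, an error sets the stop mask. */
static int errno_to_notify(int err)
{
	return err ? (NOTIFY_STOP_MASK | -err) : NOTIFY_DONE;
}

/* Model of a7cc_notifier_cb(): before the PLL rate changes, park the CPU
 * mux on a safe parent (index 1) with a /2 divider (~400 MHz). */
static int safe_park_cb(unsigned long event)
{
	int ret = 0;

	if (event == PRE_RATE_CHANGE)
		ret = mux_div_set_src_div(1, 2);

	return errno_to_notify(ret);
}
```

The point of the pattern is that only `PRE_RATE_CHANGE` triggers a write; the post-change event passes through untouched, so the CPU keeps running from the safe parent for the whole reconfiguration window.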
|
Clock
|
f28dec1ab71bddc76fb8931a16d5d42c13a048cc
|
manivannan sadhasivam
|
drivers
|
clk
|
qcom
|
clk: qcom: clk-alpha-pll: add support for lucid 5lpe pll
|
lucid 5lpe is a slightly different lucid pll with different offsets and programming sequence, so add support for it
|
this release allows mapping an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for a thermal power management framework to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add support for lucid 5lpe pll
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['qcom', 'clk-alpha-pll']
|
['h', 'c']
| 2
| 177
| 0
|
--- diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c --- a/drivers/clk/qcom/clk-alpha-pll.c +++ b/drivers/clk/qcom/clk-alpha-pll.c +/* lucid 5lpe pll specific settings and offsets */ +#define lucid_5lpe_pcal_done bit(11) +#define lucid_5lpe_alpha_pll_ack_latch bit(13) +#define lucid_5lpe_pll_latch_input bit(14) +#define lucid_5lpe_enable_vote_run bit(21) + + +static int alpha_pll_lucid_5lpe_enable(struct clk_hw *hw) +{ + struct clk_alpha_pll *pll = to_clk_alpha_pll(hw); + u32 val; + int ret; + + ret = regmap_read(pll->clkr.regmap, pll_user_ctl(pll), &val); + if (ret) + return ret; + + /* if in fsm mode, just vote for it */ + if (val & lucid_5lpe_enable_vote_run) { + ret = clk_enable_regmap(hw); + if (ret) + return ret; + return wait_for_pll_enable_lock(pll); + } + + /* check if pll is already enabled, return if enabled */ + ret = trion_pll_is_enabled(pll, pll->clkr.regmap); + if (ret < 0) + return ret; + + ret = regmap_update_bits(pll->clkr.regmap, pll_mode(pll), pll_reset_n, pll_reset_n); + if (ret) + return ret; + + regmap_write(pll->clkr.regmap, pll_opmode(pll), pll_run); + + ret = wait_for_pll_enable_lock(pll); + if (ret) + return ret; + + /* enable the pll outputs */ + ret = regmap_update_bits(pll->clkr.regmap, pll_user_ctl(pll), pll_out_mask, pll_out_mask); + if (ret) + return ret; + + /* enable the global pll outputs */ + return regmap_update_bits(pll->clkr.regmap, pll_mode(pll), pll_outctrl, pll_outctrl); +} + +static void alpha_pll_lucid_5lpe_disable(struct clk_hw *hw) +{ + struct clk_alpha_pll *pll = to_clk_alpha_pll(hw); + u32 val; + int ret; + + ret = regmap_read(pll->clkr.regmap, pll_user_ctl(pll), &val); + if (ret) + return; + + /* if in fsm mode, just unvote it */ + if (val & lucid_5lpe_enable_vote_run) { + clk_disable_regmap(hw); + return; + } + + /* disable the global pll output */ + ret = regmap_update_bits(pll->clkr.regmap, pll_mode(pll), pll_outctrl, 0); + if (ret) + return; + + /* disable the pll outputs */ + ret 
= regmap_update_bits(pll->clkr.regmap, pll_user_ctl(pll), pll_out_mask, 0); + if (ret) + return; + + /* place the pll mode in standby */ + regmap_write(pll->clkr.regmap, pll_opmode(pll), pll_standby); +} + +/* + * the lucid 5lpe pll requires a power-on self-calibration which happens + * when the pll comes out of reset. calibrate in case it is not completed. + */ +static int alpha_pll_lucid_5lpe_prepare(struct clk_hw *hw) +{ + struct clk_alpha_pll *pll = to_clk_alpha_pll(hw); + struct clk_hw *p; + u32 val = 0; + int ret; + + /* return early if calibration is not needed. */ + regmap_read(pll->clkr.regmap, pll_mode(pll), &val); + if (val & lucid_5lpe_pcal_done) + return 0; + + p = clk_hw_get_parent(hw); + if (!p) + return -einval; + + ret = alpha_pll_lucid_5lpe_enable(hw); + if (ret) + return ret; + + alpha_pll_lucid_5lpe_disable(hw); + + return 0; +} + +static int alpha_pll_lucid_5lpe_set_rate(struct clk_hw *hw, unsigned long rate, + unsigned long prate) +{ + return __alpha_pll_trion_set_rate(hw, rate, prate, + lucid_5lpe_pll_latch_input, + lucid_5lpe_alpha_pll_ack_latch); +} + +static int clk_lucid_5lpe_pll_postdiv_set_rate(struct clk_hw *hw, unsigned long rate, + unsigned long parent_rate) +{ + struct clk_alpha_pll_postdiv *pll = to_clk_alpha_pll_postdiv(hw); + int i, val = 0, div, ret; + u32 mask; + + /* + * if the pll is in fsm mode, then treat set_rate callback as a + * no-operation. 
+ */ + ret = regmap_read(pll->clkr.regmap, pll_user_ctl(pll), &val); + if (ret) + return ret; + + if (val & lucid_5lpe_enable_vote_run) + return 0; + + div = div_round_up_ull((u64)parent_rate, rate); + for (i = 0; i < pll->num_post_div; i++) { + if (pll->post_div_table[i].div == div) { + val = pll->post_div_table[i].val; + break; + } + } + + mask = genmask(pll->width + pll->post_div_shift - 1, pll->post_div_shift); + return regmap_update_bits(pll->clkr.regmap, pll_user_ctl(pll), + mask, val << pll->post_div_shift); +} + +const struct clk_ops clk_alpha_pll_lucid_5lpe_ops = { + .prepare = alpha_pll_lucid_5lpe_prepare, + .enable = alpha_pll_lucid_5lpe_enable, + .disable = alpha_pll_lucid_5lpe_disable, + .is_enabled = clk_trion_pll_is_enabled, + .recalc_rate = clk_trion_pll_recalc_rate, + .round_rate = clk_alpha_pll_round_rate, + .set_rate = alpha_pll_lucid_5lpe_set_rate, +}; +export_symbol(clk_alpha_pll_lucid_5lpe_ops); + +const struct clk_ops clk_alpha_pll_fixed_lucid_5lpe_ops = { + .enable = alpha_pll_lucid_5lpe_enable, + .disable = alpha_pll_lucid_5lpe_disable, + .is_enabled = clk_trion_pll_is_enabled, + .recalc_rate = clk_trion_pll_recalc_rate, + .round_rate = clk_alpha_pll_round_rate, +}; +export_symbol(clk_alpha_pll_fixed_lucid_5lpe_ops); + +const struct clk_ops clk_alpha_pll_postdiv_lucid_5lpe_ops = { + .recalc_rate = clk_alpha_pll_postdiv_fabia_recalc_rate, + .round_rate = clk_alpha_pll_postdiv_fabia_round_rate, + .set_rate = clk_lucid_5lpe_pll_postdiv_set_rate, +}; +export_symbol(clk_alpha_pll_postdiv_lucid_5lpe_ops); diff --git a/drivers/clk/qcom/clk-alpha-pll.h b/drivers/clk/qcom/clk-alpha-pll.h --- a/drivers/clk/qcom/clk-alpha-pll.h +++ b/drivers/clk/qcom/clk-alpha-pll.h +extern const struct clk_ops clk_alpha_pll_lucid_5lpe_ops; +extern const struct clk_ops clk_alpha_pll_fixed_lucid_5lpe_ops; +extern const struct clk_ops clk_alpha_pll_postdiv_lucid_5lpe_ops; +
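The `clk_lucid_5lpe_pll_postdiv_set_rate()` body in the diff above rounds the required divider up and then searches the post-divider table for a matching register value. A self-contained sketch of that lookup is below; the table contents are hypothetical (the real values come from the per-SoC `post_div_table`), and `-1` stands in for the driver's behaviour of leaving `val` at 0 when nothing matches.

```c
#include <assert.h>

/* DIV_ROUND_UP as in the kernel's math helpers. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

struct post_div {            /* mirrors struct clk_div_table */
	unsigned int val;    /* raw register field value */
	unsigned int div;    /* divider that value selects */
};

/* Hypothetical table: field value 0 -> /1, 1 -> /2, 3 -> /4. */
static const struct post_div table[] = {
	{ .val = 0, .div = 1 },
	{ .val = 1, .div = 2 },
	{ .val = 3, .div = 4 },
};

/* Round the needed divider up, then find its register encoding.
 * Returns -1 when no table entry provides that divider. */
static int pick_postdiv_val(unsigned long parent_rate, unsigned long rate)
{
	unsigned int div = DIV_ROUND_UP(parent_rate, rate);
	unsigned int i;

	for (i = 0; i < sizeof(table) / sizeof(table[0]); i++)
		if (table[i].div == div)
			return (int)table[i].val;

	return -1;
}
```

Note the asymmetry the driver relies on: the divider is computed from the rates, but what gets written to `PLL_USER_CTL` is the table's `val`, which need not equal the divider itself.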
|
Clock
|
f4c7e27aa4b60a77a581d8b542c4d56942ee81ef
|
vivek aknurwar angelogioacchino del regno angelogioacchino delregno somainline org bjorn andersson bjorn andersson linaro org
|
drivers
|
clk
|
qcom
|
clk: qcom: gcc-sm8350: add gdsc
|
add the gdscs found in the gcc for the sm8350 soc
|
this release allows mapping an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for a thermal power management framework to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add gdsc
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['qcom', 'gcc-sm8350']
|
['h', 'c']
| 2
| 112
| 0
|
--- diff --git a/drivers/clk/qcom/gcc-sm8350.c b/drivers/clk/qcom/gcc-sm8350.c --- a/drivers/clk/qcom/gcc-sm8350.c +++ b/drivers/clk/qcom/gcc-sm8350.c +#include "gdsc.h" +static struct gdsc pcie_0_gdsc = { + .gdscr = 0x6b004, + .pd = { + .name = "pcie_0_gdsc", + }, + .pwrsts = pwrsts_off_on, +}; + +static struct gdsc pcie_1_gdsc = { + .gdscr = 0x8d004, + .pd = { + .name = "pcie_1_gdsc", + }, + .pwrsts = pwrsts_off_on, +}; + +static struct gdsc ufs_card_gdsc = { + .gdscr = 0x75004, + .pd = { + .name = "ufs_card_gdsc", + }, + .pwrsts = pwrsts_off_on, +}; + +static struct gdsc ufs_phy_gdsc = { + .gdscr = 0x77004, + .pd = { + .name = "ufs_phy_gdsc", + }, + .pwrsts = pwrsts_off_on, +}; + +static struct gdsc usb30_prim_gdsc = { + .gdscr = 0xf004, + .pd = { + .name = "usb30_prim_gdsc", + }, + .pwrsts = pwrsts_off_on, +}; + +static struct gdsc usb30_sec_gdsc = { + .gdscr = 0x10004, + .pd = { + .name = "usb30_sec_gdsc", + }, + .pwrsts = pwrsts_off_on, +}; + +static struct gdsc hlos1_vote_mmnoc_mmu_tbu_hf0_gdsc = { + .gdscr = 0x7d050, + .pd = { + .name = "hlos1_vote_mmnoc_mmu_tbu_hf0_gdsc", + }, + .pwrsts = pwrsts_off_on, + .flags = votable, +}; + +static struct gdsc hlos1_vote_mmnoc_mmu_tbu_hf1_gdsc = { + .gdscr = 0x7d058, + .pd = { + .name = "hlos1_vote_mmnoc_mmu_tbu_hf1_gdsc", + }, + .pwrsts = pwrsts_off_on, + .flags = votable, +}; + +static struct gdsc hlos1_vote_mmnoc_mmu_tbu_sf0_gdsc = { + .gdscr = 0x7d054, + .pd = { + .name = "hlos1_vote_mmnoc_mmu_tbu_sf0_gdsc", + }, + .pwrsts = pwrsts_off_on, + .flags = votable, +}; + +static struct gdsc hlos1_vote_mmnoc_mmu_tbu_sf1_gdsc = { + .gdscr = 0x7d06c, + .pd = { + .name = "hlos1_vote_mmnoc_mmu_tbu_sf1_gdsc", + }, + .pwrsts = pwrsts_off_on, + .flags = votable, +}; + +static struct gdsc *gcc_sm8350_gdscs[] = { + [pcie_0_gdsc] = &pcie_0_gdsc, + [pcie_1_gdsc] = &pcie_1_gdsc, + [ufs_card_gdsc] = &ufs_card_gdsc, + [ufs_phy_gdsc] = &ufs_phy_gdsc, + [usb30_prim_gdsc] = &usb30_prim_gdsc, + [usb30_sec_gdsc] = &usb30_sec_gdsc, + 
[hlos1_vote_mmnoc_mmu_tbu_hf0_gdsc] = &hlos1_vote_mmnoc_mmu_tbu_hf0_gdsc, + [hlos1_vote_mmnoc_mmu_tbu_hf1_gdsc] = &hlos1_vote_mmnoc_mmu_tbu_hf1_gdsc, + [hlos1_vote_mmnoc_mmu_tbu_sf0_gdsc] = &hlos1_vote_mmnoc_mmu_tbu_sf0_gdsc, + [hlos1_vote_mmnoc_mmu_tbu_sf1_gdsc] = &hlos1_vote_mmnoc_mmu_tbu_sf1_gdsc, +}; + + .gdscs = gcc_sm8350_gdscs, + .num_gdscs = array_size(gcc_sm8350_gdscs), diff --git a/include/dt-bindings/clock/qcom,gcc-sm8350.h b/include/dt-bindings/clock/qcom,gcc-sm8350.h --- a/include/dt-bindings/clock/qcom,gcc-sm8350.h +++ b/include/dt-bindings/clock/qcom,gcc-sm8350.h +/* gcc power domains */ +#define pcie_0_gdsc 0 +#define pcie_1_gdsc 1 +#define ufs_card_gdsc 2 +#define ufs_phy_gdsc 3 +#define usb30_prim_gdsc 4 +#define usb30_sec_gdsc 5 +#define hlos1_vote_mmnoc_mmu_tbu_hf0_gdsc 6 +#define hlos1_vote_mmnoc_mmu_tbu_hf1_gdsc 7 +#define hlos1_vote_mmnoc_mmu_tbu_sf0_gdsc 8 +#define hlos1_vote_mmnoc_mmu_tbu_sf1_gdsc 9 +
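The `gcc_sm8350_gdscs[]` array in the diff above uses the dt-binding constants as designated array indices, so a device tree consumer's power-domain cell resolves directly to the matching GDSC. A trimmed-down sketch of that idiom (only three entries, and a one-field `struct gdsc` stand-in instead of the real one) is:

```c
#include <stddef.h>

/* The first few dt-binding constants from qcom,gcc-sm8350.h. */
#define PCIE_0_GDSC   0
#define PCIE_1_GDSC   1
#define UFS_CARD_GDSC 2

struct gdsc { const char *name; };   /* trimmed-down stand-in */

static struct gdsc pcie_0_gdsc   = { .name = "pcie_0_gdsc" };
static struct gdsc pcie_1_gdsc   = { .name = "pcie_1_gdsc" };
static struct gdsc ufs_card_gdsc = { .name = "ufs_card_gdsc" };

/* As in gcc_sm8350_gdscs[]: the binding constant doubles as the array
 * index, so index N in a DT power-domains cell maps straight to gdscs[N]. */
static struct gdsc *gdscs[] = {
	[PCIE_0_GDSC]   = &pcie_0_gdsc,
	[PCIE_1_GDSC]   = &pcie_1_gdsc,
	[UFS_CARD_GDSC] = &ufs_card_gdsc,
};
```

Designated initializers keep the table correct even if entries are listed out of order, which is why the kernel drivers consistently use this form.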
|
Clock
|
3fade948fbb3ccd30f6b06c474d0d084dffecb64
|
vinod koul
|
include
|
dt-bindings
|
clock, qcom
|
clk: qcom: gcc: add clock driver for sm8350
|
this adds the global clock controller (gcc) driver for the sm8350 soc
|
this release allows mapping an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for a thermal power management framework to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add clock driver for sm8350
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['qcom', 'gcc']
|
['kconfig', 'c', 'makefile']
| 3
| 3,799
| 0
|
--- diff --git a/drivers/clk/qcom/kconfig b/drivers/clk/qcom/kconfig --- a/drivers/clk/qcom/kconfig +++ b/drivers/clk/qcom/kconfig +config sm_gcc_8350 + tristate "sm8350 global clock controller" + select qcom_gdsc + help + support for the global clock controller on sm8350 devices. + say y if you want to use peripheral devices such as uart, + spi, i2c, usb, sd/ufs, pcie etc. + diff --git a/drivers/clk/qcom/makefile b/drivers/clk/qcom/makefile --- a/drivers/clk/qcom/makefile +++ b/drivers/clk/qcom/makefile +obj-$(config_sm_gcc_8350) += gcc-sm8350.o diff --git a/drivers/clk/qcom/gcc-sm8350.c b/drivers/clk/qcom/gcc-sm8350.c --- /dev/null +++ b/drivers/clk/qcom/gcc-sm8350.c +// spdx-license-identifier: gpl-2.0-only +/* + * copyright (c) 2019-2020, the linux foundation. all rights reserved. + * copyright (c) 2020-2021, linaro limited + */ + +#include <linux/module.h> +#include <linux/platform_device.h> +#include <linux/regmap.h> + +#include <dt-bindings/clock/qcom,gcc-sm8350.h> + +#include "clk-alpha-pll.h" +#include "clk-branch.h" +#include "clk-rcg.h" +#include "clk-regmap.h" +#include "clk-regmap-divider.h" +#include "clk-regmap-mux.h" +#include "reset.h" + +enum { + p_bi_tcxo, + p_core_bi_pll_test_se, + p_gcc_gpll0_out_even, + p_gcc_gpll0_out_main, + p_gcc_gpll4_out_main, + p_gcc_gpll9_out_main, + p_pcie_0_pipe_clk, + p_pcie_1_pipe_clk, + p_sleep_clk, + p_ufs_card_rx_symbol_0_clk, + p_ufs_card_rx_symbol_1_clk, + p_ufs_card_tx_symbol_0_clk, + p_ufs_phy_rx_symbol_0_clk, + p_ufs_phy_rx_symbol_1_clk, + p_ufs_phy_tx_symbol_0_clk, + p_usb3_phy_wrapper_gcc_usb30_pipe_clk, + p_usb3_uni_phy_sec_gcc_usb30_pipe_clk, +}; + +static struct clk_alpha_pll gcc_gpll0 = { + .offset = 0x0, + .regs = clk_alpha_pll_regs[clk_alpha_pll_type_lucid], + .clkr = { + .enable_reg = 0x52018, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_gpll0", + .parent_data = &(const struct clk_parent_data){ + .fw_name = "bi_tcxo", + }, + .num_parents = 1, + .ops = 
&clk_alpha_pll_fixed_lucid_5lpe_ops, + }, + }, +}; + +static const struct clk_div_table post_div_table_gcc_gpll0_out_even[] = { + { 0x1, 2 }, + { } +}; + +static struct clk_alpha_pll_postdiv gcc_gpll0_out_even = { + .offset = 0x0, + .post_div_shift = 8, + .post_div_table = post_div_table_gcc_gpll0_out_even, + .num_post_div = array_size(post_div_table_gcc_gpll0_out_even), + .width = 4, + .regs = clk_alpha_pll_regs[clk_alpha_pll_type_lucid], + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_gpll0_out_even", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_gpll0.clkr.hw, + }, + .num_parents = 1, + .ops = &clk_alpha_pll_postdiv_lucid_5lpe_ops, + }, +}; + +static struct clk_alpha_pll gcc_gpll4 = { + .offset = 0x76000, + .regs = clk_alpha_pll_regs[clk_alpha_pll_type_lucid], + .clkr = { + .enable_reg = 0x52018, + .enable_mask = bit(4), + .hw.init = &(struct clk_init_data){ + .name = "gcc_gpll4", + .parent_data = &(const struct clk_parent_data){ + .fw_name = "bi_tcxo", + .name = "bi_tcxo", + }, + .num_parents = 1, + .ops = &clk_alpha_pll_fixed_lucid_5lpe_ops, + }, + }, +}; + +static struct clk_alpha_pll gcc_gpll9 = { + .offset = 0x1c000, + .regs = clk_alpha_pll_regs[clk_alpha_pll_type_lucid], + .clkr = { + .enable_reg = 0x52018, + .enable_mask = bit(9), + .hw.init = &(struct clk_init_data){ + .name = "gcc_gpll9", + .parent_data = &(const struct clk_parent_data){ + .fw_name = "bi_tcxo", + .name = "bi_tcxo", + }, + .num_parents = 1, + .ops = &clk_alpha_pll_fixed_lucid_5lpe_ops, + }, + }, +}; + +static const struct parent_map gcc_parent_map_0[] = { + { p_bi_tcxo, 0 }, + { p_gcc_gpll0_out_main, 1 }, + { p_gcc_gpll0_out_even, 6 }, + { p_core_bi_pll_test_se, 7 }, +}; + +static const struct clk_parent_data gcc_parent_data_0[] = { + { .fw_name = "bi_tcxo" }, + { .hw = &gcc_gpll0.clkr.hw }, + { .hw = &gcc_gpll0_out_even.clkr.hw }, + { .fw_name = "core_bi_pll_test_se" }, +}; + +static const struct parent_map gcc_parent_map_1[] = { + { p_bi_tcxo, 0 }, + { 
p_gcc_gpll0_out_main, 1 }, + { p_sleep_clk, 5 }, + { p_gcc_gpll0_out_even, 6 }, + { p_core_bi_pll_test_se, 7 }, +}; + +static const struct clk_parent_data gcc_parent_data_1[] = { + { .fw_name = "bi_tcxo" }, + { .hw = &gcc_gpll0.clkr.hw }, + { .fw_name = "sleep_clk" }, + { .hw = &gcc_gpll0_out_even.clkr.hw }, + { .fw_name = "core_bi_pll_test_se" }, +}; + +static const struct parent_map gcc_parent_map_2[] = { + { p_bi_tcxo, 0 }, + { p_sleep_clk, 5 }, + { p_core_bi_pll_test_se, 7 }, +}; + +static const struct clk_parent_data gcc_parent_data_2[] = { + { .fw_name = "bi_tcxo" }, + { .fw_name = "sleep_clk" }, + { .fw_name = "core_bi_pll_test_se" }, +}; + +static const struct parent_map gcc_parent_map_3[] = { + { p_bi_tcxo, 0 }, + { p_core_bi_pll_test_se, 7 }, +}; + +static const struct clk_parent_data gcc_parent_data_3[] = { + { .fw_name = "bi_tcxo" }, + { .fw_name = "core_bi_pll_test_se" }, +}; + +static const struct parent_map gcc_parent_map_4[] = { + { p_pcie_0_pipe_clk, 0 }, + { p_bi_tcxo, 2 }, +}; + +static const struct clk_parent_data gcc_parent_data_4[] = { + { .fw_name = "pcie_0_pipe_clk", }, + { .fw_name = "bi_tcxo" }, +}; + +static const struct parent_map gcc_parent_map_5[] = { + { p_pcie_1_pipe_clk, 0 }, + { p_bi_tcxo, 2 }, +}; + +static const struct clk_parent_data gcc_parent_data_5[] = { + { .fw_name = "pcie_1_pipe_clk" }, + { .fw_name = "bi_tcxo" }, +}; + +static const struct parent_map gcc_parent_map_6[] = { + { p_bi_tcxo, 0 }, + { p_gcc_gpll0_out_main, 1 }, + { p_gcc_gpll9_out_main, 2 }, + { p_gcc_gpll4_out_main, 5 }, + { p_gcc_gpll0_out_even, 6 }, + { p_core_bi_pll_test_se, 7 }, +}; + +static const struct clk_parent_data gcc_parent_data_6[] = { + { .fw_name = "bi_tcxo" }, + { .hw = &gcc_gpll0.clkr.hw }, + { .hw = &gcc_gpll9.clkr.hw }, + { .hw = &gcc_gpll4.clkr.hw }, + { .hw = &gcc_gpll0_out_even.clkr.hw }, + { .fw_name = "core_bi_pll_test_se" }, +}; + +static const struct parent_map gcc_parent_map_7[] = { + { p_ufs_card_rx_symbol_0_clk, 0 }, + { 
p_bi_tcxo, 2 }, +}; + +static const struct clk_parent_data gcc_parent_data_7[] = { + { .fw_name = "ufs_card_rx_symbol_0_clk" }, + { .fw_name = "bi_tcxo" }, +}; + +static const struct parent_map gcc_parent_map_8[] = { + { p_ufs_card_rx_symbol_1_clk, 0 }, + { p_bi_tcxo, 2 }, +}; + +static const struct clk_parent_data gcc_parent_data_8[] = { + { .fw_name = "ufs_card_rx_symbol_1_clk" }, + { .fw_name = "bi_tcxo" }, +}; + +static const struct parent_map gcc_parent_map_9[] = { + { p_ufs_card_tx_symbol_0_clk, 0 }, + { p_bi_tcxo, 2 }, +}; + +static const struct clk_parent_data gcc_parent_data_9[] = { + { .fw_name = "ufs_card_tx_symbol_0_clk" }, + { .fw_name = "bi_tcxo" }, +}; + +static const struct parent_map gcc_parent_map_10[] = { + { p_ufs_phy_rx_symbol_0_clk, 0 }, + { p_bi_tcxo, 2 }, +}; + +static const struct clk_parent_data gcc_parent_data_10[] = { + { .fw_name = "ufs_phy_rx_symbol_0_clk" }, + { .fw_name = "bi_tcxo" }, +}; + +static const struct parent_map gcc_parent_map_11[] = { + { p_ufs_phy_rx_symbol_1_clk, 0 }, + { p_bi_tcxo, 2 }, +}; + +static const struct clk_parent_data gcc_parent_data_11[] = { + { .fw_name = "ufs_phy_rx_symbol_1_clk" }, + { .fw_name = "bi_tcxo" }, +}; + +static const struct parent_map gcc_parent_map_12[] = { + { p_ufs_phy_tx_symbol_0_clk, 0 }, + { p_bi_tcxo, 2 }, +}; + +static const struct clk_parent_data gcc_parent_data_12[] = { + { .fw_name = "ufs_phy_tx_symbol_0_clk" }, + { .fw_name = "bi_tcxo" }, +}; + +static const struct parent_map gcc_parent_map_13[] = { + { p_usb3_phy_wrapper_gcc_usb30_pipe_clk, 0 }, + { p_core_bi_pll_test_se, 1 }, + { p_bi_tcxo, 2 }, +}; + +static const struct clk_parent_data gcc_parent_data_13[] = { + { .fw_name = "usb3_phy_wrapper_gcc_usb30_pipe_clk" }, + { .fw_name = "core_bi_pll_test_se" }, + { .fw_name = "bi_tcxo" }, +}; + +static const struct parent_map gcc_parent_map_14[] = { + { p_usb3_uni_phy_sec_gcc_usb30_pipe_clk, 0 }, + { p_core_bi_pll_test_se, 1 }, + { p_bi_tcxo, 2 }, +}; + +static const struct 
clk_parent_data gcc_parent_data_14[] = { + { .fw_name = "usb3_uni_phy_sec_gcc_usb30_pipe_clk" }, + { .fw_name = "core_bi_pll_test_se" }, + { .fw_name = "bi_tcxo" }, +}; + +static struct clk_regmap_mux gcc_pcie_0_pipe_clk_src = { + .reg = 0x6b054, + .shift = 0, + .width = 2, + .parent_map = gcc_parent_map_4, + .clkr = { + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_0_pipe_clk_src", + .parent_data = gcc_parent_data_4, + .num_parents = 2, + .ops = &clk_regmap_mux_closest_ops, + }, + }, +}; + +static struct clk_regmap_mux gcc_pcie_1_pipe_clk_src = { + .reg = 0x8d054, + .shift = 0, + .width = 2, + .parent_map = gcc_parent_map_5, + .clkr = { + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_1_pipe_clk_src", + .parent_data = gcc_parent_data_5, + .num_parents = 2, + .ops = &clk_regmap_mux_closest_ops, + }, + }, +}; + +static struct clk_regmap_mux gcc_ufs_card_rx_symbol_0_clk_src = { + .reg = 0x75058, + .shift = 0, + .width = 2, + .parent_map = gcc_parent_map_7, + .clkr = { + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_card_rx_symbol_0_clk_src", + .parent_data = gcc_parent_data_7, + .num_parents = 2, + .ops = &clk_regmap_mux_closest_ops, + }, + }, +}; + +static struct clk_regmap_mux gcc_ufs_card_rx_symbol_1_clk_src = { + .reg = 0x750c8, + .shift = 0, + .width = 2, + .parent_map = gcc_parent_map_8, + .clkr = { + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_card_rx_symbol_1_clk_src", + .parent_data = gcc_parent_data_8, + .num_parents = 2, + .ops = &clk_regmap_mux_closest_ops, + }, + }, +}; + +static struct clk_regmap_mux gcc_ufs_card_tx_symbol_0_clk_src = { + .reg = 0x75048, + .shift = 0, + .width = 2, + .parent_map = gcc_parent_map_9, + .clkr = { + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_card_tx_symbol_0_clk_src", + .parent_data = gcc_parent_data_9, + .num_parents = 2, + .ops = &clk_regmap_mux_closest_ops, + }, + }, +}; + +static struct clk_regmap_mux gcc_ufs_phy_rx_symbol_0_clk_src = { + .reg = 0x77058, + .shift = 0, + 
.width = 2, + .parent_map = gcc_parent_map_10, + .clkr = { + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_phy_rx_symbol_0_clk_src", + .parent_data = gcc_parent_data_10, + .num_parents = 2, + .ops = &clk_regmap_mux_closest_ops, + }, + }, +}; + +static struct clk_regmap_mux gcc_ufs_phy_rx_symbol_1_clk_src = { + .reg = 0x770c8, + .shift = 0, + .width = 2, + .parent_map = gcc_parent_map_11, + .clkr = { + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_phy_rx_symbol_1_clk_src", + .parent_data = gcc_parent_data_11, + .num_parents = 2, + .ops = &clk_regmap_mux_closest_ops, + }, + }, +}; + +static struct clk_regmap_mux gcc_ufs_phy_tx_symbol_0_clk_src = { + .reg = 0x77048, + .shift = 0, + .width = 2, + .parent_map = gcc_parent_map_12, + .clkr = { + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_phy_tx_symbol_0_clk_src", + .parent_data = gcc_parent_data_12, + .num_parents = 2, + .ops = &clk_regmap_mux_closest_ops, + }, + }, +}; + +static struct clk_regmap_mux gcc_usb3_prim_phy_pipe_clk_src = { + .reg = 0xf060, + .shift = 0, + .width = 2, + .parent_map = gcc_parent_map_13, + .clkr = { + .hw.init = &(struct clk_init_data){ + .name = "gcc_usb3_prim_phy_pipe_clk_src", + .parent_data = gcc_parent_data_13, + .num_parents = 3, + .ops = &clk_regmap_mux_closest_ops, + }, + }, +}; + +static struct clk_regmap_mux gcc_usb3_sec_phy_pipe_clk_src = { + .reg = 0x10060, + .shift = 0, + .width = 2, + .parent_map = gcc_parent_map_14, + .clkr = { + .hw.init = &(struct clk_init_data){ + .name = "gcc_usb3_sec_phy_pipe_clk_src", + .parent_data = gcc_parent_data_14, + .num_parents = 3, + .ops = &clk_regmap_mux_closest_ops, + }, + }, +}; + +static const struct freq_tbl ftbl_gcc_gp1_clk_src[] = { + f(50000000, p_gcc_gpll0_out_even, 6, 0, 0), + f(100000000, p_gcc_gpll0_out_main, 6, 0, 0), + f(200000000, p_gcc_gpll0_out_main, 3, 0, 0), + { } +}; + +static struct clk_rcg2 gcc_gp1_clk_src = { + .cmd_rcgr = 0x64004, + .mnd_width = 8, + .hid_width = 5, + .parent_map = 
gcc_parent_map_1, + .freq_tbl = ftbl_gcc_gp1_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_gp1_clk_src", + .parent_data = gcc_parent_data_1, + .num_parents = 5, + .flags = clk_set_rate_parent, + .ops = &clk_rcg2_ops, + }, +}; + +static struct clk_rcg2 gcc_gp2_clk_src = { + .cmd_rcgr = 0x65004, + .mnd_width = 8, + .hid_width = 5, + .parent_map = gcc_parent_map_1, + .freq_tbl = ftbl_gcc_gp1_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_gp2_clk_src", + .parent_data = gcc_parent_data_1, + .num_parents = 5, + .flags = clk_set_rate_parent, + .ops = &clk_rcg2_ops, + }, +}; + +static struct clk_rcg2 gcc_gp3_clk_src = { + .cmd_rcgr = 0x66004, + .mnd_width = 8, + .hid_width = 5, + .parent_map = gcc_parent_map_1, + .freq_tbl = ftbl_gcc_gp1_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_gp3_clk_src", + .parent_data = gcc_parent_data_1, + .num_parents = 5, + .flags = clk_set_rate_parent, + .ops = &clk_rcg2_ops, + }, +}; + +static const struct freq_tbl ftbl_gcc_pcie_0_aux_clk_src[] = { + f(9600000, p_bi_tcxo, 2, 0, 0), + f(19200000, p_bi_tcxo, 1, 0, 0), + { } +}; + +static struct clk_rcg2 gcc_pcie_0_aux_clk_src = { + .cmd_rcgr = 0x6b058, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_2, + .freq_tbl = ftbl_gcc_pcie_0_aux_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_0_aux_clk_src", + .parent_data = gcc_parent_data_2, + .num_parents = 3, + .flags = clk_set_rate_parent, + .ops = &clk_rcg2_ops, + }, +}; + +static const struct freq_tbl ftbl_gcc_pcie_0_phy_rchng_clk_src[] = { + f(19200000, p_bi_tcxo, 1, 0, 0), + f(100000000, p_gcc_gpll0_out_main, 6, 0, 0), + { } +}; + +static struct clk_rcg2 gcc_pcie_0_phy_rchng_clk_src = { + .cmd_rcgr = 0x6b03c, + .mnd_width = 0, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_pcie_0_phy_rchng_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_0_phy_rchng_clk_src", + .parent_data = 
gcc_parent_data_0, + .num_parents = 4, + .flags = clk_set_rate_parent, + .ops = &clk_rcg2_ops, + }, +}; + +static struct clk_rcg2 gcc_pcie_1_aux_clk_src = { + .cmd_rcgr = 0x8d058, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_2, + .freq_tbl = ftbl_gcc_pcie_0_aux_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_1_aux_clk_src", + .parent_data = gcc_parent_data_2, + .num_parents = 3, + .flags = clk_set_rate_parent, + .ops = &clk_rcg2_ops, + }, +}; + +static struct clk_rcg2 gcc_pcie_1_phy_rchng_clk_src = { + .cmd_rcgr = 0x8d03c, + .mnd_width = 0, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_pcie_0_phy_rchng_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_1_phy_rchng_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = 4, + .flags = clk_set_rate_parent, + .ops = &clk_rcg2_ops, + }, +}; + +static const struct freq_tbl ftbl_gcc_pdm2_clk_src[] = { + f(60000000, p_gcc_gpll0_out_main, 10, 0, 0), + { } +}; + +static struct clk_rcg2 gcc_pdm2_clk_src = { + .cmd_rcgr = 0x33010, + .mnd_width = 0, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_pdm2_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_pdm2_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = 4, + .flags = clk_set_rate_parent, + .ops = &clk_rcg2_ops, + }, +}; + +static const struct freq_tbl ftbl_gcc_qupv3_wrap0_s0_clk_src[] = { + f(7372800, p_gcc_gpll0_out_even, 1, 384, 15625), + f(14745600, p_gcc_gpll0_out_even, 1, 768, 15625), + f(19200000, p_bi_tcxo, 1, 0, 0), + f(29491200, p_gcc_gpll0_out_even, 1, 1536, 15625), + f(32000000, p_gcc_gpll0_out_even, 1, 8, 75), + f(48000000, p_gcc_gpll0_out_even, 1, 4, 25), + f(64000000, p_gcc_gpll0_out_even, 1, 16, 75), + f(75000000, p_gcc_gpll0_out_even, 4, 0, 0), + f(80000000, p_gcc_gpll0_out_even, 1, 4, 15), + f(96000000, p_gcc_gpll0_out_even, 1, 8, 25), + f(100000000, p_gcc_gpll0_out_main, 6, 0, 0), + { } +}; + +static 
struct clk_init_data gcc_qupv3_wrap0_s0_clk_src_init = { + .name = "gcc_qupv3_wrap0_s0_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = 4, + .flags = clk_set_rate_parent, + .ops = &clk_rcg2_ops, +}; + +static struct clk_rcg2 gcc_qupv3_wrap0_s0_clk_src = { + .cmd_rcgr = 0x17010, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src, + .clkr.hw.init = &gcc_qupv3_wrap0_s0_clk_src_init, +}; + +static struct clk_init_data gcc_qupv3_wrap0_s1_clk_src_init = { + .name = "gcc_qupv3_wrap0_s1_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = 4, + .flags = clk_set_rate_parent, + .ops = &clk_rcg2_ops, +}; + +static struct clk_rcg2 gcc_qupv3_wrap0_s1_clk_src = { + .cmd_rcgr = 0x17140, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src, + .clkr.hw.init = &gcc_qupv3_wrap0_s1_clk_src_init, +}; + +static struct clk_init_data gcc_qupv3_wrap0_s2_clk_src_init = { + .name = "gcc_qupv3_wrap0_s2_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = 4, + .flags = clk_set_rate_parent, + .ops = &clk_rcg2_ops, +}; + +static struct clk_rcg2 gcc_qupv3_wrap0_s2_clk_src = { + .cmd_rcgr = 0x17270, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src, + .clkr.hw.init = &gcc_qupv3_wrap0_s2_clk_src_init, +}; + +static struct clk_init_data gcc_qupv3_wrap0_s3_clk_src_init = { + .name = "gcc_qupv3_wrap0_s3_clk_src", + .parent_data = gcc_parent_data_0, + .num_parents = 4, + .flags = clk_set_rate_parent, + .ops = &clk_rcg2_ops, +}; + +static struct clk_rcg2 gcc_qupv3_wrap0_s3_clk_src = { + .cmd_rcgr = 0x173a0, + .mnd_width = 16, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src, + .clkr.hw.init = &gcc_qupv3_wrap0_s3_clk_src_init, +}; + +static struct clk_init_data gcc_qupv3_wrap0_s4_clk_src_init = { + .name = 
	"gcc_qupv3_wrap0_s4_clk_src",
	.parent_data = gcc_parent_data_0,
	.num_parents = 4,
	.flags = CLK_SET_RATE_PARENT,
	.ops = &clk_rcg2_ops,
};

static struct clk_rcg2 gcc_qupv3_wrap0_s4_clk_src = {
	.cmd_rcgr = 0x174d0,
	.mnd_width = 16,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
	.clkr.hw.init = &gcc_qupv3_wrap0_s4_clk_src_init,
};

static struct clk_init_data gcc_qupv3_wrap0_s5_clk_src_init = {
	.name = "gcc_qupv3_wrap0_s5_clk_src",
	.parent_data = gcc_parent_data_0,
	.num_parents = 4,
	.flags = CLK_SET_RATE_PARENT,
	.ops = &clk_rcg2_ops,
};

static struct clk_rcg2 gcc_qupv3_wrap0_s5_clk_src = {
	.cmd_rcgr = 0x17600,
	.mnd_width = 16,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
	.clkr.hw.init = &gcc_qupv3_wrap0_s5_clk_src_init,
};

static struct clk_init_data gcc_qupv3_wrap0_s6_clk_src_init = {
	.name = "gcc_qupv3_wrap0_s6_clk_src",
	.parent_data = gcc_parent_data_0,
	.num_parents = 4,
	.flags = CLK_SET_RATE_PARENT,
	.ops = &clk_rcg2_ops,
};

static struct clk_rcg2 gcc_qupv3_wrap0_s6_clk_src = {
	.cmd_rcgr = 0x17730,
	.mnd_width = 16,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
	.clkr.hw.init = &gcc_qupv3_wrap0_s6_clk_src_init,
};

static struct clk_init_data gcc_qupv3_wrap0_s7_clk_src_init = {
	.name = "gcc_qupv3_wrap0_s7_clk_src",
	.parent_data = gcc_parent_data_0,
	.num_parents = 4,
	.flags = CLK_SET_RATE_PARENT,
	.ops = &clk_rcg2_ops,
};

static struct clk_rcg2 gcc_qupv3_wrap0_s7_clk_src = {
	.cmd_rcgr = 0x17860,
	.mnd_width = 16,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
	.clkr.hw.init = &gcc_qupv3_wrap0_s7_clk_src_init,
};

static const struct freq_tbl ftbl_gcc_qupv3_wrap1_s0_clk_src[] = {
	F(7372800, P_GCC_GPLL0_OUT_EVEN, 1, 384, 15625),
	F(14745600, P_GCC_GPLL0_OUT_EVEN, 1, 768,
15625),
	F(19200000, P_BI_TCXO, 1, 0, 0),
	F(29491200, P_GCC_GPLL0_OUT_EVEN, 1, 1536, 15625),
	F(32000000, P_GCC_GPLL0_OUT_EVEN, 1, 8, 75),
	F(48000000, P_GCC_GPLL0_OUT_EVEN, 1, 4, 25),
	F(64000000, P_GCC_GPLL0_OUT_EVEN, 1, 16, 75),
	F(75000000, P_GCC_GPLL0_OUT_EVEN, 4, 0, 0),
	F(80000000, P_GCC_GPLL0_OUT_EVEN, 1, 4, 15),
	F(96000000, P_GCC_GPLL0_OUT_EVEN, 1, 8, 25),
	F(100000000, P_GCC_GPLL0_OUT_MAIN, 6, 0, 0),
	F(102400000, P_GCC_GPLL0_OUT_EVEN, 1, 128, 375),
	F(112000000, P_GCC_GPLL0_OUT_EVEN, 1, 28, 75),
	F(117964800, P_GCC_GPLL0_OUT_EVEN, 1, 6144, 15625),
	F(120000000, P_GCC_GPLL0_OUT_MAIN, 5, 0, 0),
	{ }
};

static struct clk_init_data gcc_qupv3_wrap1_s0_clk_src_init = {
	.name = "gcc_qupv3_wrap1_s0_clk_src",
	.parent_data = gcc_parent_data_0,
	.num_parents = 4,
	.flags = CLK_SET_RATE_PARENT,
	.ops = &clk_rcg2_ops,
};

static struct clk_rcg2 gcc_qupv3_wrap1_s0_clk_src = {
	.cmd_rcgr = 0x18010,
	.mnd_width = 16,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_qupv3_wrap1_s0_clk_src,
	.clkr.hw.init = &gcc_qupv3_wrap1_s0_clk_src_init,
};

static struct clk_init_data gcc_qupv3_wrap1_s1_clk_src_init = {
	.name = "gcc_qupv3_wrap1_s1_clk_src",
	.parent_data = gcc_parent_data_0,
	.num_parents = 4,
	.flags = CLK_SET_RATE_PARENT,
	.ops = &clk_rcg2_ops,
};

static struct clk_rcg2 gcc_qupv3_wrap1_s1_clk_src = {
	.cmd_rcgr = 0x18140,
	.mnd_width = 16,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
	.clkr.hw.init = &gcc_qupv3_wrap1_s1_clk_src_init,
};

static struct clk_init_data gcc_qupv3_wrap1_s2_clk_src_init = {
	.name = "gcc_qupv3_wrap1_s2_clk_src",
	.parent_data = gcc_parent_data_0,
	.num_parents = 4,
	.flags = CLK_SET_RATE_PARENT,
	.ops = &clk_rcg2_ops,
};

static struct clk_rcg2 gcc_qupv3_wrap1_s2_clk_src = {
	.cmd_rcgr = 0x18270,
	.mnd_width = 16,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl =
ftbl_gcc_qupv3_wrap0_s0_clk_src,
	.clkr.hw.init = &gcc_qupv3_wrap1_s2_clk_src_init,
};

static struct clk_init_data gcc_qupv3_wrap1_s3_clk_src_init = {
	.name = "gcc_qupv3_wrap1_s3_clk_src",
	.parent_data = gcc_parent_data_0,
	.num_parents = 4,
	.flags = CLK_SET_RATE_PARENT,
	.ops = &clk_rcg2_ops,
};

static struct clk_rcg2 gcc_qupv3_wrap1_s3_clk_src = {
	.cmd_rcgr = 0x183a0,
	.mnd_width = 16,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
	.clkr.hw.init = &gcc_qupv3_wrap1_s3_clk_src_init,
};

static struct clk_init_data gcc_qupv3_wrap1_s4_clk_src_init = {
	.name = "gcc_qupv3_wrap1_s4_clk_src",
	.parent_data = gcc_parent_data_0,
	.num_parents = 4,
	.flags = CLK_SET_RATE_PARENT,
	.ops = &clk_rcg2_ops,
};

static struct clk_rcg2 gcc_qupv3_wrap1_s4_clk_src = {
	.cmd_rcgr = 0x184d0,
	.mnd_width = 16,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
	.clkr.hw.init = &gcc_qupv3_wrap1_s4_clk_src_init,
};

static struct clk_init_data gcc_qupv3_wrap1_s5_clk_src_init = {
	.name = "gcc_qupv3_wrap1_s5_clk_src",
	.parent_data = gcc_parent_data_0,
	.num_parents = 4,
	.flags = CLK_SET_RATE_PARENT,
	.ops = &clk_rcg2_ops,
};

static struct clk_rcg2 gcc_qupv3_wrap1_s5_clk_src = {
	.cmd_rcgr = 0x18600,
	.mnd_width = 16,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
	.clkr.hw.init = &gcc_qupv3_wrap1_s5_clk_src_init,
};

static struct clk_init_data gcc_qupv3_wrap2_s0_clk_src_init = {
	.name = "gcc_qupv3_wrap2_s0_clk_src",
	.parent_data = gcc_parent_data_0,
	.num_parents = 4,
	.flags = CLK_SET_RATE_PARENT,
	.ops = &clk_rcg2_ops,
};

static struct clk_rcg2 gcc_qupv3_wrap2_s0_clk_src = {
	.cmd_rcgr = 0x1e010,
	.mnd_width = 16,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_qupv3_wrap1_s0_clk_src,
	.clkr.hw.init = &gcc_qupv3_wrap2_s0_clk_src_init,
};

static struct clk_init_data gcc_qupv3_wrap2_s1_clk_src_init = {
	.name = "gcc_qupv3_wrap2_s1_clk_src",
	.parent_data = gcc_parent_data_0,
	.num_parents = 4,
	.flags = CLK_SET_RATE_PARENT,
	.ops = &clk_rcg2_ops,
};

static struct clk_rcg2 gcc_qupv3_wrap2_s1_clk_src = {
	.cmd_rcgr = 0x1e140,
	.mnd_width = 16,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_qupv3_wrap1_s0_clk_src,
	.clkr.hw.init = &gcc_qupv3_wrap2_s1_clk_src_init,
};

static struct clk_init_data gcc_qupv3_wrap2_s2_clk_src_init = {
	.name = "gcc_qupv3_wrap2_s2_clk_src",
	.parent_data = gcc_parent_data_0,
	.num_parents = 4,
	.flags = CLK_SET_RATE_PARENT,
	.ops = &clk_rcg2_ops,
};

static struct clk_rcg2 gcc_qupv3_wrap2_s2_clk_src = {
	.cmd_rcgr = 0x1e270,
	.mnd_width = 16,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
	.clkr.hw.init = &gcc_qupv3_wrap2_s2_clk_src_init,
};

static struct clk_init_data gcc_qupv3_wrap2_s3_clk_src_init = {
	.name = "gcc_qupv3_wrap2_s3_clk_src",
	.parent_data = gcc_parent_data_0,
	.num_parents = 4,
	.flags = CLK_SET_RATE_PARENT,
	.ops = &clk_rcg2_ops,
};

static struct clk_rcg2 gcc_qupv3_wrap2_s3_clk_src = {
	.cmd_rcgr = 0x1e3a0,
	.mnd_width = 16,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
	.clkr.hw.init = &gcc_qupv3_wrap2_s3_clk_src_init,
};

static struct clk_init_data gcc_qupv3_wrap2_s4_clk_src_init = {
	.name = "gcc_qupv3_wrap2_s4_clk_src",
	.parent_data = gcc_parent_data_0,
	.num_parents = 4,
	.flags = CLK_SET_RATE_PARENT,
	.ops = &clk_rcg2_ops,
};

static struct clk_rcg2 gcc_qupv3_wrap2_s4_clk_src = {
	.cmd_rcgr = 0x1e4d0,
	.mnd_width = 16,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
	.clkr.hw.init = &gcc_qupv3_wrap2_s4_clk_src_init,
};

static struct clk_init_data gcc_qupv3_wrap2_s5_clk_src_init = {
	.name =
	"gcc_qupv3_wrap2_s5_clk_src",
	.parent_data = gcc_parent_data_0,
	.num_parents = 4,
	.flags = CLK_SET_RATE_PARENT,
	.ops = &clk_rcg2_ops,
};

static struct clk_rcg2 gcc_qupv3_wrap2_s5_clk_src = {
	.cmd_rcgr = 0x1e600,
	.mnd_width = 16,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
	.clkr.hw.init = &gcc_qupv3_wrap2_s5_clk_src_init,
};

static const struct freq_tbl ftbl_gcc_sdcc2_apps_clk_src[] = {
	F(400000, P_BI_TCXO, 12, 1, 4),
	F(25000000, P_GCC_GPLL0_OUT_EVEN, 12, 0, 0),
	F(50000000, P_GCC_GPLL0_OUT_EVEN, 6, 0, 0),
	F(100000000, P_GCC_GPLL0_OUT_EVEN, 3, 0, 0),
	F(202000000, P_GCC_GPLL9_OUT_MAIN, 4, 0, 0),
	{ }
};

static struct clk_rcg2 gcc_sdcc2_apps_clk_src = {
	.cmd_rcgr = 0x1400c,
	.mnd_width = 8,
	.hid_width = 5,
	.parent_map = gcc_parent_map_6,
	.freq_tbl = ftbl_gcc_sdcc2_apps_clk_src,
	.clkr.hw.init = &(struct clk_init_data){
		.name = "gcc_sdcc2_apps_clk_src",
		.parent_data = gcc_parent_data_6,
		.num_parents = 6,
		.flags = CLK_SET_RATE_PARENT,
		.ops = &clk_rcg2_floor_ops,
	},
};

static const struct freq_tbl ftbl_gcc_sdcc4_apps_clk_src[] = {
	F(400000, P_BI_TCXO, 12, 1, 4),
	F(25000000, P_GCC_GPLL0_OUT_EVEN, 12, 0, 0),
	F(100000000, P_GCC_GPLL0_OUT_EVEN, 3, 0, 0),
	{ }
};

static struct clk_rcg2 gcc_sdcc4_apps_clk_src = {
	.cmd_rcgr = 0x1600c,
	.mnd_width = 8,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_sdcc4_apps_clk_src,
	.clkr.hw.init = &(struct clk_init_data){
		.name = "gcc_sdcc4_apps_clk_src",
		.parent_data = gcc_parent_data_0,
		.num_parents = 4,
		.flags = CLK_SET_RATE_PARENT,
		.ops = &clk_rcg2_floor_ops,
	},
};

static const struct freq_tbl ftbl_gcc_ufs_card_axi_clk_src[] = {
	F(25000000, P_GCC_GPLL0_OUT_EVEN, 12, 0, 0),
	F(75000000, P_GCC_GPLL0_OUT_EVEN, 4, 0, 0),
	F(150000000, P_GCC_GPLL0_OUT_MAIN, 4, 0, 0),
	F(300000000, P_GCC_GPLL0_OUT_MAIN, 2, 0, 0),
	{ }
};

static struct clk_rcg2
gcc_ufs_card_axi_clk_src = {
	.cmd_rcgr = 0x75024,
	.mnd_width = 8,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_ufs_card_axi_clk_src,
	.clkr.hw.init = &(struct clk_init_data){
		.name = "gcc_ufs_card_axi_clk_src",
		.parent_data = gcc_parent_data_0,
		.num_parents = 4,
		.flags = CLK_SET_RATE_PARENT,
		.ops = &clk_rcg2_ops,
	},
};

static const struct freq_tbl ftbl_gcc_ufs_card_ice_core_clk_src[] = {
	F(75000000, P_GCC_GPLL0_OUT_EVEN, 4, 0, 0),
	F(150000000, P_GCC_GPLL0_OUT_MAIN, 4, 0, 0),
	F(300000000, P_GCC_GPLL0_OUT_MAIN, 2, 0, 0),
	{ }
};

static struct clk_rcg2 gcc_ufs_card_ice_core_clk_src = {
	.cmd_rcgr = 0x7506c,
	.mnd_width = 0,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_ufs_card_ice_core_clk_src,
	.clkr.hw.init = &(struct clk_init_data){
		.name = "gcc_ufs_card_ice_core_clk_src",
		.parent_data = gcc_parent_data_0,
		.num_parents = 4,
		.flags = CLK_SET_RATE_PARENT,
		.ops = &clk_rcg2_ops,
	},
};

static const struct freq_tbl ftbl_gcc_ufs_card_phy_aux_clk_src[] = {
	F(19200000, P_BI_TCXO, 1, 0, 0),
	{ }
};

static struct clk_rcg2 gcc_ufs_card_phy_aux_clk_src = {
	.cmd_rcgr = 0x750a0,
	.mnd_width = 0,
	.hid_width = 5,
	.parent_map = gcc_parent_map_3,
	.freq_tbl = ftbl_gcc_ufs_card_phy_aux_clk_src,
	.clkr.hw.init = &(struct clk_init_data){
		.name = "gcc_ufs_card_phy_aux_clk_src",
		.parent_data = gcc_parent_data_3,
		.num_parents = 2,
		.flags = CLK_SET_RATE_PARENT,
		.ops = &clk_rcg2_ops,
	},
};

static struct clk_rcg2 gcc_ufs_card_unipro_core_clk_src = {
	.cmd_rcgr = 0x75084,
	.mnd_width = 0,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_ufs_card_ice_core_clk_src,
	.clkr.hw.init = &(struct clk_init_data){
		.name = "gcc_ufs_card_unipro_core_clk_src",
		.parent_data = gcc_parent_data_0,
		.num_parents = 4,
		.flags = CLK_SET_RATE_PARENT,
		.ops = &clk_rcg2_ops,
	},
};

static struct clk_rcg2 gcc_ufs_phy_axi_clk_src = {
	.cmd_rcgr = 0x77024,
	.mnd_width = 8,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_ufs_card_axi_clk_src,
	.clkr.hw.init = &(struct clk_init_data){
		.name = "gcc_ufs_phy_axi_clk_src",
		.parent_data = gcc_parent_data_0,
		.num_parents = 4,
		.flags = CLK_SET_RATE_PARENT,
		.ops = &clk_rcg2_ops,
	},
};

static struct clk_rcg2 gcc_ufs_phy_ice_core_clk_src = {
	.cmd_rcgr = 0x7706c,
	.mnd_width = 0,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_ufs_card_ice_core_clk_src,
	.clkr.hw.init = &(struct clk_init_data){
		.name = "gcc_ufs_phy_ice_core_clk_src",
		.parent_data = gcc_parent_data_0,
		.num_parents = 4,
		.flags = CLK_SET_RATE_PARENT,
		.ops = &clk_rcg2_ops,
	},
};

static struct clk_rcg2 gcc_ufs_phy_phy_aux_clk_src = {
	.cmd_rcgr = 0x770a0,
	.mnd_width = 0,
	.hid_width = 5,
	.parent_map = gcc_parent_map_3,
	.freq_tbl = ftbl_gcc_pcie_0_aux_clk_src,
	.clkr.hw.init = &(struct clk_init_data){
		.name = "gcc_ufs_phy_phy_aux_clk_src",
		.parent_data = gcc_parent_data_3,
		.num_parents = 2,
		.flags = CLK_SET_RATE_PARENT,
		.ops = &clk_rcg2_ops,
	},
};

static struct clk_rcg2 gcc_ufs_phy_unipro_core_clk_src = {
	.cmd_rcgr = 0x77084,
	.mnd_width = 0,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_ufs_card_ice_core_clk_src,
	.clkr.hw.init = &(struct clk_init_data){
		.name = "gcc_ufs_phy_unipro_core_clk_src",
		.parent_data = gcc_parent_data_0,
		.num_parents = 4,
		.flags = CLK_SET_RATE_PARENT,
		.ops = &clk_rcg2_ops,
	},
};

static const struct freq_tbl ftbl_gcc_usb30_prim_master_clk_src[] = {
	F(66666667, P_GCC_GPLL0_OUT_EVEN, 4.5, 0, 0),
	F(133333333, P_GCC_GPLL0_OUT_MAIN, 4.5, 0, 0),
	F(200000000, P_GCC_GPLL0_OUT_MAIN, 3, 0, 0),
	F(240000000, P_GCC_GPLL0_OUT_MAIN, 2.5, 0, 0),
	{ }
};

static struct clk_rcg2 gcc_usb30_prim_master_clk_src = {
	.cmd_rcgr = 0xf020,
	.mnd_width = 8,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl =
ftbl_gcc_usb30_prim_master_clk_src,
	.clkr.hw.init = &(struct clk_init_data){
		.name = "gcc_usb30_prim_master_clk_src",
		.parent_data = gcc_parent_data_0,
		.num_parents = 4,
		.flags = CLK_SET_RATE_PARENT,
		.ops = &clk_rcg2_ops,
	},
};

static struct clk_rcg2 gcc_usb30_prim_mock_utmi_clk_src = {
	.cmd_rcgr = 0xf038,
	.mnd_width = 0,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_ufs_card_phy_aux_clk_src,
	.clkr.hw.init = &(struct clk_init_data){
		.name = "gcc_usb30_prim_mock_utmi_clk_src",
		.parent_data = gcc_parent_data_0,
		.num_parents = 4,
		.flags = CLK_SET_RATE_PARENT,
		.ops = &clk_rcg2_ops,
	},
};

static struct clk_rcg2 gcc_usb30_sec_master_clk_src = {
	.cmd_rcgr = 0x10020,
	.mnd_width = 8,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_usb30_prim_master_clk_src,
	.clkr.hw.init = &(struct clk_init_data){
		.name = "gcc_usb30_sec_master_clk_src",
		.parent_data = gcc_parent_data_0,
		.num_parents = 4,
		.flags = CLK_SET_RATE_PARENT,
		.ops = &clk_rcg2_ops,
	},
};

static struct clk_rcg2 gcc_usb30_sec_mock_utmi_clk_src = {
	.cmd_rcgr = 0x10038,
	.mnd_width = 0,
	.hid_width = 5,
	.parent_map = gcc_parent_map_0,
	.freq_tbl = ftbl_gcc_ufs_card_phy_aux_clk_src,
	.clkr.hw.init = &(struct clk_init_data){
		.name = "gcc_usb30_sec_mock_utmi_clk_src",
		.parent_data = gcc_parent_data_0,
		.num_parents = 4,
		.flags = CLK_SET_RATE_PARENT,
		.ops = &clk_rcg2_ops,
	},
};

static struct clk_rcg2 gcc_usb3_prim_phy_aux_clk_src = {
	.cmd_rcgr = 0xf064,
	.mnd_width = 0,
	.hid_width = 5,
	.parent_map = gcc_parent_map_2,
	.freq_tbl = ftbl_gcc_ufs_card_phy_aux_clk_src,
	.clkr.hw.init = &(struct clk_init_data){
		.name = "gcc_usb3_prim_phy_aux_clk_src",
		.parent_data = gcc_parent_data_2,
		.num_parents = 3,
		.flags = CLK_SET_RATE_PARENT,
		.ops = &clk_rcg2_ops,
	},
};

static struct clk_rcg2 gcc_usb3_sec_phy_aux_clk_src = {
	.cmd_rcgr = 0x10064,
	.mnd_width = 0,
	.hid_width
= 5,
	.parent_map = gcc_parent_map_2,
	.freq_tbl = ftbl_gcc_ufs_card_phy_aux_clk_src,
	.clkr.hw.init = &(struct clk_init_data){
		.name = "gcc_usb3_sec_phy_aux_clk_src",
		.parent_data = gcc_parent_data_2,
		.num_parents = 3,
		.flags = CLK_SET_RATE_PARENT,
		.ops = &clk_rcg2_ops,
	},
};

static struct clk_regmap_div gcc_usb30_prim_mock_utmi_postdiv_clk_src = {
	.reg = 0xf050,
	.shift = 0,
	.width = 4,
	.clkr.hw.init = &(struct clk_init_data) {
		.name = "gcc_usb30_prim_mock_utmi_postdiv_clk_src",
		.parent_data = &(const struct clk_parent_data){
			.hw = &gcc_usb30_prim_mock_utmi_clk_src.clkr.hw,
		},
		.num_parents = 1,
		.flags = CLK_SET_RATE_PARENT,
		.ops = &clk_regmap_div_ro_ops,
	},
};

static struct clk_regmap_div gcc_usb30_sec_mock_utmi_postdiv_clk_src = {
	.reg = 0x10050,
	.shift = 0,
	.width = 4,
	.clkr.hw.init = &(struct clk_init_data) {
		.name = "gcc_usb30_sec_mock_utmi_postdiv_clk_src",
		.parent_data = &(const struct clk_parent_data){
			.hw = &gcc_usb30_sec_mock_utmi_clk_src.clkr.hw,
		},
		.num_parents = 1,
		.flags = CLK_SET_RATE_PARENT,
		.ops = &clk_regmap_div_ro_ops,
	},
};

/* external clocks so add BRANCH_HALT_SKIP */
static struct clk_branch gcc_aggre_noc_pcie_0_axi_clk = {
	.halt_reg = 0x6b080,
	.halt_check = BRANCH_HALT_SKIP,
	.clkr = {
		.enable_reg = 0x52000,
		.enable_mask = BIT(12),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_aggre_noc_pcie_0_axi_clk",
			.ops = &clk_branch2_ops,
		},
	},
};

/* external clocks so add BRANCH_HALT_SKIP */
static struct clk_branch gcc_aggre_noc_pcie_1_axi_clk = {
	.halt_reg = 0x8d084,
	.halt_check = BRANCH_HALT_SKIP,
	.clkr = {
		.enable_reg = 0x52000,
		.enable_mask = BIT(11),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_aggre_noc_pcie_1_axi_clk",
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_aggre_noc_pcie_tbu_clk = {
	.halt_reg = 0x9000c,
	.halt_check = BRANCH_HALT_VOTED,
	.hwcg_reg = 0x9000c,
	.hwcg_bit = 1,
	.clkr = {
		.enable_reg = 0x52000,
		.enable_mask = BIT(18),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_aggre_noc_pcie_tbu_clk",
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_aggre_ufs_card_axi_clk = {
	.halt_reg = 0x750cc,
	.halt_check = BRANCH_HALT_VOTED,
	.hwcg_reg = 0x750cc,
	.hwcg_bit = 1,
	.clkr = {
		.enable_reg = 0x750cc,
		.enable_mask = BIT(0),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_aggre_ufs_card_axi_clk",
			.parent_data = &(const struct clk_parent_data){
				.hw = &gcc_ufs_card_axi_clk_src.clkr.hw,
			},
			.num_parents = 1,
			.flags = CLK_SET_RATE_PARENT,
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_aggre_ufs_card_axi_hw_ctl_clk = {
	.halt_reg = 0x750cc,
	.halt_check = BRANCH_HALT_VOTED,
	.hwcg_reg = 0x750cc,
	.hwcg_bit = 1,
	.clkr = {
		.enable_reg = 0x750cc,
		.enable_mask = BIT(1),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_aggre_ufs_card_axi_hw_ctl_clk",
			.parent_data = &(const struct clk_parent_data){
				.hw = &gcc_ufs_card_axi_clk_src.clkr.hw,
			},
			.num_parents = 1,
			.flags = CLK_SET_RATE_PARENT,
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_aggre_ufs_phy_axi_clk = {
	.halt_reg = 0x770cc,
	.halt_check = BRANCH_HALT_VOTED,
	.hwcg_reg = 0x770cc,
	.hwcg_bit = 1,
	.clkr = {
		.enable_reg = 0x770cc,
		.enable_mask = BIT(0),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_aggre_ufs_phy_axi_clk",
			.parent_data = &(const struct clk_parent_data){
				.hw = &gcc_ufs_phy_axi_clk_src.clkr.hw,
			},
			.num_parents = 1,
			.flags = CLK_SET_RATE_PARENT,
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_aggre_ufs_phy_axi_hw_ctl_clk = {
	.halt_reg = 0x770cc,
	.halt_check = BRANCH_HALT_VOTED,
	.hwcg_reg = 0x770cc,
	.hwcg_bit = 1,
	.clkr = {
		.enable_reg = 0x770cc,
		.enable_mask = BIT(1),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_aggre_ufs_phy_axi_hw_ctl_clk",
			.parent_data = &(const struct
clk_parent_data){
				.hw = &gcc_ufs_phy_axi_clk_src.clkr.hw,
			},
			.num_parents = 1,
			.flags = CLK_SET_RATE_PARENT,
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_aggre_usb3_prim_axi_clk = {
	.halt_reg = 0xf080,
	.halt_check = BRANCH_HALT_VOTED,
	.hwcg_reg = 0xf080,
	.hwcg_bit = 1,
	.clkr = {
		.enable_reg = 0xf080,
		.enable_mask = BIT(0),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_aggre_usb3_prim_axi_clk",
			.parent_data = &(const struct clk_parent_data){
				.hw = &gcc_usb30_prim_master_clk_src.clkr.hw,
			},
			.num_parents = 1,
			.flags = CLK_SET_RATE_PARENT,
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_aggre_usb3_sec_axi_clk = {
	.halt_reg = 0x10080,
	.halt_check = BRANCH_HALT_VOTED,
	.hwcg_reg = 0x10080,
	.hwcg_bit = 1,
	.clkr = {
		.enable_reg = 0x10080,
		.enable_mask = BIT(0),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_aggre_usb3_sec_axi_clk",
			.parent_data = &(const struct clk_parent_data){
				.hw = &gcc_usb30_sec_master_clk_src.clkr.hw,
			},
			.num_parents = 1,
			.flags = CLK_SET_RATE_PARENT,
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_boot_rom_ahb_clk = {
	.halt_reg = 0x38004,
	.halt_check = BRANCH_HALT_VOTED,
	.hwcg_reg = 0x38004,
	.hwcg_bit = 1,
	.clkr = {
		.enable_reg = 0x52000,
		.enable_mask = BIT(10),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_boot_rom_ahb_clk",
			.ops = &clk_branch2_ops,
		},
	},
};

/* external clocks so add BRANCH_HALT_SKIP */
static struct clk_branch gcc_camera_hf_axi_clk = {
	.halt_reg = 0x26010,
	.halt_check = BRANCH_HALT_SKIP,
	.hwcg_reg = 0x26010,
	.hwcg_bit = 1,
	.clkr = {
		.enable_reg = 0x26010,
		.enable_mask = BIT(0),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_camera_hf_axi_clk",
			.ops = &clk_branch2_ops,
		},
	},
};

/* external clocks so add BRANCH_HALT_SKIP */
static struct clk_branch gcc_camera_sf_axi_clk = {
	.halt_reg = 0x26014,
	.halt_check =
BRANCH_HALT_SKIP,
	.hwcg_reg = 0x26014,
	.hwcg_bit = 1,
	.clkr = {
		.enable_reg = 0x26014,
		.enable_mask = BIT(0),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_camera_sf_axi_clk",
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_cfg_noc_usb3_prim_axi_clk = {
	.halt_reg = 0xf07c,
	.halt_check = BRANCH_HALT_VOTED,
	.hwcg_reg = 0xf07c,
	.hwcg_bit = 1,
	.clkr = {
		.enable_reg = 0xf07c,
		.enable_mask = BIT(0),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_cfg_noc_usb3_prim_axi_clk",
			.parent_data = &(const struct clk_parent_data){
				.hw = &gcc_usb30_prim_master_clk_src.clkr.hw,
			},
			.num_parents = 1,
			.flags = CLK_SET_RATE_PARENT,
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_cfg_noc_usb3_sec_axi_clk = {
	.halt_reg = 0x1007c,
	.halt_check = BRANCH_HALT_VOTED,
	.hwcg_reg = 0x1007c,
	.hwcg_bit = 1,
	.clkr = {
		.enable_reg = 0x1007c,
		.enable_mask = BIT(0),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_cfg_noc_usb3_sec_axi_clk",
			.parent_data = &(const struct clk_parent_data){
				.hw = &gcc_usb30_sec_master_clk_src.clkr.hw,
			},
			.num_parents = 1,
			.flags = CLK_SET_RATE_PARENT,
			.ops = &clk_branch2_ops,
		},
	},
};

/* external clocks so add BRANCH_HALT_SKIP */
static struct clk_branch gcc_ddrss_gpu_axi_clk = {
	.halt_reg = 0x71154,
	.halt_check = BRANCH_HALT_SKIP,
	.hwcg_reg = 0x71154,
	.hwcg_bit = 1,
	.clkr = {
		.enable_reg = 0x71154,
		.enable_mask = BIT(0),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_ddrss_gpu_axi_clk",
			.ops = &clk_branch2_aon_ops,
		},
	},
};

/* external clocks so add BRANCH_HALT_SKIP */
static struct clk_branch gcc_ddrss_pcie_sf_tbu_clk = {
	.halt_reg = 0x8d080,
	.halt_check = BRANCH_HALT_SKIP,
	.hwcg_reg = 0x8d080,
	.hwcg_bit = 1,
	.clkr = {
		.enable_reg = 0x52000,
		.enable_mask = BIT(19),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_ddrss_pcie_sf_tbu_clk",
			.ops = &clk_branch2_ops,
		},
	},
};

/*
 external clocks so add BRANCH_HALT_SKIP */
static struct clk_branch gcc_disp_hf_axi_clk = {
	.halt_reg = 0x2700c,
	.halt_check = BRANCH_HALT_SKIP,
	.hwcg_reg = 0x2700c,
	.hwcg_bit = 1,
	.clkr = {
		.enable_reg = 0x2700c,
		.enable_mask = BIT(0),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_disp_hf_axi_clk",
			.ops = &clk_branch2_ops,
		},
	},
};

/* external clocks so add BRANCH_HALT_SKIP */
static struct clk_branch gcc_disp_sf_axi_clk = {
	.halt_reg = 0x27014,
	.halt_check = BRANCH_HALT_SKIP,
	.hwcg_reg = 0x27014,
	.hwcg_bit = 1,
	.clkr = {
		.enable_reg = 0x27014,
		.enable_mask = BIT(0),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_disp_sf_axi_clk",
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_gp1_clk = {
	.halt_reg = 0x64000,
	.halt_check = BRANCH_HALT,
	.clkr = {
		.enable_reg = 0x64000,
		.enable_mask = BIT(0),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_gp1_clk",
			.parent_data = &(const struct clk_parent_data){
				.hw = &gcc_gp1_clk_src.clkr.hw,
			},
			.num_parents = 1,
			.flags = CLK_SET_RATE_PARENT,
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_gp2_clk = {
	.halt_reg = 0x65000,
	.halt_check = BRANCH_HALT,
	.clkr = {
		.enable_reg = 0x65000,
		.enable_mask = BIT(0),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_gp2_clk",
			.parent_data = &(const struct clk_parent_data){
				.hw = &gcc_gp2_clk_src.clkr.hw,
			},
			.num_parents = 1,
			.flags = CLK_SET_RATE_PARENT,
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_gp3_clk = {
	.halt_reg = 0x66000,
	.halt_check = BRANCH_HALT,
	.clkr = {
		.enable_reg = 0x66000,
		.enable_mask = BIT(0),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_gp3_clk",
			.parent_data = &(const struct clk_parent_data){
				.hw = &gcc_gp3_clk_src.clkr.hw,
			},
			.num_parents = 1,
			.flags = CLK_SET_RATE_PARENT,
			.ops = &clk_branch2_ops,
		},
	},
};

/* clock on depends on external parent clock, so
 don't poll */
static struct clk_branch gcc_gpu_gpll0_clk_src = {
	.halt_check = BRANCH_HALT_DELAY,
	.clkr = {
		.enable_reg = 0x52000,
		.enable_mask = BIT(15),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_gpu_gpll0_clk_src",
			.parent_data = &(const struct clk_parent_data){
				.hw = &gcc_gpll0.clkr.hw,
			},
			.num_parents = 1,
			.flags = CLK_SET_RATE_PARENT,
			.ops = &clk_branch2_ops,
		},
	},
};

/* clock on depends on external parent clock, so don't poll */
static struct clk_branch gcc_gpu_gpll0_div_clk_src = {
	.halt_check = BRANCH_HALT_DELAY,
	.clkr = {
		.enable_reg = 0x52000,
		.enable_mask = BIT(16),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_gpu_gpll0_div_clk_src",
			.parent_data = &(const struct clk_parent_data){
				.hw = &gcc_gpll0_out_even.clkr.hw,
			},
			.num_parents = 1,
			.flags = CLK_SET_RATE_PARENT,
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_gpu_iref_en = {
	.halt_reg = 0x8c014,
	.halt_check = BRANCH_HALT,
	.clkr = {
		.enable_reg = 0x8c014,
		.enable_mask = BIT(0),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_gpu_iref_en",
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_gpu_memnoc_gfx_clk = {
	.halt_reg = 0x7100c,
	.halt_check = BRANCH_HALT_VOTED,
	.hwcg_reg = 0x7100c,
	.hwcg_bit = 1,
	.clkr = {
		.enable_reg = 0x7100c,
		.enable_mask = BIT(0),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_gpu_memnoc_gfx_clk",
			.ops = &clk_branch2_aon_ops,
		},
	},
};

static struct clk_branch gcc_gpu_snoc_dvm_gfx_clk = {
	.halt_reg = 0x71018,
	.halt_check = BRANCH_HALT,
	.clkr = {
		.enable_reg = 0x71018,
		.enable_mask = BIT(0),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_gpu_snoc_dvm_gfx_clk",
			.ops = &clk_branch2_aon_ops,
		},
	},
};

static struct clk_branch gcc_pcie0_phy_rchng_clk = {
	.halt_reg = 0x6b038,
	.halt_check = BRANCH_HALT_VOTED,
	.clkr = {
		.enable_reg = 0x52000,
		.enable_mask = BIT(22),
		.hw.init = &(struct
clk_init_data){
			.name = "gcc_pcie0_phy_rchng_clk",
			.parent_data = &(const struct clk_parent_data){
				.hw = &gcc_pcie_0_phy_rchng_clk_src.clkr.hw,
			},
			.num_parents = 1,
			.flags = CLK_SET_RATE_PARENT,
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_pcie1_phy_rchng_clk = {
	.halt_reg = 0x8d038,
	.halt_check = BRANCH_HALT_VOTED,
	.clkr = {
		.enable_reg = 0x52000,
		.enable_mask = BIT(23),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_pcie1_phy_rchng_clk",
			.parent_data = &(const struct clk_parent_data){
				.hw = &gcc_pcie_1_phy_rchng_clk_src.clkr.hw,
			},
			.num_parents = 1,
			.flags = CLK_SET_RATE_PARENT,
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_pcie_0_aux_clk = {
	.halt_reg = 0x6b028,
	.halt_check = BRANCH_HALT_VOTED,
	.clkr = {
		.enable_reg = 0x52008,
		.enable_mask = BIT(3),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_pcie_0_aux_clk",
			.parent_data = &(const struct clk_parent_data){
				.hw = &gcc_pcie_0_aux_clk_src.clkr.hw,
			},
			.num_parents = 1,
			.flags = CLK_SET_RATE_PARENT,
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_pcie_0_cfg_ahb_clk = {
	.halt_reg = 0x6b024,
	.halt_check = BRANCH_HALT_VOTED,
	.hwcg_reg = 0x6b024,
	.hwcg_bit = 1,
	.clkr = {
		.enable_reg = 0x52008,
		.enable_mask = BIT(2),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_pcie_0_cfg_ahb_clk",
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_pcie_0_clkref_en = {
	.halt_reg = 0x8c004,
	.halt_check = BRANCH_HALT,
	.clkr = {
		.enable_reg = 0x8c004,
		.enable_mask = BIT(0),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_pcie_0_clkref_en",
			.ops = &clk_branch2_ops,
		},
	},
};

/* external clocks so add BRANCH_HALT_SKIP */
static struct clk_branch gcc_pcie_0_mstr_axi_clk = {
	.halt_reg = 0x6b01c,
	.halt_check = BRANCH_HALT_SKIP,
	.hwcg_reg = 0x6b01c,
	.hwcg_bit = 1,
	.clkr = {
		.enable_reg = 0x52008,
		.enable_mask = BIT(1),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_pcie_0_mstr_axi_clk",
			.ops = &clk_branch2_ops,
		},
	},
};

/* external clocks so add BRANCH_HALT_SKIP */
static struct clk_branch gcc_pcie_0_pipe_clk = {
	.halt_reg = 0x6b030,
	.halt_check = BRANCH_HALT_SKIP,
	.clkr = {
		.enable_reg = 0x52008,
		.enable_mask = BIT(4),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_pcie_0_pipe_clk",
			.parent_data = &(const struct clk_parent_data){
				.hw = &gcc_pcie_0_pipe_clk_src.clkr.hw,
			},
			.num_parents = 1,
			.flags = CLK_SET_RATE_PARENT,
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_pcie_0_slv_axi_clk = {
	.halt_reg = 0x6b014,
	.halt_check = BRANCH_HALT_VOTED,
	.hwcg_reg = 0x6b014,
	.hwcg_bit = 1,
	.clkr = {
		.enable_reg = 0x52008,
		.enable_mask = BIT(0),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_pcie_0_slv_axi_clk",
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_pcie_0_slv_q2a_axi_clk = {
	.halt_reg = 0x6b010,
	.halt_check = BRANCH_HALT_VOTED,
	.clkr = {
		.enable_reg = 0x52008,
		.enable_mask = BIT(5),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_pcie_0_slv_q2a_axi_clk",
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_pcie_1_aux_clk = {
	.halt_reg = 0x8d028,
	.halt_check = BRANCH_HALT_VOTED,
	.clkr = {
		.enable_reg = 0x52000,
		.enable_mask = BIT(29),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_pcie_1_aux_clk",
			.parent_data = &(const struct clk_parent_data){
				.hw = &gcc_pcie_1_aux_clk_src.clkr.hw,
			},
			.num_parents = 1,
			.flags = CLK_SET_RATE_PARENT,
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_pcie_1_cfg_ahb_clk = {
	.halt_reg = 0x8d024,
	.halt_check = BRANCH_HALT_VOTED,
	.hwcg_reg = 0x8d024,
	.hwcg_bit = 1,
	.clkr = {
		.enable_reg = 0x52000,
		.enable_mask = BIT(28),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_pcie_1_cfg_ahb_clk",
			.ops = &clk_branch2_ops,
		},
	},
};

static struct
 clk_branch gcc_pcie_1_clkref_en = {
	.halt_reg = 0x8c008,
	.halt_check = BRANCH_HALT,
	.clkr = {
		.enable_reg = 0x8c008,
		.enable_mask = BIT(0),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_pcie_1_clkref_en",
			.ops = &clk_branch2_ops,
		},
	},
};

/* external clocks so add BRANCH_HALT_SKIP */
static struct clk_branch gcc_pcie_1_mstr_axi_clk = {
	.halt_reg = 0x8d01c,
	.halt_check = BRANCH_HALT_SKIP,
	.hwcg_reg = 0x8d01c,
	.hwcg_bit = 1,
	.clkr = {
		.enable_reg = 0x52000,
		.enable_mask = BIT(27),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_pcie_1_mstr_axi_clk",
			.ops = &clk_branch2_ops,
		},
	},
};

/* external clocks so add BRANCH_HALT_SKIP */
static struct clk_branch gcc_pcie_1_pipe_clk = {
	.halt_reg = 0x8d030,
	.halt_check = BRANCH_HALT_SKIP,
	.clkr = {
		.enable_reg = 0x52000,
		.enable_mask = BIT(30),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_pcie_1_pipe_clk",
			.parent_data = &(const struct clk_parent_data){
				.hw = &gcc_pcie_1_pipe_clk_src.clkr.hw,
			},
			.num_parents = 1,
			.flags = CLK_SET_RATE_PARENT,
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_pcie_1_slv_axi_clk = {
	.halt_reg = 0x8d014,
	.halt_check = BRANCH_HALT_VOTED,
	.hwcg_reg = 0x8d014,
	.hwcg_bit = 1,
	.clkr = {
		.enable_reg = 0x52000,
		.enable_mask = BIT(26),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_pcie_1_slv_axi_clk",
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_pcie_1_slv_q2a_axi_clk = {
	.halt_reg = 0x8d010,
	.halt_check = BRANCH_HALT_VOTED,
	.clkr = {
		.enable_reg = 0x52000,
		.enable_mask = BIT(25),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_pcie_1_slv_q2a_axi_clk",
			.ops = &clk_branch2_ops,
		},
	},
};

static struct clk_branch gcc_pdm2_clk = {
	.halt_reg = 0x3300c,
	.halt_check = BRANCH_HALT,
	.clkr = {
		.enable_reg = 0x3300c,
		.enable_mask = BIT(0),
		.hw.init = &(struct clk_init_data){
			.name = "gcc_pdm2_clk",
			.parent_data = &(const
struct clk_parent_data){ + .hw = &gcc_pdm2_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pdm_ahb_clk = { + .halt_reg = 0x33004, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x33004, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x33004, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pdm_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pdm_xo4_clk = { + .halt_reg = 0x33008, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x33008, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pdm_xo4_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qmip_camera_nrt_ahb_clk = { + .halt_reg = 0x26008, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x26008, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x26008, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qmip_camera_nrt_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qmip_camera_rt_ahb_clk = { + .halt_reg = 0x2600c, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x2600c, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x2600c, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qmip_camera_rt_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qmip_disp_ahb_clk = { + .halt_reg = 0x27008, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x27008, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x27008, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qmip_disp_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qmip_video_cvp_ahb_clk = { + .halt_reg = 0x28008, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x28008, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x28008, + .enable_mask = bit(0), + .hw.init = &(struct 
clk_init_data){ + .name = "gcc_qmip_video_cvp_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qmip_video_vcodec_ahb_clk = { + .halt_reg = 0x2800c, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x2800c, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x2800c, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qmip_video_vcodec_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap0_core_2x_clk = { + .halt_reg = 0x23008, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52008, + .enable_mask = bit(9), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap0_core_2x_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap0_core_clk = { + .halt_reg = 0x23000, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52008, + .enable_mask = bit(8), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap0_core_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap0_s0_clk = { + .halt_reg = 0x1700c, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52008, + .enable_mask = bit(10), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap0_s0_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_qupv3_wrap0_s0_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap0_s1_clk = { + .halt_reg = 0x1713c, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52008, + .enable_mask = bit(11), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap0_s1_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_qupv3_wrap0_s1_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap0_s2_clk = { + .halt_reg = 0x1726c, 
+ .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52008, + .enable_mask = bit(12), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap0_s2_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_qupv3_wrap0_s2_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap0_s3_clk = { + .halt_reg = 0x1739c, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52008, + .enable_mask = bit(13), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap0_s3_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_qupv3_wrap0_s3_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap0_s4_clk = { + .halt_reg = 0x174cc, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52008, + .enable_mask = bit(14), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap0_s4_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_qupv3_wrap0_s4_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap0_s5_clk = { + .halt_reg = 0x175fc, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52008, + .enable_mask = bit(15), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap0_s5_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_qupv3_wrap0_s5_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap0_s6_clk = { + .halt_reg = 0x1772c, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52008, + .enable_mask = bit(16), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap0_s6_clk", + .parent_data = &(const struct clk_parent_data){ + .hw 
= &gcc_qupv3_wrap0_s6_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap0_s7_clk = { + .halt_reg = 0x1785c, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52008, + .enable_mask = bit(17), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap0_s7_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_qupv3_wrap0_s7_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap1_core_2x_clk = { + .halt_reg = 0x23140, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52008, + .enable_mask = bit(18), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap1_core_2x_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap1_core_clk = { + .halt_reg = 0x23138, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52008, + .enable_mask = bit(19), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap1_core_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap_1_m_ahb_clk = { + .halt_reg = 0x18004, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x18004, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x52008, + .enable_mask = bit(20), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap_1_m_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap_1_s_ahb_clk = { + .halt_reg = 0x18008, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x18008, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x52008, + .enable_mask = bit(21), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap_1_s_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap1_s0_clk = { + .halt_reg = 0x1800c, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 
0x52008, + .enable_mask = bit(22), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap1_s0_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_qupv3_wrap1_s0_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap1_s1_clk = { + .halt_reg = 0x1813c, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52008, + .enable_mask = bit(23), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap1_s1_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_qupv3_wrap1_s1_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap1_s2_clk = { + .halt_reg = 0x1826c, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52008, + .enable_mask = bit(24), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap1_s2_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_qupv3_wrap1_s2_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap1_s3_clk = { + .halt_reg = 0x1839c, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52008, + .enable_mask = bit(25), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap1_s3_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_qupv3_wrap1_s3_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap1_s4_clk = { + .halt_reg = 0x184cc, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52008, + .enable_mask = bit(26), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap1_s4_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_qupv3_wrap1_s4_clk_src.clkr.hw, + }, + .num_parents = 
1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap1_s5_clk = { + .halt_reg = 0x185fc, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52008, + .enable_mask = bit(27), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap1_s5_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_qupv3_wrap1_s5_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap2_core_2x_clk = { + .halt_reg = 0x23278, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52010, + .enable_mask = bit(3), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap2_core_2x_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap2_core_clk = { + .halt_reg = 0x23270, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52010, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap2_core_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap2_s0_clk = { + .halt_reg = 0x1e00c, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52010, + .enable_mask = bit(4), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap2_s0_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_qupv3_wrap2_s0_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap2_s1_clk = { + .halt_reg = 0x1e13c, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52010, + .enable_mask = bit(5), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap2_s1_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_qupv3_wrap2_s1_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + 
+static struct clk_branch gcc_qupv3_wrap2_s2_clk = { + .halt_reg = 0x1e26c, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52010, + .enable_mask = bit(6), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap2_s2_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_qupv3_wrap2_s2_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap2_s3_clk = { + .halt_reg = 0x1e39c, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52010, + .enable_mask = bit(7), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap2_s3_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_qupv3_wrap2_s3_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap2_s4_clk = { + .halt_reg = 0x1e4cc, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52010, + .enable_mask = bit(8), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap2_s4_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_qupv3_wrap2_s4_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap2_s5_clk = { + .halt_reg = 0x1e5fc, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52010, + .enable_mask = bit(9), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap2_s5_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_qupv3_wrap2_s5_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap_0_m_ahb_clk = { + .halt_reg = 0x17004, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x17004, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x52008, + .enable_mask = bit(6), + .hw.init = 
&(struct clk_init_data){ + .name = "gcc_qupv3_wrap_0_m_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap_0_s_ahb_clk = { + .halt_reg = 0x17008, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x17008, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x52008, + .enable_mask = bit(7), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap_0_s_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap_2_m_ahb_clk = { + .halt_reg = 0x1e004, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x1e004, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x52010, + .enable_mask = bit(2), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap_2_m_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap_2_s_ahb_clk = { + .halt_reg = 0x1e008, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x1e008, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x52010, + .enable_mask = bit(1), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap_2_s_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_sdcc2_ahb_clk = { + .halt_reg = 0x14008, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x14008, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_sdcc2_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_sdcc2_apps_clk = { + .halt_reg = 0x14004, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x14004, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_sdcc2_apps_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_sdcc2_apps_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_sdcc4_ahb_clk = { + .halt_reg = 0x16008, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x16008, + .enable_mask = bit(0), + .hw.init = 
&(struct clk_init_data){ + .name = "gcc_sdcc4_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_sdcc4_apps_clk = { + .halt_reg = 0x16004, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x16004, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_sdcc4_apps_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_sdcc4_apps_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_throttle_pcie_ahb_clk = { + .halt_reg = 0x9044, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x9044, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_throttle_pcie_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_ufs_1_clkref_en = { + .halt_reg = 0x8c000, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x8c000, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_1_clkref_en", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_ufs_card_ahb_clk = { + .halt_reg = 0x75018, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x75018, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x75018, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_card_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_ufs_card_axi_clk = { + .halt_reg = 0x75010, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x75010, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x75010, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_card_axi_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_ufs_card_axi_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_ufs_card_axi_hw_ctl_clk = { + .halt_reg = 0x75010, + .halt_check = 
branch_halt_voted, + .hwcg_reg = 0x75010, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x75010, + .enable_mask = bit(1), + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_card_axi_hw_ctl_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_ufs_card_axi_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_ufs_card_ice_core_clk = { + .halt_reg = 0x75064, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x75064, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x75064, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_card_ice_core_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_ufs_card_ice_core_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_ufs_card_ice_core_hw_ctl_clk = { + .halt_reg = 0x75064, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x75064, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x75064, + .enable_mask = bit(1), + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_card_ice_core_hw_ctl_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_ufs_card_ice_core_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_ufs_card_phy_aux_clk = { + .halt_reg = 0x7509c, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x7509c, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x7509c, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_card_phy_aux_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_ufs_card_phy_aux_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_ufs_card_phy_aux_hw_ctl_clk = { + .halt_reg = 0x7509c, + .halt_check = branch_halt_voted, + 
.hwcg_reg = 0x7509c, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x7509c, + .enable_mask = bit(1), + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_card_phy_aux_hw_ctl_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_ufs_card_phy_aux_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +/* clock on depends on external parent clock, so don't poll */ +static struct clk_branch gcc_ufs_card_rx_symbol_0_clk = { + .halt_reg = 0x75020, + .halt_check = branch_halt_delay, + .clkr = { + .enable_reg = 0x75020, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_card_rx_symbol_0_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_ufs_card_rx_symbol_0_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +/* clock on depends on external parent clock, so don't poll */ +static struct clk_branch gcc_ufs_card_rx_symbol_1_clk = { + .halt_reg = 0x750b8, + .halt_check = branch_halt_delay, + .clkr = { + .enable_reg = 0x750b8, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_card_rx_symbol_1_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_ufs_card_rx_symbol_1_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +/* clock on depends on external parent clock, so don't poll */ +static struct clk_branch gcc_ufs_card_tx_symbol_0_clk = { + .halt_reg = 0x7501c, + .halt_check = branch_halt_delay, + .clkr = { + .enable_reg = 0x7501c, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_card_tx_symbol_0_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_ufs_card_tx_symbol_0_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch 
gcc_ufs_card_unipro_core_clk = { + .halt_reg = 0x7505c, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x7505c, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x7505c, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_card_unipro_core_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_ufs_card_unipro_core_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_ufs_card_unipro_core_hw_ctl_clk = { + .halt_reg = 0x7505c, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x7505c, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x7505c, + .enable_mask = bit(1), + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_card_unipro_core_hw_ctl_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_ufs_card_unipro_core_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_ufs_phy_ahb_clk = { + .halt_reg = 0x77018, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x77018, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x77018, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_phy_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_ufs_phy_axi_clk = { + .halt_reg = 0x77010, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x77010, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x77010, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_phy_axi_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_ufs_phy_axi_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_ufs_phy_axi_hw_ctl_clk = { + .halt_reg = 0x77010, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x77010, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x77010, + .enable_mask = 
bit(1), + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_phy_axi_hw_ctl_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_ufs_phy_axi_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_ufs_phy_ice_core_clk = { + .halt_reg = 0x77064, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x77064, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x77064, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_phy_ice_core_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_ufs_phy_ice_core_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_ufs_phy_ice_core_hw_ctl_clk = { + .halt_reg = 0x77064, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x77064, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x77064, + .enable_mask = bit(1), + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_phy_ice_core_hw_ctl_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_ufs_phy_ice_core_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_ufs_phy_phy_aux_clk = { + .halt_reg = 0x7709c, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x7709c, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x7709c, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_phy_phy_aux_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_ufs_phy_phy_aux_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_ufs_phy_phy_aux_hw_ctl_clk = { + .halt_reg = 0x7709c, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x7709c, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x7709c, + .enable_mask = bit(1), + .hw.init = &(struct 
clk_init_data){ + .name = "gcc_ufs_phy_phy_aux_hw_ctl_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_ufs_phy_phy_aux_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +/* clock on depends on external parent clock, so don't poll */ +static struct clk_branch gcc_ufs_phy_rx_symbol_0_clk = { + .halt_reg = 0x77020, + .halt_check = branch_halt_delay, + .clkr = { + .enable_reg = 0x77020, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_phy_rx_symbol_0_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_ufs_phy_rx_symbol_0_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +/* clock on depends on external parent clock, so don't poll */ +static struct clk_branch gcc_ufs_phy_rx_symbol_1_clk = { + .halt_reg = 0x770b8, + .halt_check = branch_halt_delay, + .clkr = { + .enable_reg = 0x770b8, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_phy_rx_symbol_1_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_ufs_phy_rx_symbol_1_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +/* clock on depends on external parent clock, so don't poll */ +static struct clk_branch gcc_ufs_phy_tx_symbol_0_clk = { + .halt_reg = 0x7701c, + .halt_check = branch_halt_delay, + .clkr = { + .enable_reg = 0x7701c, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_phy_tx_symbol_0_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_ufs_phy_tx_symbol_0_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_ufs_phy_unipro_core_clk = { + .halt_reg = 0x7705c, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x7705c, + .hwcg_bit = 1, + .clkr = { + 
.enable_reg = 0x7705c, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_phy_unipro_core_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_ufs_phy_unipro_core_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_ufs_phy_unipro_core_hw_ctl_clk = { + .halt_reg = 0x7705c, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x7705c, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x7705c, + .enable_mask = bit(1), + .hw.init = &(struct clk_init_data){ + .name = "gcc_ufs_phy_unipro_core_hw_ctl_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_ufs_phy_unipro_core_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_usb30_prim_master_clk = { + .halt_reg = 0xf010, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0xf010, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_usb30_prim_master_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_usb30_prim_master_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_usb30_prim_master_clk__force_mem_core_on = { + .halt_reg = 0xf010, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0xf010, + .enable_mask = bit(14), + .hw.init = &(struct clk_init_data){ + .name = "gcc_usb30_prim_master_clk__force_mem_core_on", + .ops = &clk_branch_simple_ops, + }, + }, +}; + +static struct clk_branch gcc_usb30_prim_mock_utmi_clk = { + .halt_reg = 0xf01c, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0xf01c, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_usb30_prim_mock_utmi_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = + &gcc_usb30_prim_mock_utmi_postdiv_clk_src.clkr.hw, + }, + .num_parents = 
1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_usb30_prim_sleep_clk = { + .halt_reg = 0xf018, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0xf018, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_usb30_prim_sleep_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_usb30_sec_master_clk = { + .halt_reg = 0x10010, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x10010, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_usb30_sec_master_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_usb30_sec_master_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_usb30_sec_master_clk__force_mem_core_on = { + .halt_reg = 0x10010, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x10010, + .enable_mask = bit(14), + .hw.init = &(struct clk_init_data){ + .name = "gcc_usb30_sec_master_clk__force_mem_core_on", + .ops = &clk_branch_simple_ops, + }, + }, +}; + +static struct clk_branch gcc_usb30_sec_mock_utmi_clk = { + .halt_reg = 0x1001c, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x1001c, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_usb30_sec_mock_utmi_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = + &gcc_usb30_sec_mock_utmi_postdiv_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_usb30_sec_sleep_clk = { + .halt_reg = 0x10018, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x10018, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_usb30_sec_sleep_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_usb3_prim_phy_aux_clk = { + .halt_reg = 0xf054, + .halt_check = branch_halt, + 
.clkr = { + .enable_reg = 0xf054, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_usb3_prim_phy_aux_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_usb3_prim_phy_aux_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_usb3_prim_phy_com_aux_clk = { + .halt_reg = 0xf058, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0xf058, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_usb3_prim_phy_com_aux_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_usb3_prim_phy_aux_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +/* clock on depends on external parent clock, so don't poll */ +static struct clk_branch gcc_usb3_prim_phy_pipe_clk = { + .halt_reg = 0xf05c, + .halt_check = branch_halt_delay, + .hwcg_reg = 0xf05c, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0xf05c, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_usb3_prim_phy_pipe_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_usb3_prim_phy_pipe_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_usb3_sec_clkref_en = { + .halt_reg = 0x8c010, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x8c010, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_usb3_sec_clkref_en", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_usb3_sec_phy_aux_clk = { + .halt_reg = 0x10054, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x10054, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_usb3_sec_phy_aux_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_usb3_sec_phy_aux_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags 
= clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_usb3_sec_phy_com_aux_clk = { + .halt_reg = 0x10058, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x10058, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_usb3_sec_phy_com_aux_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_usb3_sec_phy_aux_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +/* clock on depends on external parent clock, so don't poll */ +static struct clk_branch gcc_usb3_sec_phy_pipe_clk = { + .halt_reg = 0x1005c, + .halt_check = branch_halt_delay, + .clkr = { + .enable_reg = 0x1005c, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_usb3_sec_phy_pipe_clk", + .parent_data = &(const struct clk_parent_data){ + .hw = &gcc_usb3_sec_phy_pipe_clk_src.clkr.hw, + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +/* external clocks so add branch_halt_skip */ +static struct clk_branch gcc_video_axi0_clk = { + .halt_reg = 0x28010, + .halt_check = branch_halt_skip, + .hwcg_reg = 0x28010, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x28010, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_video_axi0_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +/* external clocks so add branch_halt_skip */ +static struct clk_branch gcc_video_axi1_clk = { + .halt_reg = 0x28018, + .halt_check = branch_halt_skip, + .hwcg_reg = 0x28018, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x28018, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_video_axi1_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_regmap *gcc_sm8350_clocks[] = { + [gcc_aggre_noc_pcie_0_axi_clk] = &gcc_aggre_noc_pcie_0_axi_clk.clkr, + [gcc_aggre_noc_pcie_1_axi_clk] = &gcc_aggre_noc_pcie_1_axi_clk.clkr, + [gcc_aggre_noc_pcie_tbu_clk] = 
&gcc_aggre_noc_pcie_tbu_clk.clkr, + [gcc_aggre_ufs_card_axi_clk] = &gcc_aggre_ufs_card_axi_clk.clkr, + [gcc_aggre_ufs_card_axi_hw_ctl_clk] = &gcc_aggre_ufs_card_axi_hw_ctl_clk.clkr, + [gcc_aggre_ufs_phy_axi_clk] = &gcc_aggre_ufs_phy_axi_clk.clkr, + [gcc_aggre_ufs_phy_axi_hw_ctl_clk] = &gcc_aggre_ufs_phy_axi_hw_ctl_clk.clkr, + [gcc_aggre_usb3_prim_axi_clk] = &gcc_aggre_usb3_prim_axi_clk.clkr, + [gcc_aggre_usb3_sec_axi_clk] = &gcc_aggre_usb3_sec_axi_clk.clkr, + [gcc_boot_rom_ahb_clk] = &gcc_boot_rom_ahb_clk.clkr, + [gcc_camera_hf_axi_clk] = &gcc_camera_hf_axi_clk.clkr, + [gcc_camera_sf_axi_clk] = &gcc_camera_sf_axi_clk.clkr, + [gcc_cfg_noc_usb3_prim_axi_clk] = &gcc_cfg_noc_usb3_prim_axi_clk.clkr, + [gcc_cfg_noc_usb3_sec_axi_clk] = &gcc_cfg_noc_usb3_sec_axi_clk.clkr, + [gcc_ddrss_gpu_axi_clk] = &gcc_ddrss_gpu_axi_clk.clkr, + [gcc_ddrss_pcie_sf_tbu_clk] = &gcc_ddrss_pcie_sf_tbu_clk.clkr, + [gcc_disp_hf_axi_clk] = &gcc_disp_hf_axi_clk.clkr, + [gcc_disp_sf_axi_clk] = &gcc_disp_sf_axi_clk.clkr, + [gcc_gp1_clk] = &gcc_gp1_clk.clkr, + [gcc_gp1_clk_src] = &gcc_gp1_clk_src.clkr, + [gcc_gp2_clk] = &gcc_gp2_clk.clkr, + [gcc_gp2_clk_src] = &gcc_gp2_clk_src.clkr, + [gcc_gp3_clk] = &gcc_gp3_clk.clkr, + [gcc_gp3_clk_src] = &gcc_gp3_clk_src.clkr, + [gcc_gpll0] = &gcc_gpll0.clkr, + [gcc_gpll0_out_even] = &gcc_gpll0_out_even.clkr, + [gcc_gpll4] = &gcc_gpll4.clkr, + [gcc_gpll9] = &gcc_gpll9.clkr, + [gcc_gpu_gpll0_clk_src] = &gcc_gpu_gpll0_clk_src.clkr, + [gcc_gpu_gpll0_div_clk_src] = &gcc_gpu_gpll0_div_clk_src.clkr, + [gcc_gpu_iref_en] = &gcc_gpu_iref_en.clkr, + [gcc_gpu_memnoc_gfx_clk] = &gcc_gpu_memnoc_gfx_clk.clkr, + [gcc_gpu_snoc_dvm_gfx_clk] = &gcc_gpu_snoc_dvm_gfx_clk.clkr, + [gcc_pcie0_phy_rchng_clk] = &gcc_pcie0_phy_rchng_clk.clkr, + [gcc_pcie1_phy_rchng_clk] = &gcc_pcie1_phy_rchng_clk.clkr, + [gcc_pcie_0_aux_clk] = &gcc_pcie_0_aux_clk.clkr, + [gcc_pcie_0_aux_clk_src] = &gcc_pcie_0_aux_clk_src.clkr, + [gcc_pcie_0_cfg_ahb_clk] = &gcc_pcie_0_cfg_ahb_clk.clkr, + 
[gcc_pcie_0_clkref_en] = &gcc_pcie_0_clkref_en.clkr, + [gcc_pcie_0_mstr_axi_clk] = &gcc_pcie_0_mstr_axi_clk.clkr, + [gcc_pcie_0_phy_rchng_clk_src] = &gcc_pcie_0_phy_rchng_clk_src.clkr, + [gcc_pcie_0_pipe_clk] = &gcc_pcie_0_pipe_clk.clkr, + [gcc_pcie_0_pipe_clk_src] = &gcc_pcie_0_pipe_clk_src.clkr, + [gcc_pcie_0_slv_axi_clk] = &gcc_pcie_0_slv_axi_clk.clkr, + [gcc_pcie_0_slv_q2a_axi_clk] = &gcc_pcie_0_slv_q2a_axi_clk.clkr, + [gcc_pcie_1_aux_clk] = &gcc_pcie_1_aux_clk.clkr, + [gcc_pcie_1_aux_clk_src] = &gcc_pcie_1_aux_clk_src.clkr, + [gcc_pcie_1_cfg_ahb_clk] = &gcc_pcie_1_cfg_ahb_clk.clkr, + [gcc_pcie_1_clkref_en] = &gcc_pcie_1_clkref_en.clkr, + [gcc_pcie_1_mstr_axi_clk] = &gcc_pcie_1_mstr_axi_clk.clkr, + [gcc_pcie_1_phy_rchng_clk_src] = &gcc_pcie_1_phy_rchng_clk_src.clkr, + [gcc_pcie_1_pipe_clk] = &gcc_pcie_1_pipe_clk.clkr, + [gcc_pcie_1_pipe_clk_src] = &gcc_pcie_1_pipe_clk_src.clkr, + [gcc_pcie_1_slv_axi_clk] = &gcc_pcie_1_slv_axi_clk.clkr, + [gcc_pcie_1_slv_q2a_axi_clk] = &gcc_pcie_1_slv_q2a_axi_clk.clkr, + [gcc_pdm2_clk] = &gcc_pdm2_clk.clkr, + [gcc_pdm2_clk_src] = &gcc_pdm2_clk_src.clkr, + [gcc_pdm_ahb_clk] = &gcc_pdm_ahb_clk.clkr, + [gcc_pdm_xo4_clk] = &gcc_pdm_xo4_clk.clkr, + [gcc_qmip_camera_nrt_ahb_clk] = &gcc_qmip_camera_nrt_ahb_clk.clkr, + [gcc_qmip_camera_rt_ahb_clk] = &gcc_qmip_camera_rt_ahb_clk.clkr, + [gcc_qmip_disp_ahb_clk] = &gcc_qmip_disp_ahb_clk.clkr, + [gcc_qmip_video_cvp_ahb_clk] = &gcc_qmip_video_cvp_ahb_clk.clkr, + [gcc_qmip_video_vcodec_ahb_clk] = &gcc_qmip_video_vcodec_ahb_clk.clkr, + [gcc_qupv3_wrap0_core_2x_clk] = &gcc_qupv3_wrap0_core_2x_clk.clkr, + [gcc_qupv3_wrap0_core_clk] = &gcc_qupv3_wrap0_core_clk.clkr, + [gcc_qupv3_wrap0_s0_clk] = &gcc_qupv3_wrap0_s0_clk.clkr, + [gcc_qupv3_wrap0_s0_clk_src] = &gcc_qupv3_wrap0_s0_clk_src.clkr, + [gcc_qupv3_wrap0_s1_clk] = &gcc_qupv3_wrap0_s1_clk.clkr, + [gcc_qupv3_wrap0_s1_clk_src] = &gcc_qupv3_wrap0_s1_clk_src.clkr, + [gcc_qupv3_wrap0_s2_clk] = &gcc_qupv3_wrap0_s2_clk.clkr, + 
[gcc_qupv3_wrap0_s2_clk_src] = &gcc_qupv3_wrap0_s2_clk_src.clkr, + [gcc_qupv3_wrap0_s3_clk] = &gcc_qupv3_wrap0_s3_clk.clkr, + [gcc_qupv3_wrap0_s3_clk_src] = &gcc_qupv3_wrap0_s3_clk_src.clkr, + [gcc_qupv3_wrap0_s4_clk] = &gcc_qupv3_wrap0_s4_clk.clkr, + [gcc_qupv3_wrap0_s4_clk_src] = &gcc_qupv3_wrap0_s4_clk_src.clkr, + [gcc_qupv3_wrap0_s5_clk] = &gcc_qupv3_wrap0_s5_clk.clkr, + [gcc_qupv3_wrap0_s5_clk_src] = &gcc_qupv3_wrap0_s5_clk_src.clkr, + [gcc_qupv3_wrap0_s6_clk] = &gcc_qupv3_wrap0_s6_clk.clkr, + [gcc_qupv3_wrap0_s6_clk_src] = &gcc_qupv3_wrap0_s6_clk_src.clkr, + [gcc_qupv3_wrap0_s7_clk] = &gcc_qupv3_wrap0_s7_clk.clkr, + [gcc_qupv3_wrap0_s7_clk_src] = &gcc_qupv3_wrap0_s7_clk_src.clkr, + [gcc_qupv3_wrap1_core_2x_clk] = &gcc_qupv3_wrap1_core_2x_clk.clkr, + [gcc_qupv3_wrap1_core_clk] = &gcc_qupv3_wrap1_core_clk.clkr, + [gcc_qupv3_wrap1_s0_clk] = &gcc_qupv3_wrap1_s0_clk.clkr, + [gcc_qupv3_wrap1_s0_clk_src] = &gcc_qupv3_wrap1_s0_clk_src.clkr, + [gcc_qupv3_wrap1_s1_clk] = &gcc_qupv3_wrap1_s1_clk.clkr, + [gcc_qupv3_wrap1_s1_clk_src] = &gcc_qupv3_wrap1_s1_clk_src.clkr, + [gcc_qupv3_wrap1_s2_clk] = &gcc_qupv3_wrap1_s2_clk.clkr, + [gcc_qupv3_wrap1_s2_clk_src] = &gcc_qupv3_wrap1_s2_clk_src.clkr, + [gcc_qupv3_wrap1_s3_clk] = &gcc_qupv3_wrap1_s3_clk.clkr, + [gcc_qupv3_wrap1_s3_clk_src] = &gcc_qupv3_wrap1_s3_clk_src.clkr, + [gcc_qupv3_wrap1_s4_clk] = &gcc_qupv3_wrap1_s4_clk.clkr, + [gcc_qupv3_wrap1_s4_clk_src] = &gcc_qupv3_wrap1_s4_clk_src.clkr, + [gcc_qupv3_wrap1_s5_clk] = &gcc_qupv3_wrap1_s5_clk.clkr, + [gcc_qupv3_wrap1_s5_clk_src] = &gcc_qupv3_wrap1_s5_clk_src.clkr, + [gcc_qupv3_wrap2_core_2x_clk] = &gcc_qupv3_wrap2_core_2x_clk.clkr, + [gcc_qupv3_wrap2_core_clk] = &gcc_qupv3_wrap2_core_clk.clkr, + [gcc_qupv3_wrap2_s0_clk] = &gcc_qupv3_wrap2_s0_clk.clkr, + [gcc_qupv3_wrap2_s0_clk_src] = &gcc_qupv3_wrap2_s0_clk_src.clkr, + [gcc_qupv3_wrap2_s1_clk] = &gcc_qupv3_wrap2_s1_clk.clkr, + [gcc_qupv3_wrap2_s1_clk_src] = &gcc_qupv3_wrap2_s1_clk_src.clkr, + [gcc_qupv3_wrap2_s2_clk] = 
&gcc_qupv3_wrap2_s2_clk.clkr, + [gcc_qupv3_wrap2_s2_clk_src] = &gcc_qupv3_wrap2_s2_clk_src.clkr, + [gcc_qupv3_wrap2_s3_clk] = &gcc_qupv3_wrap2_s3_clk.clkr, + [gcc_qupv3_wrap2_s3_clk_src] = &gcc_qupv3_wrap2_s3_clk_src.clkr, + [gcc_qupv3_wrap2_s4_clk] = &gcc_qupv3_wrap2_s4_clk.clkr, + [gcc_qupv3_wrap2_s4_clk_src] = &gcc_qupv3_wrap2_s4_clk_src.clkr, + [gcc_qupv3_wrap2_s5_clk] = &gcc_qupv3_wrap2_s5_clk.clkr, + [gcc_qupv3_wrap2_s5_clk_src] = &gcc_qupv3_wrap2_s5_clk_src.clkr, + [gcc_qupv3_wrap_0_m_ahb_clk] = &gcc_qupv3_wrap_0_m_ahb_clk.clkr, + [gcc_qupv3_wrap_0_s_ahb_clk] = &gcc_qupv3_wrap_0_s_ahb_clk.clkr, + [gcc_qupv3_wrap_1_m_ahb_clk] = &gcc_qupv3_wrap_1_m_ahb_clk.clkr, + [gcc_qupv3_wrap_1_s_ahb_clk] = &gcc_qupv3_wrap_1_s_ahb_clk.clkr, + [gcc_qupv3_wrap_2_m_ahb_clk] = &gcc_qupv3_wrap_2_m_ahb_clk.clkr, + [gcc_qupv3_wrap_2_s_ahb_clk] = &gcc_qupv3_wrap_2_s_ahb_clk.clkr, + [gcc_sdcc2_ahb_clk] = &gcc_sdcc2_ahb_clk.clkr, + [gcc_sdcc2_apps_clk] = &gcc_sdcc2_apps_clk.clkr, + [gcc_sdcc2_apps_clk_src] = &gcc_sdcc2_apps_clk_src.clkr, + [gcc_sdcc4_ahb_clk] = &gcc_sdcc4_ahb_clk.clkr, + [gcc_sdcc4_apps_clk] = &gcc_sdcc4_apps_clk.clkr, + [gcc_sdcc4_apps_clk_src] = &gcc_sdcc4_apps_clk_src.clkr, + [gcc_throttle_pcie_ahb_clk] = &gcc_throttle_pcie_ahb_clk.clkr, + [gcc_ufs_1_clkref_en] = &gcc_ufs_1_clkref_en.clkr, + [gcc_ufs_card_ahb_clk] = &gcc_ufs_card_ahb_clk.clkr, + [gcc_ufs_card_axi_clk] = &gcc_ufs_card_axi_clk.clkr, + [gcc_ufs_card_axi_clk_src] = &gcc_ufs_card_axi_clk_src.clkr, + [gcc_ufs_card_axi_hw_ctl_clk] = &gcc_ufs_card_axi_hw_ctl_clk.clkr, + [gcc_ufs_card_ice_core_clk] = &gcc_ufs_card_ice_core_clk.clkr, + [gcc_ufs_card_ice_core_clk_src] = &gcc_ufs_card_ice_core_clk_src.clkr, + [gcc_ufs_card_ice_core_hw_ctl_clk] = &gcc_ufs_card_ice_core_hw_ctl_clk.clkr, + [gcc_ufs_card_phy_aux_clk] = &gcc_ufs_card_phy_aux_clk.clkr, + [gcc_ufs_card_phy_aux_clk_src] = &gcc_ufs_card_phy_aux_clk_src.clkr, + [gcc_ufs_card_phy_aux_hw_ctl_clk] = &gcc_ufs_card_phy_aux_hw_ctl_clk.clkr, + 
[gcc_ufs_card_rx_symbol_0_clk] = &gcc_ufs_card_rx_symbol_0_clk.clkr, + [gcc_ufs_card_rx_symbol_0_clk_src] = &gcc_ufs_card_rx_symbol_0_clk_src.clkr, + [gcc_ufs_card_rx_symbol_1_clk] = &gcc_ufs_card_rx_symbol_1_clk.clkr, + [gcc_ufs_card_rx_symbol_1_clk_src] = &gcc_ufs_card_rx_symbol_1_clk_src.clkr, + [gcc_ufs_card_tx_symbol_0_clk] = &gcc_ufs_card_tx_symbol_0_clk.clkr, + [gcc_ufs_card_tx_symbol_0_clk_src] = &gcc_ufs_card_tx_symbol_0_clk_src.clkr, + [gcc_ufs_card_unipro_core_clk] = &gcc_ufs_card_unipro_core_clk.clkr, + [gcc_ufs_card_unipro_core_clk_src] = &gcc_ufs_card_unipro_core_clk_src.clkr, + [gcc_ufs_card_unipro_core_hw_ctl_clk] = &gcc_ufs_card_unipro_core_hw_ctl_clk.clkr, + [gcc_ufs_phy_ahb_clk] = &gcc_ufs_phy_ahb_clk.clkr, + [gcc_ufs_phy_axi_clk] = &gcc_ufs_phy_axi_clk.clkr, + [gcc_ufs_phy_axi_clk_src] = &gcc_ufs_phy_axi_clk_src.clkr, + [gcc_ufs_phy_axi_hw_ctl_clk] = &gcc_ufs_phy_axi_hw_ctl_clk.clkr, + [gcc_ufs_phy_ice_core_clk] = &gcc_ufs_phy_ice_core_clk.clkr, + [gcc_ufs_phy_ice_core_clk_src] = &gcc_ufs_phy_ice_core_clk_src.clkr, + [gcc_ufs_phy_ice_core_hw_ctl_clk] = &gcc_ufs_phy_ice_core_hw_ctl_clk.clkr, + [gcc_ufs_phy_phy_aux_clk] = &gcc_ufs_phy_phy_aux_clk.clkr, + [gcc_ufs_phy_phy_aux_clk_src] = &gcc_ufs_phy_phy_aux_clk_src.clkr, + [gcc_ufs_phy_phy_aux_hw_ctl_clk] = &gcc_ufs_phy_phy_aux_hw_ctl_clk.clkr, + [gcc_ufs_phy_rx_symbol_0_clk] = &gcc_ufs_phy_rx_symbol_0_clk.clkr, + [gcc_ufs_phy_rx_symbol_0_clk_src] = &gcc_ufs_phy_rx_symbol_0_clk_src.clkr, + [gcc_ufs_phy_rx_symbol_1_clk] = &gcc_ufs_phy_rx_symbol_1_clk.clkr, + [gcc_ufs_phy_rx_symbol_1_clk_src] = &gcc_ufs_phy_rx_symbol_1_clk_src.clkr, + [gcc_ufs_phy_tx_symbol_0_clk] = &gcc_ufs_phy_tx_symbol_0_clk.clkr, + [gcc_ufs_phy_tx_symbol_0_clk_src] = &gcc_ufs_phy_tx_symbol_0_clk_src.clkr, + [gcc_ufs_phy_unipro_core_clk] = &gcc_ufs_phy_unipro_core_clk.clkr, + [gcc_ufs_phy_unipro_core_clk_src] = &gcc_ufs_phy_unipro_core_clk_src.clkr, + [gcc_ufs_phy_unipro_core_hw_ctl_clk] = &gcc_ufs_phy_unipro_core_hw_ctl_clk.clkr, 
+ [gcc_usb30_prim_master_clk] = &gcc_usb30_prim_master_clk.clkr, + [gcc_usb30_prim_master_clk__force_mem_core_on] = + &gcc_usb30_prim_master_clk__force_mem_core_on.clkr, + [gcc_usb30_prim_master_clk_src] = &gcc_usb30_prim_master_clk_src.clkr, + [gcc_usb30_prim_mock_utmi_clk] = &gcc_usb30_prim_mock_utmi_clk.clkr, + [gcc_usb30_prim_mock_utmi_clk_src] = &gcc_usb30_prim_mock_utmi_clk_src.clkr, + [gcc_usb30_prim_mock_utmi_postdiv_clk_src] = &gcc_usb30_prim_mock_utmi_postdiv_clk_src.clkr, + [gcc_usb30_prim_sleep_clk] = &gcc_usb30_prim_sleep_clk.clkr, + [gcc_usb30_sec_master_clk] = &gcc_usb30_sec_master_clk.clkr, + [gcc_usb30_sec_master_clk__force_mem_core_on] = + &gcc_usb30_sec_master_clk__force_mem_core_on.clkr, + [gcc_usb30_sec_master_clk_src] = &gcc_usb30_sec_master_clk_src.clkr, + [gcc_usb30_sec_mock_utmi_clk] = &gcc_usb30_sec_mock_utmi_clk.clkr, + [gcc_usb30_sec_mock_utmi_clk_src] = &gcc_usb30_sec_mock_utmi_clk_src.clkr, + [gcc_usb30_sec_mock_utmi_postdiv_clk_src] = &gcc_usb30_sec_mock_utmi_postdiv_clk_src.clkr, + [gcc_usb30_sec_sleep_clk] = &gcc_usb30_sec_sleep_clk.clkr, + [gcc_usb3_prim_phy_aux_clk] = &gcc_usb3_prim_phy_aux_clk.clkr, + [gcc_usb3_prim_phy_aux_clk_src] = &gcc_usb3_prim_phy_aux_clk_src.clkr, + [gcc_usb3_prim_phy_com_aux_clk] = &gcc_usb3_prim_phy_com_aux_clk.clkr, + [gcc_usb3_prim_phy_pipe_clk] = &gcc_usb3_prim_phy_pipe_clk.clkr, + [gcc_usb3_prim_phy_pipe_clk_src] = &gcc_usb3_prim_phy_pipe_clk_src.clkr, + [gcc_usb3_sec_clkref_en] = &gcc_usb3_sec_clkref_en.clkr, + [gcc_usb3_sec_phy_aux_clk] = &gcc_usb3_sec_phy_aux_clk.clkr, + [gcc_usb3_sec_phy_aux_clk_src] = &gcc_usb3_sec_phy_aux_clk_src.clkr, + [gcc_usb3_sec_phy_com_aux_clk] = &gcc_usb3_sec_phy_com_aux_clk.clkr, + [gcc_usb3_sec_phy_pipe_clk] = &gcc_usb3_sec_phy_pipe_clk.clkr, + [gcc_usb3_sec_phy_pipe_clk_src] = &gcc_usb3_sec_phy_pipe_clk_src.clkr, + [gcc_video_axi0_clk] = &gcc_video_axi0_clk.clkr, + [gcc_video_axi1_clk] = &gcc_video_axi1_clk.clkr, +}; + +static const struct qcom_reset_map 
gcc_sm8350_resets[] = { + [gcc_camera_bcr] = { 0x26000 }, + [gcc_display_bcr] = { 0x27000 }, + [gcc_gpu_bcr] = { 0x71000 }, + [gcc_mmss_bcr] = { 0xb000 }, + [gcc_pcie_0_bcr] = { 0x6b000 }, + [gcc_pcie_0_link_down_bcr] = { 0x6c014 }, + [gcc_pcie_0_nocsr_com_phy_bcr] = { 0x6c020 }, + [gcc_pcie_0_phy_bcr] = { 0x6c01c }, + [gcc_pcie_0_phy_nocsr_com_phy_bcr] = { 0x6c028 }, + [gcc_pcie_1_bcr] = { 0x8d000 }, + [gcc_pcie_1_link_down_bcr] = { 0x8e014 }, + [gcc_pcie_1_nocsr_com_phy_bcr] = { 0x8e020 }, + [gcc_pcie_1_phy_bcr] = { 0x8e01c }, + [gcc_pcie_1_phy_nocsr_com_phy_bcr] = { 0x8e000 }, + [gcc_pcie_phy_cfg_ahb_bcr] = { 0x6f00c }, + [gcc_pcie_phy_com_bcr] = { 0x6f010 }, + [gcc_pdm_bcr] = { 0x33000 }, + [gcc_qupv3_wrapper_0_bcr] = { 0x17000 }, + [gcc_qupv3_wrapper_1_bcr] = { 0x18000 }, + [gcc_qupv3_wrapper_2_bcr] = { 0x1e000 }, + [gcc_qusb2phy_prim_bcr] = { 0x12000 }, + [gcc_qusb2phy_sec_bcr] = { 0x12004 }, + [gcc_sdcc2_bcr] = { 0x14000 }, + [gcc_sdcc4_bcr] = { 0x16000 }, + [gcc_ufs_card_bcr] = { 0x75000 }, + [gcc_ufs_phy_bcr] = { 0x77000 }, + [gcc_usb30_prim_bcr] = { 0xf000 }, + [gcc_usb30_sec_bcr] = { 0x10000 }, + [gcc_usb3_dp_phy_prim_bcr] = { 0x50008 }, + [gcc_usb3_dp_phy_sec_bcr] = { 0x50014 }, + [gcc_usb3_phy_prim_bcr] = { 0x50000 }, + [gcc_usb3_phy_sec_bcr] = { 0x5000c }, + [gcc_usb3phy_phy_prim_bcr] = { 0x50004 }, + [gcc_usb3phy_phy_sec_bcr] = { 0x50010 }, + [gcc_usb_phy_cfg_ahb2phy_bcr] = { 0x6a000 }, + [gcc_video_axi0_clk_ares] = { 0x28010, 2 }, + [gcc_video_axi1_clk_ares] = { 0x28018, 2 }, + [gcc_video_bcr] = { 0x28000 }, +}; + +static const struct clk_rcg_dfs_data gcc_dfs_clocks[] = { + define_rcg_dfs(gcc_qupv3_wrap0_s0_clk_src), + define_rcg_dfs(gcc_qupv3_wrap0_s1_clk_src), + define_rcg_dfs(gcc_qupv3_wrap0_s2_clk_src), + define_rcg_dfs(gcc_qupv3_wrap0_s3_clk_src), + define_rcg_dfs(gcc_qupv3_wrap0_s4_clk_src), + define_rcg_dfs(gcc_qupv3_wrap0_s5_clk_src), + define_rcg_dfs(gcc_qupv3_wrap0_s6_clk_src), + define_rcg_dfs(gcc_qupv3_wrap0_s7_clk_src), + 
define_rcg_dfs(gcc_qupv3_wrap1_s0_clk_src), + define_rcg_dfs(gcc_qupv3_wrap1_s1_clk_src), + define_rcg_dfs(gcc_qupv3_wrap1_s2_clk_src), + define_rcg_dfs(gcc_qupv3_wrap1_s3_clk_src), + define_rcg_dfs(gcc_qupv3_wrap1_s4_clk_src), + define_rcg_dfs(gcc_qupv3_wrap1_s5_clk_src), + define_rcg_dfs(gcc_qupv3_wrap2_s0_clk_src), + define_rcg_dfs(gcc_qupv3_wrap2_s1_clk_src), + define_rcg_dfs(gcc_qupv3_wrap2_s2_clk_src), + define_rcg_dfs(gcc_qupv3_wrap2_s3_clk_src), + define_rcg_dfs(gcc_qupv3_wrap2_s4_clk_src), + define_rcg_dfs(gcc_qupv3_wrap2_s5_clk_src), +}; + +static const struct regmap_config gcc_sm8350_regmap_config = { + .reg_bits = 32, + .reg_stride = 4, + .val_bits = 32, + .max_register = 0x9c100, + .fast_io = true, +}; + +static const struct qcom_cc_desc gcc_sm8350_desc = { + .config = &gcc_sm8350_regmap_config, + .clks = gcc_sm8350_clocks, + .num_clks = array_size(gcc_sm8350_clocks), + .resets = gcc_sm8350_resets, + .num_resets = array_size(gcc_sm8350_resets), +}; + +static const struct of_device_id gcc_sm8350_match_table[] = { + { .compatible = "qcom,gcc-sm8350" }, + { } +}; +module_device_table(of, gcc_sm8350_match_table); + +static int gcc_sm8350_probe(struct platform_device *pdev) +{ + struct regmap *regmap; + int ret; + + regmap = qcom_cc_map(pdev, &gcc_sm8350_desc); + if (is_err(regmap)) { + dev_err(&pdev->dev, "failed to map gcc registers "); + return ptr_err(regmap); + } + + /* + * keep the critical clock always-on + * gcc_camera_ahb_clk, gcc_camera_xo_clk, gcc_disp_ahb_clk, gcc_disp_xo_clk, + * gcc_gpu_cfg_ahb_clk, gcc_video_ahb_clk, gcc_video_xo_clk + */ + regmap_update_bits(regmap, 0x26004, bit(0), bit(0)); + regmap_update_bits(regmap, 0x26018, bit(0), bit(0)); + regmap_update_bits(regmap, 0x27004, bit(0), bit(0)); + regmap_update_bits(regmap, 0x2701c, bit(0), bit(0)); + regmap_update_bits(regmap, 0x71004, bit(0), bit(0)); + regmap_update_bits(regmap, 0x28004, bit(0), bit(0)); + regmap_update_bits(regmap, 0x28020, bit(0), bit(0)); + + ret = 
qcom_cc_register_rcg_dfs(regmap, gcc_dfs_clocks, array_size(gcc_dfs_clocks)); + if (ret) + return ret; + + /* force_mem_core_on for ufs phy ice core clocks */ + regmap_update_bits(regmap, gcc_ufs_phy_ice_core_clk.halt_reg, bit(14), bit(14)); + + return qcom_cc_really_probe(pdev, &gcc_sm8350_desc, regmap); +} + +static struct platform_driver gcc_sm8350_driver = { + .probe = gcc_sm8350_probe, + .driver = { + .name = "sm8350-gcc", + .of_match_table = gcc_sm8350_match_table, + }, +}; + +static int __init gcc_sm8350_init(void) +{ + return platform_driver_register(&gcc_sm8350_driver); +} +subsys_initcall(gcc_sm8350_init); + +static void __exit gcc_sm8350_exit(void) +{ + platform_driver_unregister(&gcc_sm8350_driver); +} +module_exit(gcc_sm8350_exit); + +module_description("qti gcc sm8350 driver"); +module_license("gpl v2");
|
Clock
|
44c20c9ed37fa60e2a6df3f5aefa7b237b7839fb
|
vivek aknurwar bjorn andersson bjorn andersson linaro org
|
drivers
|
clk
|
qcom
|
clk: qcom: gcc: add global clock controller driver for sc8180x
|
add clocks, resets and some of the gdscs provided by the global clock controller found in the qualcomm sc8180x platform.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add global clock controller driver for sc8180x
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['qcom', 'gcc']
|
['kconfig', 'c', 'makefile']
| 3
| 4,639
| 0
|
--- diff --git a/drivers/clk/qcom/kconfig b/drivers/clk/qcom/kconfig --- a/drivers/clk/qcom/kconfig +++ b/drivers/clk/qcom/kconfig +config sc_gcc_8180x + tristate "sc8180x global clock controller" + select qcom_gdsc + depends on common_clk_qcom + help + support for the global clock controller on sc8180x devices. + say y if you want to use peripheral devices such as uart, spi, + i2c, usb, ufs, sdcc, etc. + diff --git a/drivers/clk/qcom/makefile b/drivers/clk/qcom/makefile --- a/drivers/clk/qcom/makefile +++ b/drivers/clk/qcom/makefile +obj-$(config_sc_gcc_8180x) += gcc-sc8180x.o diff --git a/drivers/clk/qcom/gcc-sc8180x.c b/drivers/clk/qcom/gcc-sc8180x.c --- /dev/null +++ b/drivers/clk/qcom/gcc-sc8180x.c +// spdx-license-identifier: gpl-2.0 +/* + * copyright (c) 2018-2019, the linux foundation. all rights reserved. + * copyright (c) 2020-2021, linaro ltd. + */ + +#include <linux/bitops.h> +#include <linux/clk-provider.h> +#include <linux/err.h> +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/of.h> +#include <linux/of_device.h> +#include <linux/platform_device.h> +#include <linux/regmap.h> +#include <linux/reset-controller.h> + +#include <dt-bindings/clock/qcom,gcc-sc8180x.h> + +#include "common.h" +#include "clk-alpha-pll.h" +#include "clk-branch.h" +#include "clk-pll.h" +#include "clk-rcg.h" +#include "clk-regmap.h" +#include "gdsc.h" +#include "reset.h" + +enum { + p_aud_ref_clk, + p_bi_tcxo, + p_gpll0_out_even, + p_gpll0_out_main, + p_gpll1_out_main, + p_gpll2_out_main, + p_gpll4_out_main, + p_gpll5_out_main, + p_gpll7_out_main, + p_gpll9_out_main, + p_sleep_clk, +}; + +static struct pll_vco trion_vco[] = { + { 249600000, 2000000000, 0 }, +}; + +static struct clk_alpha_pll gpll0 = { + .offset = 0x0, + .regs = clk_alpha_pll_regs[clk_alpha_pll_type_trion], + .vco_table = trion_vco, + .num_vco = array_size(trion_vco), + .clkr = { + .enable_reg = 0x52000, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gpll0", + 
.parent_data = &(const struct clk_parent_data){ + .fw_name = "bi_tcxo", + }, + .num_parents = 1, + .ops = &clk_alpha_pll_fixed_trion_ops, + }, + }, +}; + +static const struct clk_div_table post_div_table_trion_even[] = { + { 0x0, 1 }, + { 0x1, 2 }, + { 0x3, 4 }, + { 0x7, 8 }, + { } +}; + +static struct clk_alpha_pll_postdiv gpll0_out_even = { + .offset = 0x0, + .post_div_shift = 8, + .post_div_table = post_div_table_trion_even, + .num_post_div = array_size(post_div_table_trion_even), + .regs = clk_alpha_pll_regs[clk_alpha_pll_type_trion], + .width = 4, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gpll0_out_even", + .parent_hws = (const struct clk_hw *[]){ &gpll0.clkr.hw }, + .num_parents = 1, + .ops = &clk_alpha_pll_postdiv_trion_ops, + }, +}; + +static struct clk_alpha_pll gpll1 = { + .offset = 0x1000, + .regs = clk_alpha_pll_regs[clk_alpha_pll_type_trion], + .vco_table = trion_vco, + .num_vco = array_size(trion_vco), + .clkr = { + .enable_reg = 0x52000, + .enable_mask = bit(1), + .hw.init = &(struct clk_init_data){ + .name = "gpll1", + .parent_data = &(const struct clk_parent_data){ + .fw_name = "bi_tcxo", + }, + .num_parents = 1, + .ops = &clk_alpha_pll_fixed_trion_ops, + }, + }, +}; + +static struct clk_alpha_pll gpll4 = { + .offset = 0x76000, + .regs = clk_alpha_pll_regs[clk_alpha_pll_type_trion], + .vco_table = trion_vco, + .num_vco = array_size(trion_vco), + .clkr = { + .enable_reg = 0x52000, + .enable_mask = bit(4), + .hw.init = &(struct clk_init_data){ + .name = "gpll4", + .parent_data = &(const struct clk_parent_data){ + .fw_name = "bi_tcxo", + }, + .num_parents = 1, + .ops = &clk_alpha_pll_fixed_trion_ops, + }, + }, +}; + +static struct clk_alpha_pll gpll7 = { + .offset = 0x1a000, + .regs = clk_alpha_pll_regs[clk_alpha_pll_type_trion], + .vco_table = trion_vco, + .num_vco = array_size(trion_vco), + .clkr = { + .enable_reg = 0x52000, + .enable_mask = bit(7), + .hw.init = &(struct clk_init_data){ + .name = "gpll7", + .parent_data = &(const struct 
clk_parent_data){ + .fw_name = "bi_tcxo", + }, + .num_parents = 1, + .ops = &clk_alpha_pll_fixed_trion_ops, + }, + }, +}; + +static const struct parent_map gcc_parent_map_0[] = { + { p_bi_tcxo, 0 }, + { p_gpll0_out_main, 1 }, + { p_gpll0_out_even, 6 }, +}; + +static const struct clk_parent_data gcc_parents_0[] = { + { .fw_name = "bi_tcxo" }, + { .hw = &gpll0.clkr.hw }, + { .hw = &gpll0_out_even.clkr.hw }, +}; + +static const struct parent_map gcc_parent_map_1[] = { + { p_bi_tcxo, 0 }, + { p_gpll0_out_main, 1 }, + { p_sleep_clk, 5 }, + { p_gpll0_out_even, 6 }, +}; + +static const struct clk_parent_data gcc_parents_1[] = { + { .fw_name = "bi_tcxo", }, + { .hw = &gpll0.clkr.hw }, + { .fw_name = "sleep_clk", }, + { .hw = &gpll0_out_even.clkr.hw }, +}; + +static const struct parent_map gcc_parent_map_2[] = { + { p_bi_tcxo, 0 }, + { p_sleep_clk, 5 }, +}; + +static const struct clk_parent_data gcc_parents_2[] = { + { .fw_name = "bi_tcxo", }, + { .fw_name = "sleep_clk", }, +}; + +static const struct parent_map gcc_parent_map_3[] = { + { p_bi_tcxo, 0 }, + { p_gpll0_out_main, 1 }, + { p_gpll2_out_main, 2 }, + { p_gpll5_out_main, 3 }, + { p_gpll1_out_main, 4 }, + { p_gpll4_out_main, 5 }, + { p_gpll0_out_even, 6 }, +}; + +static const struct clk_parent_data gcc_parents_3[] = { + { .fw_name = "bi_tcxo", }, + { .hw = &gpll0.clkr.hw }, + { .name = "gpll2" }, + { .name = "gpll5" }, + { .hw = &gpll1.clkr.hw }, + { .hw = &gpll4.clkr.hw }, + { .hw = &gpll0_out_even.clkr.hw }, +}; + +static const struct parent_map gcc_parent_map_4[] = { + { p_bi_tcxo, 0 }, +}; + +static const struct clk_parent_data gcc_parents_4[] = { + { .fw_name = "bi_tcxo", }, +}; + +static const struct parent_map gcc_parent_map_5[] = { + { p_bi_tcxo, 0 }, + { p_gpll0_out_main, 1 }, +}; + +static const struct clk_parent_data gcc_parents_5[] = { + { .fw_name = "bi_tcxo", }, + { .hw = &gpll0.clkr.hw }, +}; + +static const struct parent_map gcc_parent_map_6[] = { + { p_bi_tcxo, 0 }, + { p_gpll0_out_main, 1 }, + { 
p_gpll7_out_main, 3 }, + { p_gpll0_out_even, 6 }, +}; + +static const struct clk_parent_data gcc_parents_6[] = { + { .fw_name = "bi_tcxo", }, + { .hw = &gpll0.clkr.hw }, + { .hw = &gpll7.clkr.hw }, + { .hw = &gpll0_out_even.clkr.hw }, +}; + +static const struct parent_map gcc_parent_map_7[] = { + { p_bi_tcxo, 0 }, + { p_gpll0_out_main, 1 }, + { p_gpll9_out_main, 2 }, + { p_gpll4_out_main, 5 }, + { p_gpll0_out_even, 6 }, +}; + +static const struct clk_parent_data gcc_parents_7[] = { + { .fw_name = "bi_tcxo", }, + { .hw = &gpll0.clkr.hw }, + { .name = "gppl9" }, + { .hw = &gpll4.clkr.hw }, + { .hw = &gpll0_out_even.clkr.hw }, +}; + +static const struct parent_map gcc_parent_map_8[] = { + { p_bi_tcxo, 0 }, + { p_gpll0_out_main, 1 }, + { p_aud_ref_clk, 2 }, + { p_gpll0_out_even, 6 }, +}; + +static const struct clk_parent_data gcc_parents_8[] = { + { .fw_name = "bi_tcxo", }, + { .hw = &gpll0.clkr.hw }, + { .name = "aud_ref_clk" }, + { .hw = &gpll0_out_even.clkr.hw }, +}; + +static const struct freq_tbl ftbl_gcc_cpuss_ahb_clk_src[] = { + f(19200000, p_bi_tcxo, 1, 0, 0), + f(50000000, p_gpll0_out_main, 12, 0, 0), + f(100000000, p_gpll0_out_main, 6, 0, 0), + { } +}; + +static struct clk_rcg2 gcc_cpuss_ahb_clk_src = { + .cmd_rcgr = 0x48014, + .mnd_width = 0, + .hid_width = 5, + .parent_map = gcc_parent_map_0, + .freq_tbl = ftbl_gcc_cpuss_ahb_clk_src, + .clkr.hw.init = &(struct clk_init_data){ + .name = "gcc_cpuss_ahb_clk_src", + .parent_data = gcc_parents_0, + .num_parents = 3, + .flags = clk_set_rate_parent, + .ops = &clk_rcg2_ops, + }, +}; + +static const struct freq_tbl ftbl_gcc_emac_ptp_clk_src[] = { + f(19200000, p_bi_tcxo, 1, 0, 0), + f(50000000, p_gpll0_out_even, 6, 0, 0), + f(125000000, p_gpll7_out_main, 4, 0, 0), + f(250000000, p_gpll7_out_main, 2, 0, 0), + { } +}; + +static struct clk_rcg2 gcc_emac_ptp_clk_src = { + .cmd_rcgr = 0x6038, + .mnd_width = 0, + .hid_width = 5, + .parent_map = gcc_parent_map_6, + .freq_tbl = ftbl_gcc_emac_ptp_clk_src, + .clkr.hw.init = 
&(struct clk_init_data){
+		.name = "gcc_emac_ptp_clk_src",
+		.parent_data = gcc_parents_6,
+		.num_parents = 4,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static const struct freq_tbl ftbl_gcc_emac_rgmii_clk_src[] = {
+	F(2500000, P_BI_TCXO, 1, 25, 192),
+	F(5000000, P_BI_TCXO, 1, 25, 96),
+	F(19200000, P_BI_TCXO, 1, 0, 0),
+	F(25000000, P_GPLL0_OUT_EVEN, 12, 0, 0),
+	F(50000000, P_GPLL0_OUT_EVEN, 6, 0, 0),
+	F(125000000, P_GPLL7_OUT_MAIN, 4, 0, 0),
+	F(250000000, P_GPLL7_OUT_MAIN, 2, 0, 0),
+	{ }
+};
+
+static struct clk_rcg2 gcc_emac_rgmii_clk_src = {
+	.cmd_rcgr = 0x601c,
+	.mnd_width = 8,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_6,
+	.freq_tbl = ftbl_gcc_emac_rgmii_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_emac_rgmii_clk_src",
+		.parent_data = gcc_parents_6,
+		.num_parents = 4,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static const struct freq_tbl ftbl_gcc_gp1_clk_src[] = {
+	F(19200000, P_BI_TCXO, 1, 0, 0),
+	F(25000000, P_GPLL0_OUT_EVEN, 12, 0, 0),
+	F(50000000, P_GPLL0_OUT_EVEN, 6, 0, 0),
+	F(100000000, P_GPLL0_OUT_MAIN, 6, 0, 0),
+	F(200000000, P_GPLL0_OUT_MAIN, 3, 0, 0),
+	{ }
+};
+
+static struct clk_rcg2 gcc_gp1_clk_src = {
+	.cmd_rcgr = 0x64004,
+	.mnd_width = 8,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_1,
+	.freq_tbl = ftbl_gcc_gp1_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_gp1_clk_src",
+		.parent_data = gcc_parents_1,
+		.num_parents = 4,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_gp2_clk_src = {
+	.cmd_rcgr = 0x65004,
+	.mnd_width = 8,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_1,
+	.freq_tbl = ftbl_gcc_gp1_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_gp2_clk_src",
+		.parent_data = gcc_parents_1,
+		.num_parents = 4,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_gp3_clk_src = {
+	.cmd_rcgr = 0x66004,
+	.mnd_width = 8,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_1,
+	.freq_tbl = ftbl_gcc_gp1_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_gp3_clk_src",
+		.parent_data = gcc_parents_1,
+		.num_parents = 4,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_gp4_clk_src = {
+	.cmd_rcgr = 0xbe004,
+	.mnd_width = 8,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_1,
+	.freq_tbl = ftbl_gcc_gp1_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_gp4_clk_src",
+		.parent_data = gcc_parents_1,
+		.num_parents = 4,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_gp5_clk_src = {
+	.cmd_rcgr = 0xbf004,
+	.mnd_width = 8,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_1,
+	.freq_tbl = ftbl_gcc_gp1_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_gp5_clk_src",
+		.parent_data = gcc_parents_1,
+		.num_parents = 4,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static const struct freq_tbl ftbl_gcc_npu_axi_clk_src[] = {
+	F(19200000, P_BI_TCXO, 1, 0, 0),
+	F(60000000, P_GPLL0_OUT_EVEN, 5, 0, 0),
+	F(150000000, P_GPLL0_OUT_EVEN, 2, 0, 0),
+	F(200000000, P_GPLL0_OUT_MAIN, 3, 0, 0),
+	F(300000000, P_GPLL0_OUT_MAIN, 2, 0, 0),
+	F(403000000, P_GPLL4_OUT_MAIN, 2, 0, 0),
+	F(533000000, P_GPLL1_OUT_MAIN, 2, 0, 0),
+	{ }
+};
+
+static struct clk_rcg2 gcc_npu_axi_clk_src = {
+	.cmd_rcgr = 0x4d014,
+	.mnd_width = 0,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_3,
+	.freq_tbl = ftbl_gcc_npu_axi_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_npu_axi_clk_src",
+		.parent_data = gcc_parents_3,
+		.num_parents = 7,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static const struct freq_tbl ftbl_gcc_pcie_0_aux_clk_src[] = {
+	F(9600000, P_BI_TCXO, 2, 0, 0),
+	F(19200000, P_BI_TCXO, 1, 0, 0),
+	{ }
+};
+
+static struct clk_rcg2 gcc_pcie_0_aux_clk_src = {
+	.cmd_rcgr = 0x6b02c,
+	.mnd_width = 16,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_2,
+	.freq_tbl = ftbl_gcc_pcie_0_aux_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_pcie_0_aux_clk_src",
+		.parent_data = gcc_parents_2,
+		.num_parents = 2,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_pcie_1_aux_clk_src = {
+	.cmd_rcgr = 0x8d02c,
+	.mnd_width = 16,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_2,
+	.freq_tbl = ftbl_gcc_pcie_0_aux_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_pcie_1_aux_clk_src",
+		.parent_data = gcc_parents_2,
+		.num_parents = 2,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_pcie_2_aux_clk_src = {
+	.cmd_rcgr = 0x9d02c,
+	.mnd_width = 16,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_2,
+	.freq_tbl = ftbl_gcc_pcie_0_aux_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_pcie_2_aux_clk_src",
+		.parent_data = gcc_parents_2,
+		.num_parents = 2,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_pcie_3_aux_clk_src = {
+	.cmd_rcgr = 0xa302c,
+	.mnd_width = 16,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_2,
+	.freq_tbl = ftbl_gcc_pcie_0_aux_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_pcie_3_aux_clk_src",
+		.parent_data = gcc_parents_2,
+		.num_parents = 2,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static const struct freq_tbl ftbl_gcc_pcie_phy_refgen_clk_src[] = {
+	F(19200000, P_BI_TCXO, 1, 0, 0),
+	F(100000000, P_GPLL0_OUT_MAIN, 6, 0, 0),
+	{ }
+};
+
+static struct clk_rcg2 gcc_pcie_phy_refgen_clk_src = {
+	.cmd_rcgr = 0x6f014,
+	.mnd_width = 0,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_pcie_phy_refgen_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_pcie_phy_refgen_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static const struct freq_tbl ftbl_gcc_pdm2_clk_src[] = {
+	F(9600000, P_BI_TCXO, 2, 0, 0),
+	F(19200000, P_BI_TCXO, 1, 0, 0),
+	F(60000000, P_GPLL0_OUT_MAIN, 10, 0, 0),
+	{ }
+};
+
+static struct clk_rcg2 gcc_pdm2_clk_src = {
+	.cmd_rcgr = 0x33010,
+	.mnd_width = 0,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_pdm2_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_pdm2_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static const struct freq_tbl ftbl_gcc_qspi_1_core_clk_src[] = {
+	F(19200000, P_BI_TCXO, 1, 0, 0),
+	F(75000000, P_GPLL0_OUT_EVEN, 4, 0, 0),
+	F(150000000, P_GPLL0_OUT_MAIN, 4, 0, 0),
+	F(300000000, P_GPLL0_OUT_MAIN, 2, 0, 0),
+	{ }
+};
+
+static struct clk_rcg2 gcc_qspi_1_core_clk_src = {
+	.cmd_rcgr = 0x4a00c,
+	.mnd_width = 0,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_qspi_1_core_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_qspi_1_core_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_qspi_core_clk_src = {
+	.cmd_rcgr = 0x4b008,
+	.mnd_width = 0,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_qspi_1_core_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_qspi_core_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static const struct freq_tbl ftbl_gcc_qupv3_wrap0_s0_clk_src[] = {
+	F(7372800, P_GPLL0_OUT_EVEN, 1, 384, 15625),
+	F(14745600, P_GPLL0_OUT_EVEN, 1, 768, 15625),
+	F(19200000, P_BI_TCXO, 1, 0, 0),
+	F(29491200, P_GPLL0_OUT_EVEN, 1, 1536, 15625),
+	F(32000000, P_GPLL0_OUT_EVEN, 1, 8, 75),
+	F(48000000, P_GPLL0_OUT_EVEN, 1, 4, 25),
+	F(50000000, P_GPLL0_OUT_EVEN, 6, 0, 0),
+	F(64000000, P_GPLL0_OUT_EVEN, 1, 16, 75),
+	F(75000000, P_GPLL0_OUT_EVEN, 4, 0, 0),
+	F(80000000, P_GPLL0_OUT_EVEN, 1, 4, 15),
+	F(96000000, P_GPLL0_OUT_EVEN, 1, 8, 25),
+	F(100000000, P_GPLL0_OUT_MAIN, 6, 0, 0),
+	F(102400000, P_GPLL0_OUT_EVEN, 1, 128, 375),
+	F(112000000, P_GPLL0_OUT_EVEN, 1, 28, 75),
+	F(117964800, P_GPLL0_OUT_EVEN, 1, 6144, 15625),
+	F(120000000, P_GPLL0_OUT_EVEN, 2.5, 0, 0),
+	F(128000000, P_GPLL0_OUT_MAIN, 1, 16, 75),
+	{ }
+};
+
+static struct clk_rcg2 gcc_qupv3_wrap0_s0_clk_src = {
+	.cmd_rcgr = 0x17148,
+	.mnd_width = 16,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_qupv3_wrap0_s0_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_qupv3_wrap0_s1_clk_src = {
+	.cmd_rcgr = 0x17278,
+	.mnd_width = 16,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_qupv3_wrap0_s1_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_qupv3_wrap0_s2_clk_src = {
+	.cmd_rcgr = 0x173a8,
+	.mnd_width = 16,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_qupv3_wrap0_s2_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_qupv3_wrap0_s3_clk_src = {
+	.cmd_rcgr = 0x174d8,
+	.mnd_width = 16,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_qupv3_wrap0_s3_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_qupv3_wrap0_s4_clk_src = {
+	.cmd_rcgr = 0x17608,
+	.mnd_width = 16,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_qupv3_wrap0_s4_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_qupv3_wrap0_s5_clk_src = {
+	.cmd_rcgr = 0x17738,
+	.mnd_width = 16,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_qupv3_wrap0_s5_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_qupv3_wrap0_s6_clk_src = {
+	.cmd_rcgr = 0x17868,
+	.mnd_width = 16,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_qupv3_wrap0_s6_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_qupv3_wrap0_s7_clk_src = {
+	.cmd_rcgr = 0x17998,
+	.mnd_width = 16,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_qupv3_wrap0_s7_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_qupv3_wrap1_s0_clk_src = {
+	.cmd_rcgr = 0x18148,
+	.mnd_width = 16,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_qupv3_wrap1_s0_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_qupv3_wrap1_s1_clk_src = {
+	.cmd_rcgr = 0x18278,
+	.mnd_width = 16,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_qupv3_wrap1_s1_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_qupv3_wrap1_s2_clk_src = {
+	.cmd_rcgr = 0x183a8,
+	.mnd_width = 16,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_qupv3_wrap1_s2_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_qupv3_wrap1_s3_clk_src = {
+	.cmd_rcgr = 0x184d8,
+	.mnd_width = 16,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_qupv3_wrap1_s3_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_qupv3_wrap1_s4_clk_src = {
+	.cmd_rcgr = 0x18608,
+	.mnd_width = 16,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_qupv3_wrap1_s4_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_qupv3_wrap1_s5_clk_src = {
+	.cmd_rcgr = 0x18738,
+	.mnd_width = 16,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_qupv3_wrap1_s5_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_qupv3_wrap2_s0_clk_src = {
+	.cmd_rcgr = 0x1e148,
+	.mnd_width = 16,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_qupv3_wrap2_s0_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_qupv3_wrap2_s1_clk_src = {
+	.cmd_rcgr = 0x1e278,
+	.mnd_width = 16,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_qupv3_wrap2_s1_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_qupv3_wrap2_s2_clk_src = {
+	.cmd_rcgr = 0x1e3a8,
+	.mnd_width = 16,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_qupv3_wrap2_s2_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_qupv3_wrap2_s3_clk_src = {
+	.cmd_rcgr = 0x1e4d8,
+	.mnd_width = 16,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_qupv3_wrap2_s3_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_qupv3_wrap2_s4_clk_src = {
+	.cmd_rcgr = 0x1e608,
+	.mnd_width = 16,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_qupv3_wrap2_s4_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_qupv3_wrap2_s5_clk_src = {
+	.cmd_rcgr = 0x1e738,
+	.mnd_width = 16,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_qupv3_wrap0_s0_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_qupv3_wrap2_s5_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static const struct freq_tbl ftbl_gcc_sdcc2_apps_clk_src[] = {
+	F(400000, P_BI_TCXO, 12, 1, 4),
+	F(9600000, P_BI_TCXO, 2, 0, 0),
+	F(19200000, P_BI_TCXO, 1, 0, 0),
+	F(25000000, P_GPLL0_OUT_MAIN, 12, 1, 2),
+	F(50000000, P_GPLL0_OUT_MAIN, 12, 0, 0),
+	F(100000000, P_GPLL0_OUT_MAIN, 6, 0, 0),
+	F(200000000, P_GPLL0_OUT_MAIN, 3, 0, 0),
+	{ }
+};
+
+static struct clk_rcg2 gcc_sdcc2_apps_clk_src = {
+	.cmd_rcgr = 0x1400c,
+	.mnd_width = 8,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_7,
+	.freq_tbl = ftbl_gcc_sdcc2_apps_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_sdcc2_apps_clk_src",
+		.parent_data = gcc_parents_7,
+		.num_parents = 5,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_floor_ops,
+	},
+};
+
+static const struct freq_tbl ftbl_gcc_sdcc4_apps_clk_src[] = {
+	F(400000, P_BI_TCXO, 12, 1, 4),
+	F(9600000, P_BI_TCXO, 2, 0, 0),
+	F(19200000, P_BI_TCXO, 1, 0, 0),
+	F(37500000, P_GPLL0_OUT_MAIN, 16, 0, 0),
+	F(50000000, P_GPLL0_OUT_MAIN, 12, 0, 0),
+	F(75000000, P_GPLL0_OUT_MAIN, 8, 0, 0),
+	{ }
+};
+
+static struct clk_rcg2 gcc_sdcc4_apps_clk_src = {
+	.cmd_rcgr = 0x1600c,
+	.mnd_width = 8,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_5,
+	.freq_tbl = ftbl_gcc_sdcc4_apps_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_sdcc4_apps_clk_src",
+		.parent_data = gcc_parents_5,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_floor_ops,
+	},
+};
+
+static const struct freq_tbl ftbl_gcc_tsif_ref_clk_src[] = {
+	F(105495, P_BI_TCXO, 2, 1, 91),
+	{ }
+};
+
+static struct clk_rcg2 gcc_tsif_ref_clk_src = {
+	.cmd_rcgr = 0x36010,
+	.mnd_width = 8,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_8,
+	.freq_tbl = ftbl_gcc_tsif_ref_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_tsif_ref_clk_src",
+		.parent_data = gcc_parents_8,
+		.num_parents = 4,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static const struct freq_tbl ftbl_gcc_ufs_card_2_axi_clk_src[] = {
+	F(37500000, P_GPLL0_OUT_EVEN, 8, 0, 0),
+	F(75000000, P_GPLL0_OUT_EVEN, 4, 0, 0),
+	F(150000000, P_GPLL0_OUT_MAIN, 4, 0, 0),
+	F(300000000, P_GPLL0_OUT_MAIN, 2, 0, 0),
+	{ }
+};
+
+static struct clk_rcg2 gcc_ufs_card_2_axi_clk_src = {
+	.cmd_rcgr = 0xa2020,
+	.mnd_width = 8,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_ufs_card_2_axi_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_ufs_card_2_axi_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_ufs_card_2_ice_core_clk_src = {
+	.cmd_rcgr = 0xa2060,
+	.mnd_width = 0,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_ufs_card_2_axi_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_ufs_card_2_ice_core_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static const struct freq_tbl ftbl_gcc_ufs_card_2_phy_aux_clk_src[] = {
+	F(19200000, P_BI_TCXO, 1, 0, 0),
+	{ }
+};
+
+static struct clk_rcg2 gcc_ufs_card_2_phy_aux_clk_src = {
+	.cmd_rcgr = 0xa2094,
+	.mnd_width = 0,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_4,
+	.freq_tbl = ftbl_gcc_ufs_card_2_phy_aux_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_ufs_card_2_phy_aux_clk_src",
+		.parent_data = gcc_parents_4,
+		.num_parents = 1,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_ufs_card_2_unipro_core_clk_src = {
+	.cmd_rcgr = 0xa2078,
+	.mnd_width = 0,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_ufs_card_2_axi_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_ufs_card_2_unipro_core_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static const struct freq_tbl ftbl_gcc_ufs_card_axi_clk_src[] = {
+	F(25000000, P_GPLL0_OUT_EVEN, 12, 0, 0),
+	F(50000000, P_GPLL0_OUT_EVEN, 6, 0, 0),
+	F(100000000, P_GPLL0_OUT_MAIN, 6, 0, 0),
+	F(200000000, P_GPLL0_OUT_MAIN, 3, 0, 0),
+	F(240000000, P_GPLL0_OUT_MAIN, 2.5, 0, 0),
+	{ }
+};
+
+static struct clk_rcg2 gcc_ufs_card_axi_clk_src = {
+	.cmd_rcgr = 0x75020,
+	.mnd_width = 8,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_ufs_card_axi_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_ufs_card_axi_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static const struct freq_tbl ftbl_gcc_ufs_card_ice_core_clk_src[] = {
+	F(75000000, P_GPLL0_OUT_EVEN, 4, 0, 0),
+	F(150000000, P_GPLL0_OUT_MAIN, 4, 0, 0),
+	F(300000000, P_GPLL0_OUT_MAIN, 2, 0, 0),
+	{ }
+};
+
+static struct clk_rcg2 gcc_ufs_card_ice_core_clk_src = {
+	.cmd_rcgr = 0x75060,
+	.mnd_width = 0,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_ufs_card_ice_core_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_ufs_card_ice_core_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_ufs_card_phy_aux_clk_src = {
+	.cmd_rcgr = 0x75094,
+	.mnd_width = 0,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_4,
+	.freq_tbl = ftbl_gcc_ufs_card_2_phy_aux_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_ufs_card_phy_aux_clk_src",
+		.parent_data = gcc_parents_4,
+		.num_parents = 1,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static const struct freq_tbl ftbl_gcc_ufs_card_unipro_core_clk_src[] = {
+	F(37500000, P_GPLL0_OUT_EVEN, 8, 0, 0),
+	F(75000000, P_GPLL0_OUT_MAIN, 8, 0, 0),
+	F(150000000, P_GPLL0_OUT_MAIN, 4, 0, 0),
+	{ }
+};
+
+static struct clk_rcg2 gcc_ufs_card_unipro_core_clk_src = {
+	.cmd_rcgr = 0x75078,
+	.mnd_width = 0,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_ufs_card_unipro_core_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_ufs_card_unipro_core_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static const struct freq_tbl ftbl_gcc_ufs_phy_axi_clk_src[] = {
+	F(25000000, P_GPLL0_OUT_EVEN, 12, 0, 0),
+	F(37500000, P_GPLL0_OUT_EVEN, 8, 0, 0),
+	F(75000000, P_GPLL0_OUT_EVEN, 4, 0, 0),
+	F(150000000, P_GPLL0_OUT_MAIN, 4, 0, 0),
+	F(300000000, P_GPLL0_OUT_MAIN, 2, 0, 0),
+	{ }
+};
+
+static struct clk_rcg2 gcc_ufs_phy_axi_clk_src = {
+	.cmd_rcgr = 0x77020,
+	.mnd_width = 8,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_ufs_phy_axi_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_ufs_phy_axi_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_ufs_phy_ice_core_clk_src = {
+	.cmd_rcgr = 0x77060,
+	.mnd_width = 0,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_ufs_card_2_axi_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_ufs_phy_ice_core_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_ufs_phy_phy_aux_clk_src = {
+	.cmd_rcgr = 0x77094,
+	.mnd_width = 0,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_4,
+	.freq_tbl = ftbl_gcc_pcie_0_aux_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_ufs_phy_phy_aux_clk_src",
+		.parent_data = gcc_parents_4,
+		.num_parents = 1,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_ufs_phy_unipro_core_clk_src = {
+	.cmd_rcgr = 0x77078,
+	.mnd_width = 0,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_ufs_card_2_axi_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_ufs_phy_unipro_core_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static const struct freq_tbl ftbl_gcc_usb30_mp_master_clk_src[] = {
+	F(33333333, P_GPLL0_OUT_EVEN, 9, 0, 0),
+	F(66666667, P_GPLL0_OUT_EVEN, 4.5, 0, 0),
+	F(133333333, P_GPLL0_OUT_MAIN, 4.5, 0, 0),
+	F(200000000, P_GPLL0_OUT_MAIN, 3, 0, 0),
+	F(240000000, P_GPLL0_OUT_MAIN, 2.5, 0, 0),
+	{ }
+};
+
+static struct clk_rcg2 gcc_usb30_mp_master_clk_src = {
+	.cmd_rcgr = 0xa601c,
+	.mnd_width = 8,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_usb30_mp_master_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_usb30_mp_master_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static const struct freq_tbl ftbl_gcc_usb30_mp_mock_utmi_clk_src[] = {
+	F(19200000, P_BI_TCXO, 1, 0, 0),
+	F(20000000, P_GPLL0_OUT_EVEN, 15, 0, 0),
+	F(40000000, P_GPLL0_OUT_EVEN, 7.5, 0, 0),
+	F(60000000, P_GPLL0_OUT_MAIN, 10, 0, 0),
+	{ }
+};
+
+static struct clk_rcg2 gcc_usb30_mp_mock_utmi_clk_src = {
+	.cmd_rcgr = 0xa6034,
+	.mnd_width = 0,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_usb30_mp_mock_utmi_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_usb30_mp_mock_utmi_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_usb30_prim_master_clk_src = {
+	.cmd_rcgr = 0xf01c,
+	.mnd_width = 8,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_usb30_mp_master_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_usb30_prim_master_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_usb30_prim_mock_utmi_clk_src = {
+	.cmd_rcgr = 0xf034,
+	.mnd_width = 0,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_usb30_mp_mock_utmi_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_usb30_prim_mock_utmi_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_usb30_sec_master_clk_src = {
+	.cmd_rcgr = 0x1001c,
+	.mnd_width = 8,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_usb30_mp_master_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_usb30_sec_master_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_usb30_sec_mock_utmi_clk_src = {
+	.cmd_rcgr = 0x10034,
+	.mnd_width = 0,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_0,
+	.freq_tbl = ftbl_gcc_usb30_mp_mock_utmi_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_usb30_sec_mock_utmi_clk_src",
+		.parent_data = gcc_parents_0,
+		.num_parents = 3,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_usb3_mp_phy_aux_clk_src = {
+	.cmd_rcgr = 0xa6068,
+	.mnd_width = 0,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_2,
+	.freq_tbl = ftbl_gcc_ufs_card_2_phy_aux_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_usb3_mp_phy_aux_clk_src",
+		.parent_data = gcc_parents_2,
+		.num_parents = 2,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_usb3_prim_phy_aux_clk_src = {
+	.cmd_rcgr = 0xf060,
+	.mnd_width = 0,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_2,
+	.freq_tbl = ftbl_gcc_ufs_card_2_phy_aux_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_usb3_prim_phy_aux_clk_src",
+		.parent_data = gcc_parents_2,
+		.num_parents = 2,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_rcg2 gcc_usb3_sec_phy_aux_clk_src = {
+	.cmd_rcgr = 0x10060,
+	.mnd_width = 0,
+	.hid_width = 5,
+	.parent_map = gcc_parent_map_2,
+	.freq_tbl = ftbl_gcc_ufs_card_2_phy_aux_clk_src,
+	.clkr.hw.init = &(struct clk_init_data){
+		.name = "gcc_usb3_sec_phy_aux_clk_src",
+		.parent_data = gcc_parents_2,
+		.num_parents = 2,
+		.flags = CLK_SET_RATE_PARENT,
+		.ops = &clk_rcg2_ops,
+	},
+};
+
+static struct clk_branch gcc_aggre_noc_pcie_tbu_clk = {
+	.halt_reg = 0x90018,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x90018,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_aggre_noc_pcie_tbu_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_aggre_ufs_card_axi_clk = {
+	.halt_reg = 0x750c0,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0x750c0,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x750c0,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_aggre_ufs_card_axi_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_ufs_card_axi_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_aggre_ufs_card_axi_hw_ctl_clk = {
+	.halt_reg = 0x750c0,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0x750c0,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x750c0,
+		.enable_mask = BIT(1),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_aggre_ufs_card_axi_hw_ctl_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_aggre_ufs_card_axi_clk.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch_simple_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_aggre_ufs_phy_axi_clk = {
+	.halt_reg = 0x770c0,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0x770c0,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x770c0,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_aggre_ufs_phy_axi_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_ufs_phy_axi_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_aggre_ufs_phy_axi_hw_ctl_clk = {
+	.halt_reg = 0x770c0,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0x770c0,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x770c0,
+		.enable_mask = BIT(1),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_aggre_ufs_phy_axi_hw_ctl_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_aggre_ufs_phy_axi_clk.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch_simple_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_aggre_usb3_mp_axi_clk = {
+	.halt_reg = 0xa6084,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xa6084,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_aggre_usb3_mp_axi_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_usb30_mp_master_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_aggre_usb3_prim_axi_clk = {
+	.halt_reg = 0xf07c,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xf07c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_aggre_usb3_prim_axi_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_usb30_prim_master_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_aggre_usb3_sec_axi_clk = {
+	.halt_reg = 0x1007c,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x1007c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_aggre_usb3_sec_axi_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_usb30_sec_master_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_boot_rom_ahb_clk = {
+	.halt_reg = 0x38004,
+	.halt_check = BRANCH_HALT_VOTED,
+	.hwcg_reg = 0x38004,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x52004,
+		.enable_mask = BIT(10),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_boot_rom_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_camera_hf_axi_clk = {
+	.halt_reg = 0xb030,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xb030,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_camera_hf_axi_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_camera_sf_axi_clk = {
+	.halt_reg = 0xb034,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xb034,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_camera_sf_axi_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_cfg_noc_usb3_mp_axi_clk = {
+	.halt_reg = 0xa609c,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xa609c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_cfg_noc_usb3_mp_axi_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_usb30_mp_master_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_cfg_noc_usb3_prim_axi_clk = {
+	.halt_reg = 0xf078,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xf078,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_cfg_noc_usb3_prim_axi_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_usb30_prim_master_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_cfg_noc_usb3_sec_axi_clk = {
+	.halt_reg = 0x10078,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x10078,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_cfg_noc_usb3_sec_axi_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_usb30_sec_master_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+/* For CPUSS functionality the AHB clock needs to be left enabled */
+static struct clk_branch gcc_cpuss_ahb_clk = {
+	.halt_reg = 0x48000,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52004,
+		.enable_mask = BIT(21),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_cpuss_ahb_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_cpuss_ahb_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_IS_CRITICAL | CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_cpuss_rbcpr_clk = {
+	.halt_reg = 0x48008,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x48008,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_cpuss_rbcpr_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ddrss_gpu_axi_clk = {
+	.halt_reg = 0x71154,
+	.halt_check = BRANCH_VOTED,
+	.clkr = {
+		.enable_reg = 0x71154,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ddrss_gpu_axi_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_disp_hf_axi_clk = {
+	.halt_reg = 0xb038,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xb038,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_disp_hf_axi_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_disp_sf_axi_clk = {
+	.halt_reg = 0xb03c,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xb03c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_disp_sf_axi_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_emac_axi_clk = {
+	.halt_reg = 0x6010,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x6010,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_emac_axi_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_emac_ptp_clk = {
+	.halt_reg = 0x6034,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x6034,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_emac_ptp_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_emac_ptp_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_emac_rgmii_clk = {
+	.halt_reg = 0x6018,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x6018,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_emac_rgmii_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_emac_rgmii_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_emac_slv_ahb_clk = {
+	.halt_reg = 0x6014,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0x6014,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x6014,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_emac_slv_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_gp1_clk = {
+	.halt_reg = 0x64000,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x64000,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_gp1_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_gp1_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_gp2_clk = {
+	.halt_reg = 0x65000,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x65000,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_gp2_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_gp2_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_gp3_clk = {
+	.halt_reg = 0x66000,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x66000,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_gp3_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_gp3_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_gp4_clk = {
+	.halt_reg = 0xbe000,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xbe000,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_gp4_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_gp4_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_gp5_clk = {
+	.halt_reg = 0xbf000,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xbf000,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_gp5_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_gp5_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_gpu_gpll0_clk_src = {
+	.halt_check = BRANCH_HALT_DELAY,
+	.clkr = {
+		.enable_reg = 0x52004,
+		.enable_mask = BIT(15),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_gpu_gpll0_clk_src",
+			.parent_hws = (const struct clk_hw *[]){ &gpll0.clkr.hw },
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_gpu_gpll0_div_clk_src = {
+	.halt_check = BRANCH_HALT_DELAY,
+	.clkr = {
+		.enable_reg = 0x52004,
+		.enable_mask = BIT(16),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_gpu_gpll0_div_clk_src",
+			.parent_hws = (const struct clk_hw *[]){
+				&gpll0_out_even.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_gpu_memnoc_gfx_clk = {
+	.halt_reg = 0x7100c,
+	.halt_check = BRANCH_VOTED,
+	.clkr = {
+		.enable_reg = 0x7100c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_gpu_memnoc_gfx_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_gpu_snoc_dvm_gfx_clk = {
+	.halt_reg = 0x71018,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x71018,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_gpu_snoc_dvm_gfx_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_npu_at_clk = {
+	.halt_reg = 0x4d010,
+	.halt_check = BRANCH_VOTED,
+	.clkr = {
+		.enable_reg = 0x4d010,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_npu_at_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_npu_axi_clk = {
+	.halt_reg = 0x4d008,
+	.halt_check = BRANCH_VOTED,
+	.clkr = {
+		.enable_reg = 0x4d008,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_npu_axi_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_npu_axi_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_npu_gpll0_clk_src = {
+	.halt_check = BRANCH_HALT_DELAY,
+	.clkr = {
+		.enable_reg = 0x52004,
+		.enable_mask = BIT(18),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_npu_gpll0_clk_src",
+			.parent_hws = (const struct clk_hw *[]){ &gpll0.clkr.hw },
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_npu_gpll0_div_clk_src = {
+	.halt_check = BRANCH_HALT_DELAY,
+	.clkr = {
+		.enable_reg = 0x52004,
+		.enable_mask
= bit(19), + .hw.init = &(struct clk_init_data){ + .name = "gcc_npu_gpll0_div_clk_src", + .parent_hws = (const struct clk_hw *[]){ + &gpll0_out_even.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_npu_trig_clk = { + .halt_reg = 0x4d00c, + .halt_check = branch_voted, + .clkr = { + .enable_reg = 0x4d00c, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_npu_trig_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie0_phy_refgen_clk = { + .halt_reg = 0x6f02c, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x6f02c, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie0_phy_refgen_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_pcie_phy_refgen_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie1_phy_refgen_clk = { + .halt_reg = 0x6f030, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x6f030, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie1_phy_refgen_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_pcie_phy_refgen_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie2_phy_refgen_clk = { + .halt_reg = 0x6f034, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x6f034, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie2_phy_refgen_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_pcie_phy_refgen_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie3_phy_refgen_clk = { + .halt_reg = 0x6f038, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x6f038, + .enable_mask = bit(0), + .hw.init = 
&(struct clk_init_data){ + .name = "gcc_pcie3_phy_refgen_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_pcie_phy_refgen_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_0_aux_clk = { + .halt_reg = 0x6b020, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x5200c, + .enable_mask = bit(3), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_0_aux_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_pcie_0_aux_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_0_cfg_ahb_clk = { + .halt_reg = 0x6b01c, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x6b01c, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x5200c, + .enable_mask = bit(2), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_0_cfg_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_0_clkref_clk = { + .halt_reg = 0x8c00c, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x8c00c, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_0_clkref_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_0_mstr_axi_clk = { + .halt_reg = 0x6b018, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x5200c, + .enable_mask = bit(1), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_0_mstr_axi_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_0_pipe_clk = { + .halt_reg = 0x6b024, + .halt_check = branch_halt_skip, + .clkr = { + .enable_reg = 0x5200c, + .enable_mask = bit(4), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_0_pipe_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_0_slv_axi_clk = { + .halt_reg = 0x6b014, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x6b014, + .hwcg_bit = 1, 
+ .clkr = { + .enable_reg = 0x5200c, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_0_slv_axi_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_0_slv_q2a_axi_clk = { + .halt_reg = 0x6b010, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x5200c, + .enable_mask = bit(5), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_0_slv_q2a_axi_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_1_aux_clk = { + .halt_reg = 0x8d020, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52004, + .enable_mask = bit(29), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_1_aux_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_pcie_1_aux_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_1_cfg_ahb_clk = { + .halt_reg = 0x8d01c, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x8d01c, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x52004, + .enable_mask = bit(28), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_1_cfg_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_1_clkref_clk = { + .halt_reg = 0x8c02c, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x8c02c, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_1_clkref_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_1_mstr_axi_clk = { + .halt_reg = 0x8d018, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52004, + .enable_mask = bit(27), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_1_mstr_axi_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_1_pipe_clk = { + .halt_reg = 0x8d024, + .halt_check = branch_halt_skip, + .clkr = { + .enable_reg = 0x52004, + .enable_mask = bit(30), + .hw.init = &(struct 
clk_init_data){ + .name = "gcc_pcie_1_pipe_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_1_slv_axi_clk = { + .halt_reg = 0x8d014, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x8d014, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x52004, + .enable_mask = bit(26), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_1_slv_axi_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_1_slv_q2a_axi_clk = { + .halt_reg = 0x8d010, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52004, + .enable_mask = bit(25), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_1_slv_q2a_axi_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_2_aux_clk = { + .halt_reg = 0x9d020, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52014, + .enable_mask = bit(14), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_2_aux_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_pcie_2_aux_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_2_cfg_ahb_clk = { + .halt_reg = 0x9d01c, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x9d01c, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x52014, + .enable_mask = bit(13), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_2_cfg_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_2_clkref_clk = { + .halt_reg = 0x8c014, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x8c014, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_2_clkref_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_2_mstr_axi_clk = { + .halt_reg = 0x9d018, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52014, + .enable_mask = bit(12), + .hw.init = &(struct clk_init_data){ + .name = 
"gcc_pcie_2_mstr_axi_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_2_pipe_clk = { + .halt_reg = 0x9d024, + .halt_check = branch_halt_skip, + .clkr = { + .enable_reg = 0x52014, + .enable_mask = bit(15), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_2_pipe_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_2_slv_axi_clk = { + .halt_reg = 0x9d014, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x9d014, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x52014, + .enable_mask = bit(11), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_2_slv_axi_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_2_slv_q2a_axi_clk = { + .halt_reg = 0x9d010, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52014, + .enable_mask = bit(10), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_2_slv_q2a_axi_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_3_aux_clk = { + .halt_reg = 0xa3020, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52014, + .enable_mask = bit(20), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_3_aux_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_pcie_3_aux_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_3_cfg_ahb_clk = { + .halt_reg = 0xa301c, + .halt_check = branch_halt_voted, + .hwcg_reg = 0xa301c, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x52014, + .enable_mask = bit(19), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_3_cfg_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_3_clkref_clk = { + .halt_reg = 0x8c018, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x8c018, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_3_clkref_clk", + .ops = &clk_branch2_ops, + 
}, + }, +}; + +static struct clk_branch gcc_pcie_3_mstr_axi_clk = { + .halt_reg = 0xa3018, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52014, + .enable_mask = bit(18), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_3_mstr_axi_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_3_pipe_clk = { + .halt_reg = 0xa3024, + .halt_check = branch_halt_skip, + .clkr = { + .enable_reg = 0x52014, + .enable_mask = bit(21), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_3_pipe_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_3_slv_axi_clk = { + .halt_reg = 0xa3014, + .halt_check = branch_halt_voted, + .hwcg_reg = 0xa3014, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x52014, + .enable_mask = bit(17), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_3_slv_axi_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_3_slv_q2a_axi_clk = { + .halt_reg = 0xa3010, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52014, + .enable_mask = bit(16), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_3_slv_q2a_axi_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pcie_phy_aux_clk = { + .halt_reg = 0x6f004, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x6f004, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pcie_phy_aux_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_pcie_0_aux_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pdm2_clk = { + .halt_reg = 0x3300c, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x3300c, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pdm2_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_pdm2_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = 
&clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pdm_ahb_clk = { + .halt_reg = 0x33004, + .halt_check = branch_halt, + .hwcg_reg = 0x33004, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x33004, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pdm_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_pdm_xo4_clk = { + .halt_reg = 0x33008, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x33008, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_pdm_xo4_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_prng_ahb_clk = { + .halt_reg = 0x34004, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52004, + .enable_mask = bit(13), + .hw.init = &(struct clk_init_data){ + .name = "gcc_prng_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qmip_camera_nrt_ahb_clk = { + .halt_reg = 0xb018, + .halt_check = branch_halt, + .hwcg_reg = 0xb018, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0xb018, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qmip_camera_nrt_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qmip_camera_rt_ahb_clk = { + .halt_reg = 0xb01c, + .halt_check = branch_halt, + .hwcg_reg = 0xb01c, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0xb01c, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qmip_camera_rt_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qmip_disp_ahb_clk = { + .halt_reg = 0xb020, + .halt_check = branch_halt, + .hwcg_reg = 0xb020, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0xb020, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qmip_disp_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qmip_video_cvp_ahb_clk = { + .halt_reg = 0xb010, + .halt_check = branch_halt, + .hwcg_reg = 
0xb010, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0xb010, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qmip_video_cvp_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qmip_video_vcodec_ahb_clk = { + .halt_reg = 0xb014, + .halt_check = branch_halt, + .hwcg_reg = 0xb014, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0xb014, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qmip_video_vcodec_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qspi_1_cnoc_periph_ahb_clk = { + .halt_reg = 0x4a004, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x4a004, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qspi_1_cnoc_periph_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qspi_1_core_clk = { + .halt_reg = 0x4a008, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x4a008, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qspi_1_core_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_qspi_1_core_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qspi_cnoc_periph_ahb_clk = { + .halt_reg = 0x4b000, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x4b000, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qspi_cnoc_periph_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qspi_core_clk = { + .halt_reg = 0x4b004, + .halt_check = branch_halt, + .clkr = { + .enable_reg = 0x4b004, + .enable_mask = bit(0), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qspi_core_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_qspi_core_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch 
gcc_qupv3_wrap0_s0_clk = { + .halt_reg = 0x17144, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x5200c, + .enable_mask = bit(10), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap0_s0_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_qupv3_wrap0_s0_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap0_s1_clk = { + .halt_reg = 0x17274, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x5200c, + .enable_mask = bit(11), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap0_s1_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_qupv3_wrap0_s1_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap0_s2_clk = { + .halt_reg = 0x173a4, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x5200c, + .enable_mask = bit(12), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap0_s2_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_qupv3_wrap0_s2_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap0_s3_clk = { + .halt_reg = 0x174d4, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x5200c, + .enable_mask = bit(13), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap0_s3_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_qupv3_wrap0_s3_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap0_s4_clk = { + .halt_reg = 0x17604, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x5200c, + .enable_mask = bit(14), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap0_s4_clk", + .parent_hws = (const struct clk_hw *[]){ + 
&gcc_qupv3_wrap0_s4_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap0_s5_clk = { + .halt_reg = 0x17734, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x5200c, + .enable_mask = bit(15), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap0_s5_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_qupv3_wrap0_s5_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap0_s6_clk = { + .halt_reg = 0x17864, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x5200c, + .enable_mask = bit(16), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap0_s6_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_qupv3_wrap0_s6_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap0_s7_clk = { + .halt_reg = 0x17994, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x5200c, + .enable_mask = bit(17), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap0_s7_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_qupv3_wrap0_s7_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap1_s0_clk = { + .halt_reg = 0x18144, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x5200c, + .enable_mask = bit(22), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap1_s0_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_qupv3_wrap1_s0_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap1_s1_clk = { + .halt_reg = 0x18274, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x5200c, 
+ .enable_mask = bit(23), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap1_s1_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_qupv3_wrap1_s1_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap1_s2_clk = { + .halt_reg = 0x183a4, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x5200c, + .enable_mask = bit(24), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap1_s2_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_qupv3_wrap1_s2_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap1_s3_clk = { + .halt_reg = 0x184d4, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x5200c, + .enable_mask = bit(25), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap1_s3_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_qupv3_wrap1_s3_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap1_s4_clk = { + .halt_reg = 0x18604, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x5200c, + .enable_mask = bit(26), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap1_s4_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_qupv3_wrap1_s4_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap1_s5_clk = { + .halt_reg = 0x18734, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x5200c, + .enable_mask = bit(27), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap1_s5_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_qupv3_wrap1_s5_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + 
+static struct clk_branch gcc_qupv3_wrap2_s0_clk = { + .halt_reg = 0x1e144, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52014, + .enable_mask = bit(4), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap2_s0_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_qupv3_wrap2_s0_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap2_s1_clk = { + .halt_reg = 0x1e274, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52014, + .enable_mask = bit(5), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap2_s1_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_qupv3_wrap2_s1_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap2_s2_clk = { + .halt_reg = 0x1e3a4, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52014, + .enable_mask = bit(6), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap2_s2_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_qupv3_wrap2_s2_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap2_s3_clk = { + .halt_reg = 0x1e4d4, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52014, + .enable_mask = bit(7), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap2_s3_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_qupv3_wrap2_s3_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap2_s4_clk = { + .halt_reg = 0x1e604, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52014, + .enable_mask = bit(8), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap2_s4_clk", + .parent_hws = (const struct clk_hw 
*[]){ + &gcc_qupv3_wrap2_s4_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap2_s5_clk = { + .halt_reg = 0x1e734, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52014, + .enable_mask = bit(9), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap2_s5_clk", + .parent_hws = (const struct clk_hw *[]){ + &gcc_qupv3_wrap2_s5_clk_src.clkr.hw + }, + .num_parents = 1, + .flags = clk_set_rate_parent, + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap_0_m_ahb_clk = { + .halt_reg = 0x17004, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x5200c, + .enable_mask = bit(6), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap_0_m_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap_0_s_ahb_clk = { + .halt_reg = 0x17008, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x17008, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x5200c, + .enable_mask = bit(7), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap_0_s_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap_1_m_ahb_clk = { + .halt_reg = 0x18004, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x5200c, + .enable_mask = bit(20), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap_1_m_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap_1_s_ahb_clk = { + .halt_reg = 0x18008, + .halt_check = branch_halt_voted, + .hwcg_reg = 0x18008, + .hwcg_bit = 1, + .clkr = { + .enable_reg = 0x5200c, + .enable_mask = bit(21), + .hw.init = &(struct clk_init_data){ + .name = "gcc_qupv3_wrap_1_s_ahb_clk", + .ops = &clk_branch2_ops, + }, + }, +}; + +static struct clk_branch gcc_qupv3_wrap_2_m_ahb_clk = { + .halt_reg = 0x1e004, + .halt_check = branch_halt_voted, + .clkr = { + .enable_reg = 0x52014, + 
.enable_mask = BIT(2),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap_2_m_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_qupv3_wrap_2_s_ahb_clk = {
+	.halt_reg = 0x1e008,
+	.halt_check = BRANCH_HALT_VOTED,
+	.hwcg_reg = 0x1e008,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x52014,
+		.enable_mask = BIT(1),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_qupv3_wrap_2_s_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_sdcc2_ahb_clk = {
+	.halt_reg = 0x14008,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x14008,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_sdcc2_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_sdcc2_apps_clk = {
+	.halt_reg = 0x14004,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x14004,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_sdcc2_apps_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_sdcc2_apps_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_sdcc4_ahb_clk = {
+	.halt_reg = 0x16008,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x16008,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_sdcc4_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_sdcc4_apps_clk = {
+	.halt_reg = 0x16004,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x16004,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_sdcc4_apps_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_sdcc4_apps_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+/* For CPUSS functionality the SYS NOC clock needs to be left enabled */
+static struct clk_branch gcc_sys_noc_cpuss_ahb_clk = {
+	.halt_reg = 0x4819c,
+	.halt_check = BRANCH_HALT_VOTED,
+	.clkr = {
+		.enable_reg = 0x52004,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_sys_noc_cpuss_ahb_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_cpuss_ahb_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_IS_CRITICAL | CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_tsif_ahb_clk = {
+	.halt_reg = 0x36004,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x36004,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_tsif_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_tsif_inactivity_timers_clk = {
+	.halt_reg = 0x3600c,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x3600c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_tsif_inactivity_timers_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_tsif_ref_clk = {
+	.halt_reg = 0x36008,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x36008,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_tsif_ref_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_tsif_ref_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_card_2_ahb_clk = {
+	.halt_reg = 0xa2014,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0xa2014,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0xa2014,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_card_2_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_card_2_axi_clk = {
+	.halt_reg = 0xa2010,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0xa2010,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0xa2010,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_card_2_axi_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_ufs_card_2_axi_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_card_2_ice_core_clk = {
+	.halt_reg = 0xa205c,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0xa205c,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0xa205c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_card_2_ice_core_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_ufs_card_2_ice_core_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_card_2_phy_aux_clk = {
+	.halt_reg = 0xa2090,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0xa2090,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0xa2090,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_card_2_phy_aux_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_ufs_card_2_phy_aux_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_card_2_rx_symbol_0_clk = {
+	.halt_reg = 0xa201c,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xa201c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_card_2_rx_symbol_0_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_card_2_rx_symbol_1_clk = {
+	.halt_reg = 0xa20ac,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xa20ac,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_card_2_rx_symbol_1_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_card_2_tx_symbol_0_clk = {
+	.halt_reg = 0xa2018,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xa2018,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_card_2_tx_symbol_0_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_card_2_unipro_core_clk = {
+	.halt_reg = 0xa2058,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0xa2058,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0xa2058,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_card_2_unipro_core_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_ufs_card_2_unipro_core_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_card_ahb_clk = {
+	.halt_reg = 0x75014,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0x75014,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x75014,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_card_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_card_axi_clk = {
+	.halt_reg = 0x75010,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0x75010,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x75010,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_card_axi_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_ufs_card_axi_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_card_axi_hw_ctl_clk = {
+	.halt_reg = 0x75010,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0x75010,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x75010,
+		.enable_mask = BIT(1),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_card_axi_hw_ctl_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_ufs_card_axi_clk.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch_simple_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_card_ice_core_clk = {
+	.halt_reg = 0x7505c,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0x7505c,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x7505c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_card_ice_core_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_ufs_card_ice_core_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_card_ice_core_hw_ctl_clk = {
+	.halt_reg = 0x7505c,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0x7505c,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x7505c,
+		.enable_mask = BIT(1),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_card_ice_core_hw_ctl_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_ufs_card_ice_core_clk.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch_simple_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_card_phy_aux_clk = {
+	.halt_reg = 0x75090,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0x75090,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x75090,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_card_phy_aux_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_ufs_card_phy_aux_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_card_phy_aux_hw_ctl_clk = {
+	.halt_reg = 0x75090,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0x75090,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x75090,
+		.enable_mask = BIT(1),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_card_phy_aux_hw_ctl_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_ufs_card_phy_aux_clk.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch_simple_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_card_rx_symbol_0_clk = {
+	.halt_reg = 0x7501c,
+	.halt_check = BRANCH_HALT_DELAY,
+	.clkr = {
+		.enable_reg = 0x7501c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_card_rx_symbol_0_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_card_rx_symbol_1_clk = {
+	.halt_reg = 0x750ac,
+	.halt_check = BRANCH_HALT_DELAY,
+	.clkr = {
+		.enable_reg = 0x750ac,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_card_rx_symbol_1_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_card_tx_symbol_0_clk = {
+	.halt_reg = 0x75018,
+	.halt_check = BRANCH_HALT_DELAY,
+	.clkr = {
+		.enable_reg = 0x75018,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_card_tx_symbol_0_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_card_unipro_core_clk = {
+	.halt_reg = 0x75058,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0x75058,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x75058,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_card_unipro_core_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_ufs_card_unipro_core_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_card_unipro_core_hw_ctl_clk = {
+	.halt_reg = 0x75058,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0x75058,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x75058,
+		.enable_mask = BIT(1),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_card_unipro_core_hw_ctl_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_ufs_card_unipro_core_clk.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch_simple_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_phy_ahb_clk = {
+	.halt_reg = 0x77014,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0x77014,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x77014,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_phy_ahb_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_phy_axi_clk = {
+	.halt_reg = 0x77010,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0x77010,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x77010,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_phy_axi_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_ufs_phy_axi_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_phy_axi_hw_ctl_clk = {
+	.halt_reg = 0x77010,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0x77010,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x77010,
+		.enable_mask = BIT(1),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_phy_axi_hw_ctl_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_ufs_phy_axi_clk.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch_simple_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_phy_ice_core_clk = {
+	.halt_reg = 0x7705c,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0x7705c,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x7705c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_phy_ice_core_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_ufs_phy_ice_core_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_phy_ice_core_hw_ctl_clk = {
+	.halt_reg = 0x7705c,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0x7705c,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x7705c,
+		.enable_mask = BIT(1),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_phy_ice_core_hw_ctl_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_ufs_phy_ice_core_clk.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch_simple_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_phy_phy_aux_clk = {
+	.halt_reg = 0x77090,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0x77090,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x77090,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_phy_phy_aux_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_ufs_phy_phy_aux_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_phy_phy_aux_hw_ctl_clk = {
+	.halt_reg = 0x77090,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0x77090,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x77090,
+		.enable_mask = BIT(1),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_phy_phy_aux_hw_ctl_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_ufs_phy_phy_aux_clk.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch_simple_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_phy_rx_symbol_0_clk = {
+	.halt_reg = 0x7701c,
+	.halt_check = BRANCH_HALT_SKIP,
+	.clkr = {
+		.enable_reg = 0x7701c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_phy_rx_symbol_0_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_phy_rx_symbol_1_clk = {
+	.halt_reg = 0x770ac,
+	.halt_check = BRANCH_HALT_SKIP,
+	.clkr = {
+		.enable_reg = 0x770ac,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_phy_rx_symbol_1_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_phy_tx_symbol_0_clk = {
+	.halt_reg = 0x77018,
+	.halt_check = BRANCH_HALT_SKIP,
+	.clkr = {
+		.enable_reg = 0x77018,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_phy_tx_symbol_0_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_phy_unipro_core_clk = {
+	.halt_reg = 0x77058,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0x77058,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x77058,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_phy_unipro_core_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_ufs_phy_unipro_core_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_ufs_phy_unipro_core_hw_ctl_clk = {
+	.halt_reg = 0x77058,
+	.halt_check = BRANCH_HALT,
+	.hwcg_reg = 0x77058,
+	.hwcg_bit = 1,
+	.clkr = {
+		.enable_reg = 0x77058,
+		.enable_mask = BIT(1),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_ufs_phy_unipro_core_hw_ctl_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_ufs_phy_unipro_core_clk.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch_simple_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb30_mp_master_clk = {
+	.halt_reg = 0xa6010,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xa6010,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb30_mp_master_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_usb30_mp_master_clk_src.clkr.hw },
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb30_mp_mock_utmi_clk = {
+	.halt_reg = 0xa6018,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xa6018,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb30_mp_mock_utmi_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_usb30_mp_mock_utmi_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb30_mp_sleep_clk = {
+	.halt_reg = 0xa6014,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xa6014,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb30_mp_sleep_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb30_prim_master_clk = {
+	.halt_reg = 0xf010,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xf010,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb30_prim_master_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_usb30_prim_master_clk_src.clkr.hw },
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb30_prim_mock_utmi_clk = {
+	.halt_reg = 0xf018,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xf018,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb30_prim_mock_utmi_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_usb30_prim_mock_utmi_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb30_prim_sleep_clk = {
+	.halt_reg = 0xf014,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xf014,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb30_prim_sleep_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb30_sec_master_clk = {
+	.halt_reg = 0x10010,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x10010,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb30_sec_master_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_usb30_sec_master_clk_src.clkr.hw },
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb30_sec_mock_utmi_clk = {
+	.halt_reg = 0x10018,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x10018,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb30_sec_mock_utmi_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_usb30_sec_mock_utmi_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb30_sec_sleep_clk = {
+	.halt_reg = 0x10014,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x10014,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb30_sec_sleep_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb3_mp_phy_aux_clk = {
+	.halt_reg = 0xa6050,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xa6050,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb3_mp_phy_aux_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_usb3_mp_phy_aux_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb3_mp_phy_com_aux_clk = {
+	.halt_reg = 0xa6054,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xa6054,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb3_mp_phy_com_aux_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_usb3_mp_phy_aux_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb3_mp_phy_pipe_0_clk = {
+	.halt_reg = 0xa6058,
+	.halt_check = BRANCH_HALT_SKIP,
+	.clkr = {
+		.enable_reg = 0xa6058,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb3_mp_phy_pipe_0_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb3_mp_phy_pipe_1_clk = {
+	.halt_reg = 0xa605c,
+	.halt_check = BRANCH_HALT_SKIP,
+	.clkr = {
+		.enable_reg = 0xa605c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb3_mp_phy_pipe_1_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb3_prim_clkref_clk = {
+	.halt_reg = 0x8c008,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x8c008,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb3_prim_clkref_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb3_prim_phy_aux_clk = {
+	.halt_reg = 0xf050,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xf050,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb3_prim_phy_aux_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_usb3_prim_phy_aux_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb3_prim_phy_com_aux_clk = {
+	.halt_reg = 0xf054,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xf054,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb3_prim_phy_com_aux_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_usb3_prim_phy_aux_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb3_prim_phy_pipe_clk = {
+	.halt_reg = 0xf058,
+	.halt_check = BRANCH_HALT_SKIP,
+	.clkr = {
+		.enable_reg = 0xf058,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb3_prim_phy_pipe_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb3_sec_clkref_clk = {
+	.halt_reg = 0x8c028,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x8c028,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb3_sec_clkref_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb3_sec_phy_aux_clk = {
+	.halt_reg = 0x10050,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x10050,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb3_sec_phy_aux_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_usb3_sec_phy_aux_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb3_sec_phy_com_aux_clk = {
+	.halt_reg = 0x10054,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0x10054,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb3_sec_phy_com_aux_clk",
+			.parent_hws = (const struct clk_hw *[]){
+				&gcc_usb3_sec_phy_aux_clk_src.clkr.hw
+			},
+			.num_parents = 1,
+			.flags = CLK_SET_RATE_PARENT,
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_usb3_sec_phy_pipe_clk = {
+	.halt_reg = 0x10058,
+	.halt_check = BRANCH_HALT_SKIP,
+	.clkr = {
+		.enable_reg = 0x10058,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_usb3_sec_phy_pipe_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_video_axi0_clk = {
+	.halt_reg = 0xb024,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xb024,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_video_axi0_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_video_axi1_clk = {
+	.halt_reg = 0xb028,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xb028,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_video_axi1_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct clk_branch gcc_video_axic_clk = {
+	.halt_reg = 0xb02c,
+	.halt_check = BRANCH_HALT,
+	.clkr = {
+		.enable_reg = 0xb02c,
+		.enable_mask = BIT(0),
+		.hw.init = &(struct clk_init_data){
+			.name = "gcc_video_axic_clk",
+			.ops = &clk_branch2_ops,
+		},
+	},
+};
+
+static struct gdsc usb30_sec_gdsc = {
+	.gdscr = 0x10004,
+	.pd = {
+		.name = "usb30_sec_gdsc",
+	},
+	.pwrsts = PWRSTS_OFF_ON,
+	.flags = POLL_CFG_GDSCR,
+};
+
+static struct gdsc emac_gdsc = {
+	.gdscr = 0x6004,
+	.pd = {
+		.name = "emac_gdsc",
+	},
+	.pwrsts = PWRSTS_OFF_ON,
+	.flags = POLL_CFG_GDSCR,
+};
+
+static struct gdsc usb30_prim_gdsc = {
+	.gdscr = 0xf004,
+	.pd = {
+		.name = "usb30_prim_gdsc",
+	},
+	.pwrsts = PWRSTS_OFF_ON,
+	.flags = POLL_CFG_GDSCR,
+};
+
+static struct gdsc pcie_0_gdsc = {
+	.gdscr = 0x6b004,
+	.pd = {
+		.name = "pcie_0_gdsc",
+	},
+	.pwrsts = PWRSTS_OFF_ON,
+	.flags = POLL_CFG_GDSCR,
+};
+
+static struct gdsc ufs_card_gdsc = {
+	.gdscr = 0x75004,
+	.pd = {
+		.name = "ufs_card_gdsc",
+	},
+	.pwrsts = PWRSTS_OFF_ON,
+	.flags = POLL_CFG_GDSCR,
+};
+
+static struct gdsc ufs_phy_gdsc = {
+	.gdscr = 0x77004,
+	.pd = {
+		.name = "ufs_phy_gdsc",
+	},
+	.pwrsts = PWRSTS_OFF_ON,
+	.flags = POLL_CFG_GDSCR,
+};
+
+static struct gdsc pcie_1_gdsc = {
+	.gdscr = 0x8d004,
+	.pd = {
+		.name = "pcie_1_gdsc",
+	},
+	.pwrsts = PWRSTS_OFF_ON,
+	.flags = POLL_CFG_GDSCR,
+};
+
+static struct gdsc pcie_2_gdsc = {
+	.gdscr = 0x9d004,
+	.pd = {
+		.name = "pcie_2_gdsc",
+	},
+	.pwrsts = PWRSTS_OFF_ON,
+	.flags = POLL_CFG_GDSCR,
+};
+
+static struct gdsc ufs_card_2_gdsc = {
+	.gdscr = 0xa2004,
+	.pd = {
+		.name = "ufs_card_2_gdsc",
+	},
+	.pwrsts = PWRSTS_OFF_ON,
+	.flags = POLL_CFG_GDSCR,
+};
+
+static struct gdsc pcie_3_gdsc = {
+	.gdscr = 0xa3004,
+	.pd = {
+		.name = "pcie_3_gdsc",
+	},
+	.pwrsts = PWRSTS_OFF_ON,
+	.flags = POLL_CFG_GDSCR,
+};
+
+static struct gdsc usb30_mp_gdsc = {
+	.gdscr = 0xa6004,
+	.pd = {
+		.name = "usb30_mp_gdsc",
+	},
+	.pwrsts = PWRSTS_OFF_ON,
+	.flags = POLL_CFG_GDSCR,
+};
+
+static struct clk_regmap *gcc_sc8180x_clocks[] = {
+	[GCC_AGGRE_NOC_PCIE_TBU_CLK] = &gcc_aggre_noc_pcie_tbu_clk.clkr,
+	[GCC_AGGRE_UFS_CARD_AXI_CLK] = &gcc_aggre_ufs_card_axi_clk.clkr,
+	[GCC_AGGRE_UFS_CARD_AXI_HW_CTL_CLK] = &gcc_aggre_ufs_card_axi_hw_ctl_clk.clkr,
+	[GCC_AGGRE_UFS_PHY_AXI_CLK] = &gcc_aggre_ufs_phy_axi_clk.clkr,
+	[GCC_AGGRE_UFS_PHY_AXI_HW_CTL_CLK] = &gcc_aggre_ufs_phy_axi_hw_ctl_clk.clkr,
+	[GCC_AGGRE_USB3_MP_AXI_CLK] = &gcc_aggre_usb3_mp_axi_clk.clkr,
+	[GCC_AGGRE_USB3_PRIM_AXI_CLK] = &gcc_aggre_usb3_prim_axi_clk.clkr,
+	[GCC_AGGRE_USB3_SEC_AXI_CLK] = &gcc_aggre_usb3_sec_axi_clk.clkr,
+	[GCC_BOOT_ROM_AHB_CLK] = &gcc_boot_rom_ahb_clk.clkr,
+	[GCC_CAMERA_HF_AXI_CLK] = &gcc_camera_hf_axi_clk.clkr,
+	[GCC_CAMERA_SF_AXI_CLK] = &gcc_camera_sf_axi_clk.clkr,
+	[GCC_CFG_NOC_USB3_MP_AXI_CLK] = &gcc_cfg_noc_usb3_mp_axi_clk.clkr,
+	[GCC_CFG_NOC_USB3_PRIM_AXI_CLK] = &gcc_cfg_noc_usb3_prim_axi_clk.clkr,
+	[GCC_CFG_NOC_USB3_SEC_AXI_CLK] = &gcc_cfg_noc_usb3_sec_axi_clk.clkr,
+	[GCC_CPUSS_AHB_CLK] = &gcc_cpuss_ahb_clk.clkr,
+	[GCC_CPUSS_AHB_CLK_SRC] = &gcc_cpuss_ahb_clk_src.clkr,
+	[GCC_CPUSS_RBCPR_CLK] = &gcc_cpuss_rbcpr_clk.clkr,
+	[GCC_DDRSS_GPU_AXI_CLK] = &gcc_ddrss_gpu_axi_clk.clkr,
+	[GCC_DISP_HF_AXI_CLK] = &gcc_disp_hf_axi_clk.clkr,
+	[GCC_DISP_SF_AXI_CLK] = &gcc_disp_sf_axi_clk.clkr,
+	[GCC_EMAC_AXI_CLK] = &gcc_emac_axi_clk.clkr,
+	[GCC_EMAC_PTP_CLK] = &gcc_emac_ptp_clk.clkr,
+	[GCC_EMAC_PTP_CLK_SRC] = &gcc_emac_ptp_clk_src.clkr,
+	[GCC_EMAC_RGMII_CLK] = &gcc_emac_rgmii_clk.clkr,
+	[GCC_EMAC_RGMII_CLK_SRC] = &gcc_emac_rgmii_clk_src.clkr,
+	[GCC_EMAC_SLV_AHB_CLK] = &gcc_emac_slv_ahb_clk.clkr,
+	[GCC_GP1_CLK] = &gcc_gp1_clk.clkr,
+	[GCC_GP1_CLK_SRC] = &gcc_gp1_clk_src.clkr,
+	[GCC_GP2_CLK] = &gcc_gp2_clk.clkr,
+	[GCC_GP2_CLK_SRC] = &gcc_gp2_clk_src.clkr,
+	[GCC_GP3_CLK] = &gcc_gp3_clk.clkr,
+	[GCC_GP3_CLK_SRC] = &gcc_gp3_clk_src.clkr,
+	[GCC_GP4_CLK] = &gcc_gp4_clk.clkr,
+	[GCC_GP4_CLK_SRC] = &gcc_gp4_clk_src.clkr,
+	[GCC_GP5_CLK] = &gcc_gp5_clk.clkr,
+	[GCC_GP5_CLK_SRC] = &gcc_gp5_clk_src.clkr,
+	[GCC_GPU_GPLL0_CLK_SRC] = &gcc_gpu_gpll0_clk_src.clkr,
+	[GCC_GPU_GPLL0_DIV_CLK_SRC] = &gcc_gpu_gpll0_div_clk_src.clkr,
+	[GCC_GPU_MEMNOC_GFX_CLK] = &gcc_gpu_memnoc_gfx_clk.clkr,
+	[GCC_GPU_SNOC_DVM_GFX_CLK] = &gcc_gpu_snoc_dvm_gfx_clk.clkr,
+	[GCC_NPU_AT_CLK] = &gcc_npu_at_clk.clkr,
+	[GCC_NPU_AXI_CLK] = &gcc_npu_axi_clk.clkr,
+	[GCC_NPU_AXI_CLK_SRC] = &gcc_npu_axi_clk_src.clkr,
+	[GCC_NPU_GPLL0_CLK_SRC] = &gcc_npu_gpll0_clk_src.clkr,
+	[GCC_NPU_GPLL0_DIV_CLK_SRC] = &gcc_npu_gpll0_div_clk_src.clkr,
+	[GCC_NPU_TRIG_CLK] = &gcc_npu_trig_clk.clkr,
+	[GCC_PCIE0_PHY_REFGEN_CLK] = &gcc_pcie0_phy_refgen_clk.clkr,
+	[GCC_PCIE1_PHY_REFGEN_CLK] = &gcc_pcie1_phy_refgen_clk.clkr,
+	[GCC_PCIE2_PHY_REFGEN_CLK] = &gcc_pcie2_phy_refgen_clk.clkr,
+	[GCC_PCIE3_PHY_REFGEN_CLK] = &gcc_pcie3_phy_refgen_clk.clkr,
+	[GCC_PCIE_0_AUX_CLK] = &gcc_pcie_0_aux_clk.clkr,
+	[GCC_PCIE_0_AUX_CLK_SRC] = &gcc_pcie_0_aux_clk_src.clkr,
+	[GCC_PCIE_0_CFG_AHB_CLK] = &gcc_pcie_0_cfg_ahb_clk.clkr,
+	[GCC_PCIE_0_CLKREF_CLK] = &gcc_pcie_0_clkref_clk.clkr,
+	[GCC_PCIE_0_MSTR_AXI_CLK] = &gcc_pcie_0_mstr_axi_clk.clkr,
+	[GCC_PCIE_0_PIPE_CLK] = &gcc_pcie_0_pipe_clk.clkr,
+	[GCC_PCIE_0_SLV_AXI_CLK] = &gcc_pcie_0_slv_axi_clk.clkr,
+	[GCC_PCIE_0_SLV_Q2A_AXI_CLK] = &gcc_pcie_0_slv_q2a_axi_clk.clkr,
+	[GCC_PCIE_1_AUX_CLK] = &gcc_pcie_1_aux_clk.clkr,
+	[GCC_PCIE_1_AUX_CLK_SRC] = &gcc_pcie_1_aux_clk_src.clkr,
+	[GCC_PCIE_1_CFG_AHB_CLK] = &gcc_pcie_1_cfg_ahb_clk.clkr,
+	[GCC_PCIE_1_CLKREF_CLK] = &gcc_pcie_1_clkref_clk.clkr,
+	[GCC_PCIE_1_MSTR_AXI_CLK] = &gcc_pcie_1_mstr_axi_clk.clkr,
+	[GCC_PCIE_1_PIPE_CLK] = &gcc_pcie_1_pipe_clk.clkr,
+	[GCC_PCIE_1_SLV_AXI_CLK] = &gcc_pcie_1_slv_axi_clk.clkr,
+	[GCC_PCIE_1_SLV_Q2A_AXI_CLK] = &gcc_pcie_1_slv_q2a_axi_clk.clkr,
+	[GCC_PCIE_2_AUX_CLK] = &gcc_pcie_2_aux_clk.clkr,
+	[GCC_PCIE_2_AUX_CLK_SRC] = &gcc_pcie_2_aux_clk_src.clkr,
+	[GCC_PCIE_2_CFG_AHB_CLK] = &gcc_pcie_2_cfg_ahb_clk.clkr,
+	[GCC_PCIE_2_CLKREF_CLK] = &gcc_pcie_2_clkref_clk.clkr,
+	[GCC_PCIE_2_MSTR_AXI_CLK] = &gcc_pcie_2_mstr_axi_clk.clkr,
+	[GCC_PCIE_2_PIPE_CLK] = &gcc_pcie_2_pipe_clk.clkr,
+	[GCC_PCIE_2_SLV_AXI_CLK] = &gcc_pcie_2_slv_axi_clk.clkr,
+	[GCC_PCIE_2_SLV_Q2A_AXI_CLK] = &gcc_pcie_2_slv_q2a_axi_clk.clkr,
+	[GCC_PCIE_3_AUX_CLK] = &gcc_pcie_3_aux_clk.clkr,
+	[GCC_PCIE_3_AUX_CLK_SRC] = &gcc_pcie_3_aux_clk_src.clkr,
+	[GCC_PCIE_3_CFG_AHB_CLK] = &gcc_pcie_3_cfg_ahb_clk.clkr,
+	[GCC_PCIE_3_CLKREF_CLK] = &gcc_pcie_3_clkref_clk.clkr,
+	[GCC_PCIE_3_MSTR_AXI_CLK] = &gcc_pcie_3_mstr_axi_clk.clkr,
+	[GCC_PCIE_3_PIPE_CLK] = &gcc_pcie_3_pipe_clk.clkr,
+	[GCC_PCIE_3_SLV_AXI_CLK] = &gcc_pcie_3_slv_axi_clk.clkr,
+	[GCC_PCIE_3_SLV_Q2A_AXI_CLK] = &gcc_pcie_3_slv_q2a_axi_clk.clkr,
+	[GCC_PCIE_PHY_AUX_CLK] = &gcc_pcie_phy_aux_clk.clkr,
+	[GCC_PCIE_PHY_REFGEN_CLK_SRC] = &gcc_pcie_phy_refgen_clk_src.clkr,
+	[GCC_PDM2_CLK] = &gcc_pdm2_clk.clkr,
+	[GCC_PDM2_CLK_SRC] = &gcc_pdm2_clk_src.clkr,
+	[GCC_PDM_AHB_CLK] = &gcc_pdm_ahb_clk.clkr,
+	[GCC_PDM_XO4_CLK] = &gcc_pdm_xo4_clk.clkr,
+	[GCC_PRNG_AHB_CLK] = &gcc_prng_ahb_clk.clkr,
+	[GCC_QMIP_CAMERA_NRT_AHB_CLK] = &gcc_qmip_camera_nrt_ahb_clk.clkr,
+	[GCC_QMIP_CAMERA_RT_AHB_CLK] = &gcc_qmip_camera_rt_ahb_clk.clkr,
+	[GCC_QMIP_DISP_AHB_CLK] = &gcc_qmip_disp_ahb_clk.clkr,
+	[GCC_QMIP_VIDEO_CVP_AHB_CLK] = &gcc_qmip_video_cvp_ahb_clk.clkr,
+	[GCC_QMIP_VIDEO_VCODEC_AHB_CLK] = &gcc_qmip_video_vcodec_ahb_clk.clkr,
+	[GCC_QSPI_1_CNOC_PERIPH_AHB_CLK] = &gcc_qspi_1_cnoc_periph_ahb_clk.clkr,
+	[GCC_QSPI_1_CORE_CLK] = &gcc_qspi_1_core_clk.clkr,
+	[GCC_QSPI_1_CORE_CLK_SRC] = &gcc_qspi_1_core_clk_src.clkr,
+	[GCC_QSPI_CNOC_PERIPH_AHB_CLK] = &gcc_qspi_cnoc_periph_ahb_clk.clkr,
+	[GCC_QSPI_CORE_CLK] = &gcc_qspi_core_clk.clkr,
+	[GCC_QSPI_CORE_CLK_SRC] = &gcc_qspi_core_clk_src.clkr,
+	[GCC_QUPV3_WRAP0_S0_CLK] = &gcc_qupv3_wrap0_s0_clk.clkr,
+	[GCC_QUPV3_WRAP0_S0_CLK_SRC] = &gcc_qupv3_wrap0_s0_clk_src.clkr,
+	[GCC_QUPV3_WRAP0_S1_CLK] = &gcc_qupv3_wrap0_s1_clk.clkr,
+	[GCC_QUPV3_WRAP0_S1_CLK_SRC] = &gcc_qupv3_wrap0_s1_clk_src.clkr,
+	[GCC_QUPV3_WRAP0_S2_CLK] = &gcc_qupv3_wrap0_s2_clk.clkr,
+	[GCC_QUPV3_WRAP0_S2_CLK_SRC] = &gcc_qupv3_wrap0_s2_clk_src.clkr,
+	[GCC_QUPV3_WRAP0_S3_CLK] = &gcc_qupv3_wrap0_s3_clk.clkr,
+	[GCC_QUPV3_WRAP0_S3_CLK_SRC] = &gcc_qupv3_wrap0_s3_clk_src.clkr,
+	[GCC_QUPV3_WRAP0_S4_CLK] = &gcc_qupv3_wrap0_s4_clk.clkr,
+	[GCC_QUPV3_WRAP0_S4_CLK_SRC] = &gcc_qupv3_wrap0_s4_clk_src.clkr,
+	[GCC_QUPV3_WRAP0_S5_CLK] = &gcc_qupv3_wrap0_s5_clk.clkr,
+	[GCC_QUPV3_WRAP0_S5_CLK_SRC] = &gcc_qupv3_wrap0_s5_clk_src.clkr,
+	[GCC_QUPV3_WRAP0_S6_CLK] = &gcc_qupv3_wrap0_s6_clk.clkr,
+	[GCC_QUPV3_WRAP0_S6_CLK_SRC] = &gcc_qupv3_wrap0_s6_clk_src.clkr,
+	[GCC_QUPV3_WRAP0_S7_CLK] = &gcc_qupv3_wrap0_s7_clk.clkr,
+	[GCC_QUPV3_WRAP0_S7_CLK_SRC] = &gcc_qupv3_wrap0_s7_clk_src.clkr,
+	[GCC_QUPV3_WRAP1_S0_CLK] = &gcc_qupv3_wrap1_s0_clk.clkr,
+	[GCC_QUPV3_WRAP1_S0_CLK_SRC] = &gcc_qupv3_wrap1_s0_clk_src.clkr,
+	[GCC_QUPV3_WRAP1_S1_CLK] = &gcc_qupv3_wrap1_s1_clk.clkr,
+	[GCC_QUPV3_WRAP1_S1_CLK_SRC] = &gcc_qupv3_wrap1_s1_clk_src.clkr,
+	[GCC_QUPV3_WRAP1_S2_CLK] = &gcc_qupv3_wrap1_s2_clk.clkr,
+	[GCC_QUPV3_WRAP1_S2_CLK_SRC] = &gcc_qupv3_wrap1_s2_clk_src.clkr,
+	[GCC_QUPV3_WRAP1_S3_CLK] = &gcc_qupv3_wrap1_s3_clk.clkr,
+	[GCC_QUPV3_WRAP1_S3_CLK_SRC] = &gcc_qupv3_wrap1_s3_clk_src.clkr,
+	[GCC_QUPV3_WRAP1_S4_CLK] = &gcc_qupv3_wrap1_s4_clk.clkr,
+	[GCC_QUPV3_WRAP1_S4_CLK_SRC] = &gcc_qupv3_wrap1_s4_clk_src.clkr,
+	[GCC_QUPV3_WRAP1_S5_CLK] = &gcc_qupv3_wrap1_s5_clk.clkr,
+	[GCC_QUPV3_WRAP1_S5_CLK_SRC] = &gcc_qupv3_wrap1_s5_clk_src.clkr,
+	[GCC_QUPV3_WRAP2_S0_CLK] = &gcc_qupv3_wrap2_s0_clk.clkr,
+	[GCC_QUPV3_WRAP2_S0_CLK_SRC] = &gcc_qupv3_wrap2_s0_clk_src.clkr,
+	[GCC_QUPV3_WRAP2_S1_CLK] = &gcc_qupv3_wrap2_s1_clk.clkr,
+	[GCC_QUPV3_WRAP2_S1_CLK_SRC] = &gcc_qupv3_wrap2_s1_clk_src.clkr,
+	[GCC_QUPV3_WRAP2_S2_CLK] = &gcc_qupv3_wrap2_s2_clk.clkr,
+	[GCC_QUPV3_WRAP2_S2_CLK_SRC] = &gcc_qupv3_wrap2_s2_clk_src.clkr,
+	[GCC_QUPV3_WRAP2_S3_CLK] = &gcc_qupv3_wrap2_s3_clk.clkr,
+	[GCC_QUPV3_WRAP2_S3_CLK_SRC] = &gcc_qupv3_wrap2_s3_clk_src.clkr,
+	[GCC_QUPV3_WRAP2_S4_CLK] = &gcc_qupv3_wrap2_s4_clk.clkr,
+	[GCC_QUPV3_WRAP2_S4_CLK_SRC] = &gcc_qupv3_wrap2_s4_clk_src.clkr,
+	[GCC_QUPV3_WRAP2_S5_CLK] = &gcc_qupv3_wrap2_s5_clk.clkr,
+	[GCC_QUPV3_WRAP2_S5_CLK_SRC] = &gcc_qupv3_wrap2_s5_clk_src.clkr,
+	[GCC_QUPV3_WRAP_0_M_AHB_CLK] = &gcc_qupv3_wrap_0_m_ahb_clk.clkr,
+	[GCC_QUPV3_WRAP_0_S_AHB_CLK] = &gcc_qupv3_wrap_0_s_ahb_clk.clkr,
+	[GCC_QUPV3_WRAP_1_M_AHB_CLK] = &gcc_qupv3_wrap_1_m_ahb_clk.clkr,
+	[GCC_QUPV3_WRAP_1_S_AHB_CLK] = &gcc_qupv3_wrap_1_s_ahb_clk.clkr,
+	[GCC_QUPV3_WRAP_2_M_AHB_CLK] = &gcc_qupv3_wrap_2_m_ahb_clk.clkr,
+	[GCC_QUPV3_WRAP_2_S_AHB_CLK] = &gcc_qupv3_wrap_2_s_ahb_clk.clkr,
+	[GCC_SDCC2_AHB_CLK] = &gcc_sdcc2_ahb_clk.clkr,
+	[GCC_SDCC2_APPS_CLK] = &gcc_sdcc2_apps_clk.clkr,
+	[GCC_SDCC2_APPS_CLK_SRC] = &gcc_sdcc2_apps_clk_src.clkr,
+	[GCC_SDCC4_AHB_CLK] = &gcc_sdcc4_ahb_clk.clkr,
+	[GCC_SDCC4_APPS_CLK] = &gcc_sdcc4_apps_clk.clkr,
+	[GCC_SDCC4_APPS_CLK_SRC] = &gcc_sdcc4_apps_clk_src.clkr,
+	[GCC_SYS_NOC_CPUSS_AHB_CLK] = &gcc_sys_noc_cpuss_ahb_clk.clkr,
+	[GCC_TSIF_AHB_CLK] = &gcc_tsif_ahb_clk.clkr,
+	[GCC_TSIF_INACTIVITY_TIMERS_CLK] = &gcc_tsif_inactivity_timers_clk.clkr,
+	[GCC_TSIF_REF_CLK] = &gcc_tsif_ref_clk.clkr,
+	[GCC_TSIF_REF_CLK_SRC] = &gcc_tsif_ref_clk_src.clkr,
+	[GCC_UFS_CARD_2_AHB_CLK] = &gcc_ufs_card_2_ahb_clk.clkr,
+	[GCC_UFS_CARD_2_AXI_CLK] = &gcc_ufs_card_2_axi_clk.clkr,
+	[GCC_UFS_CARD_2_AXI_CLK_SRC] = &gcc_ufs_card_2_axi_clk_src.clkr,
+	[GCC_UFS_CARD_2_ICE_CORE_CLK] = &gcc_ufs_card_2_ice_core_clk.clkr,
+	[GCC_UFS_CARD_2_ICE_CORE_CLK_SRC] = &gcc_ufs_card_2_ice_core_clk_src.clkr,
+	[GCC_UFS_CARD_2_PHY_AUX_CLK] = &gcc_ufs_card_2_phy_aux_clk.clkr,
+	[GCC_UFS_CARD_2_PHY_AUX_CLK_SRC] = &gcc_ufs_card_2_phy_aux_clk_src.clkr,
+	[GCC_UFS_CARD_2_RX_SYMBOL_0_CLK] = &gcc_ufs_card_2_rx_symbol_0_clk.clkr,
+	[GCC_UFS_CARD_2_RX_SYMBOL_1_CLK] = &gcc_ufs_card_2_rx_symbol_1_clk.clkr,
+	[GCC_UFS_CARD_2_TX_SYMBOL_0_CLK] = &gcc_ufs_card_2_tx_symbol_0_clk.clkr,
+	[GCC_UFS_CARD_2_UNIPRO_CORE_CLK] = &gcc_ufs_card_2_unipro_core_clk.clkr,
+	[GCC_UFS_CARD_2_UNIPRO_CORE_CLK_SRC] = &gcc_ufs_card_2_unipro_core_clk_src.clkr,
+	[GCC_UFS_CARD_AHB_CLK] = &gcc_ufs_card_ahb_clk.clkr,
+	[GCC_UFS_CARD_AXI_CLK] = &gcc_ufs_card_axi_clk.clkr,
+	[GCC_UFS_CARD_AXI_CLK_SRC] = &gcc_ufs_card_axi_clk_src.clkr,
+	[GCC_UFS_CARD_AXI_HW_CTL_CLK] = &gcc_ufs_card_axi_hw_ctl_clk.clkr,
+	[GCC_UFS_CARD_ICE_CORE_CLK] = &gcc_ufs_card_ice_core_clk.clkr,
+	[GCC_UFS_CARD_ICE_CORE_CLK_SRC] = &gcc_ufs_card_ice_core_clk_src.clkr,
+	[GCC_UFS_CARD_ICE_CORE_HW_CTL_CLK] = &gcc_ufs_card_ice_core_hw_ctl_clk.clkr,
+	[GCC_UFS_CARD_PHY_AUX_CLK] = &gcc_ufs_card_phy_aux_clk.clkr,
+	[GCC_UFS_CARD_PHY_AUX_CLK_SRC] = &gcc_ufs_card_phy_aux_clk_src.clkr,
+	[GCC_UFS_CARD_PHY_AUX_HW_CTL_CLK] = &gcc_ufs_card_phy_aux_hw_ctl_clk.clkr,
+	[GCC_UFS_CARD_RX_SYMBOL_0_CLK] = &gcc_ufs_card_rx_symbol_0_clk.clkr,
+	[GCC_UFS_CARD_RX_SYMBOL_1_CLK] = &gcc_ufs_card_rx_symbol_1_clk.clkr,
+	[GCC_UFS_CARD_TX_SYMBOL_0_CLK] = &gcc_ufs_card_tx_symbol_0_clk.clkr,
+	[GCC_UFS_CARD_UNIPRO_CORE_CLK] = &gcc_ufs_card_unipro_core_clk.clkr,
+	[GCC_UFS_CARD_UNIPRO_CORE_CLK_SRC] = &gcc_ufs_card_unipro_core_clk_src.clkr,
+	[GCC_UFS_CARD_UNIPRO_CORE_HW_CTL_CLK] = &gcc_ufs_card_unipro_core_hw_ctl_clk.clkr,
+	[GCC_UFS_PHY_AHB_CLK] = &gcc_ufs_phy_ahb_clk.clkr,
+	[GCC_UFS_PHY_AXI_CLK] = &gcc_ufs_phy_axi_clk.clkr,
+	[GCC_UFS_PHY_AXI_CLK_SRC] = &gcc_ufs_phy_axi_clk_src.clkr,
+	[GCC_UFS_PHY_AXI_HW_CTL_CLK] = &gcc_ufs_phy_axi_hw_ctl_clk.clkr,
+	[GCC_UFS_PHY_ICE_CORE_CLK] = &gcc_ufs_phy_ice_core_clk.clkr,
+	[GCC_UFS_PHY_ICE_CORE_CLK_SRC] = &gcc_ufs_phy_ice_core_clk_src.clkr,
+	[GCC_UFS_PHY_ICE_CORE_HW_CTL_CLK] = &gcc_ufs_phy_ice_core_hw_ctl_clk.clkr,
+	[GCC_UFS_PHY_PHY_AUX_CLK] = &gcc_ufs_phy_phy_aux_clk.clkr,
+	[GCC_UFS_PHY_PHY_AUX_CLK_SRC] = &gcc_ufs_phy_phy_aux_clk_src.clkr,
+	[GCC_UFS_PHY_PHY_AUX_HW_CTL_CLK] = &gcc_ufs_phy_phy_aux_hw_ctl_clk.clkr,
+	[GCC_UFS_PHY_RX_SYMBOL_0_CLK] = &gcc_ufs_phy_rx_symbol_0_clk.clkr,
+	[GCC_UFS_PHY_RX_SYMBOL_1_CLK] = &gcc_ufs_phy_rx_symbol_1_clk.clkr,
+	[GCC_UFS_PHY_TX_SYMBOL_0_CLK] = &gcc_ufs_phy_tx_symbol_0_clk.clkr,
+	[GCC_UFS_PHY_UNIPRO_CORE_CLK] = &gcc_ufs_phy_unipro_core_clk.clkr,
+	[GCC_UFS_PHY_UNIPRO_CORE_CLK_SRC] = &gcc_ufs_phy_unipro_core_clk_src.clkr,
+	[GCC_UFS_PHY_UNIPRO_CORE_HW_CTL_CLK] = &gcc_ufs_phy_unipro_core_hw_ctl_clk.clkr,
+	[GCC_USB30_MP_MASTER_CLK] = &gcc_usb30_mp_master_clk.clkr,
+	[GCC_USB30_MP_MASTER_CLK_SRC] = &gcc_usb30_mp_master_clk_src.clkr,
+	[GCC_USB30_MP_MOCK_UTMI_CLK] = &gcc_usb30_mp_mock_utmi_clk.clkr,
+	[GCC_USB30_MP_MOCK_UTMI_CLK_SRC] = &gcc_usb30_mp_mock_utmi_clk_src.clkr,
+	[GCC_USB30_MP_SLEEP_CLK] = &gcc_usb30_mp_sleep_clk.clkr,
+	[GCC_USB30_PRIM_MASTER_CLK] = &gcc_usb30_prim_master_clk.clkr,
+	[GCC_USB30_PRIM_MASTER_CLK_SRC] = &gcc_usb30_prim_master_clk_src.clkr,
+	[GCC_USB30_PRIM_MOCK_UTMI_CLK] = &gcc_usb30_prim_mock_utmi_clk.clkr,
+	[GCC_USB30_PRIM_MOCK_UTMI_CLK_SRC] = &gcc_usb30_prim_mock_utmi_clk_src.clkr,
+	[GCC_USB30_PRIM_SLEEP_CLK] = &gcc_usb30_prim_sleep_clk.clkr,
+	[GCC_USB30_SEC_MASTER_CLK] = &gcc_usb30_sec_master_clk.clkr,
+	[GCC_USB30_SEC_MASTER_CLK_SRC] = &gcc_usb30_sec_master_clk_src.clkr,
+	[GCC_USB30_SEC_MOCK_UTMI_CLK] = &gcc_usb30_sec_mock_utmi_clk.clkr,
+	[GCC_USB30_SEC_MOCK_UTMI_CLK_SRC] = &gcc_usb30_sec_mock_utmi_clk_src.clkr,
+	[GCC_USB30_SEC_SLEEP_CLK] = &gcc_usb30_sec_sleep_clk.clkr,
+	[GCC_USB3_MP_PHY_AUX_CLK] = &gcc_usb3_mp_phy_aux_clk.clkr,
+	[GCC_USB3_MP_PHY_AUX_CLK_SRC] = &gcc_usb3_mp_phy_aux_clk_src.clkr,
+	[GCC_USB3_MP_PHY_COM_AUX_CLK] = &gcc_usb3_mp_phy_com_aux_clk.clkr,
+	[GCC_USB3_MP_PHY_PIPE_0_CLK] = &gcc_usb3_mp_phy_pipe_0_clk.clkr,
+	[GCC_USB3_MP_PHY_PIPE_1_CLK] = &gcc_usb3_mp_phy_pipe_1_clk.clkr,
+	[GCC_USB3_PRIM_CLKREF_CLK] = &gcc_usb3_prim_clkref_clk.clkr,
+	[GCC_USB3_PRIM_PHY_AUX_CLK] = &gcc_usb3_prim_phy_aux_clk.clkr,
+	[GCC_USB3_PRIM_PHY_AUX_CLK_SRC] = &gcc_usb3_prim_phy_aux_clk_src.clkr,
+	[GCC_USB3_PRIM_PHY_COM_AUX_CLK] = &gcc_usb3_prim_phy_com_aux_clk.clkr,
+	[GCC_USB3_PRIM_PHY_PIPE_CLK] = &gcc_usb3_prim_phy_pipe_clk.clkr,
+	[GCC_USB3_SEC_CLKREF_CLK] = &gcc_usb3_sec_clkref_clk.clkr,
+	[GCC_USB3_SEC_PHY_AUX_CLK] = &gcc_usb3_sec_phy_aux_clk.clkr,
+	[GCC_USB3_SEC_PHY_AUX_CLK_SRC] = &gcc_usb3_sec_phy_aux_clk_src.clkr,
+	[GCC_USB3_SEC_PHY_COM_AUX_CLK] = &gcc_usb3_sec_phy_com_aux_clk.clkr,
+	[GCC_USB3_SEC_PHY_PIPE_CLK] = &gcc_usb3_sec_phy_pipe_clk.clkr,
+	[GCC_VIDEO_AXI0_CLK] = &gcc_video_axi0_clk.clkr,
+	[GCC_VIDEO_AXI1_CLK] = &gcc_video_axi1_clk.clkr,
+	[GCC_VIDEO_AXIC_CLK] = &gcc_video_axic_clk.clkr,
+	[GPLL0] = &gpll0.clkr,
+	[GPLL0_OUT_EVEN] = &gpll0_out_even.clkr,
+	[GPLL1] = &gpll1.clkr,
+	[GPLL4] = &gpll4.clkr,
+	[GPLL7] = &gpll7.clkr,
+};
+
+static const struct qcom_reset_map gcc_sc8180x_resets[] = {
+	[GCC_EMAC_BCR] = { 0x6000 },
+	[GCC_GPU_BCR] = { 0x71000 },
+	[GCC_MMSS_BCR] = { 0xb000 },
+	[GCC_NPU_BCR] = { 0x4d000 },
+	[GCC_PCIE_0_BCR] = { 0x6b000 },
+	[GCC_PCIE_0_PHY_BCR] = { 0x6c01c },
+	[GCC_PCIE_1_BCR] = { 0x8d000 },
+ [gcc_pcie_1_phy_bcr] = { 0x8e01c }, + [gcc_pcie_2_bcr] = { 0x9d000 }, + [gcc_pcie_2_phy_bcr] = { 0xa701c }, + [gcc_pcie_3_bcr] = { 0xa3000 }, + [gcc_pcie_3_phy_bcr] = { 0xa801c }, + [gcc_pcie_phy_bcr] = { 0x6f000 }, + [gcc_pdm_bcr] = { 0x33000 }, + [gcc_prng_bcr] = { 0x34000 }, + [gcc_qspi_1_bcr] = { 0x4a000 }, + [gcc_qspi_bcr] = { 0x24008 }, + [gcc_qupv3_wrapper_0_bcr] = { 0x17000 }, + [gcc_qupv3_wrapper_1_bcr] = { 0x18000 }, + [gcc_qupv3_wrapper_2_bcr] = { 0x1e000 }, + [gcc_qusb2phy_5_bcr] = { 0x12010 }, + [gcc_qusb2phy_mp0_bcr] = { 0x12008 }, + [gcc_qusb2phy_mp1_bcr] = { 0x1200c }, + [gcc_qusb2phy_prim_bcr] = { 0x12000 }, + [gcc_qusb2phy_sec_bcr] = { 0x12004 }, + [gcc_usb3_phy_prim_sp0_bcr] = { 0x50000 }, + [gcc_usb3_phy_prim_sp1_bcr] = { 0x50004 }, + [gcc_usb3_dp_phy_prim_sp0_bcr] = { 0x50010 }, + [gcc_usb3_dp_phy_prim_sp1_bcr] = { 0x50014 }, + [gcc_usb3_phy_sec_bcr] = { 0x50018 }, + [gcc_usb3phy_phy_sec_bcr] = { 0x5001c }, + [gcc_usb3_dp_phy_sec_bcr] = { 0x50020 }, + [gcc_sdcc2_bcr] = { 0x14000 }, + [gcc_sdcc4_bcr] = { 0x16000 }, + [gcc_tsif_bcr] = { 0x36000 }, + [gcc_ufs_card_2_bcr] = { 0xa2000 }, + [gcc_ufs_card_bcr] = { 0x75000 }, + [gcc_ufs_phy_bcr] = { 0x77000 }, + [gcc_usb30_mp_bcr] = { 0xa6000 }, + [gcc_usb30_prim_bcr] = { 0xf000 }, + [gcc_usb30_sec_bcr] = { 0x10000 }, + [gcc_usb_phy_cfg_ahb2phy_bcr] = { 0x6a000 }, + [gcc_video_axic_clk_bcr] = { 0xb02c, 2 }, + [gcc_video_axi0_clk_bcr] = { 0xb024, 2 }, + [gcc_video_axi1_clk_bcr] = { 0xb028, 2 }, +}; + +static struct gdsc *gcc_sc8180x_gdscs[] = { + [emac_gdsc] = &emac_gdsc, + [pcie_0_gdsc] = &pcie_0_gdsc, + [pcie_1_gdsc] = &pcie_1_gdsc, + [pcie_2_gdsc] = &pcie_2_gdsc, + [pcie_3_gdsc] = &pcie_3_gdsc, + [ufs_card_gdsc] = &ufs_card_gdsc, + [ufs_card_2_gdsc] = &ufs_card_2_gdsc, + [ufs_phy_gdsc] = &ufs_phy_gdsc, + [usb30_mp_gdsc] = &usb30_mp_gdsc, + [usb30_prim_gdsc] = &usb30_prim_gdsc, + [usb30_sec_gdsc] = &usb30_sec_gdsc, +}; + +static const struct regmap_config gcc_sc8180x_regmap_config = { + .reg_bits = 
32, + .reg_stride = 4, + .val_bits = 32, + .max_register = 0xc0004, + .fast_io = true, +}; + +static const struct qcom_cc_desc gcc_sc8180x_desc = { + .config = &gcc_sc8180x_regmap_config, + .clks = gcc_sc8180x_clocks, + .num_clks = array_size(gcc_sc8180x_clocks), + .resets = gcc_sc8180x_resets, + .num_resets = array_size(gcc_sc8180x_resets), + .gdscs = gcc_sc8180x_gdscs, + .num_gdscs = array_size(gcc_sc8180x_gdscs), +}; + +static const struct of_device_id gcc_sc8180x_match_table[] = { + { .compatible = "qcom,gcc-sc8180x" }, + { } +}; +module_device_table(of, gcc_sc8180x_match_table); + +static int gcc_sc8180x_probe(struct platform_device *pdev) +{ + struct regmap *regmap; + + regmap = qcom_cc_map(pdev, &gcc_sc8180x_desc); + if (is_err(regmap)) + return ptr_err(regmap); + + /* + * enable the following always-on clocks: + * gcc_video_ahb_clk, gcc_camera_ahb_clk, gcc_disp_ahb_clk, + * gcc_video_xo_clk, gcc_camera_xo_clk, gcc_disp_xo_clk, + * gcc_cpuss_gnoc_clk, gcc_cpuss_dvm_bus_clk, gcc_npu_cfg_ahb_clk and + * gcc_gpu_cfg_ahb_clk + */ + regmap_update_bits(regmap, 0xb004, bit(0), bit(0)); + regmap_update_bits(regmap, 0xb008, bit(0), bit(0)); + regmap_update_bits(regmap, 0xb00c, bit(0), bit(0)); + regmap_update_bits(regmap, 0xb040, bit(0), bit(0)); + regmap_update_bits(regmap, 0xb044, bit(0), bit(0)); + regmap_update_bits(regmap, 0xb048, bit(0), bit(0)); + regmap_update_bits(regmap, 0x48004, bit(0), bit(0)); + regmap_update_bits(regmap, 0x48190, bit(0), bit(0)); + regmap_update_bits(regmap, 0x4d004, bit(0), bit(0)); + regmap_update_bits(regmap, 0x71004, bit(0), bit(0)); + + /* disable the gpll0 active input to npu and gpu via misc registers */ + regmap_update_bits(regmap, 0x4d110, 0x3, 0x3); + regmap_update_bits(regmap, 0x71028, 0x3, 0x3); + + return qcom_cc_really_probe(pdev, &gcc_sc8180x_desc, regmap); +} + +static struct platform_driver gcc_sc8180x_driver = { + .probe = gcc_sc8180x_probe, + .driver = { + .name = "gcc-sc8180x", + .of_match_table = 
gcc_sc8180x_match_table, + }, +}; + +static int __init gcc_sc8180x_init(void) +{ + return platform_driver_register(&gcc_sc8180x_driver); +} +core_initcall(gcc_sc8180x_init); + +static void __exit gcc_sc8180x_exit(void) +{ + platform_driver_unregister(&gcc_sc8180x_driver); +} +module_exit(gcc_sc8180x_exit); + +module_description("qti gcc sc8180x driver"); +module_license("gpl v2");
|
Clock
|
4433594bbe5dcf473b06452dbea19430deb7154c
|
bjorn andersson
|
drivers
|
clk
|
qcom
|
clk: qcom: rpmh: add support for rpmh clocks on sc7280
|
add support for rpmh clocks on sc7280 socs.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add support for rpmh clocks on sc7280
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['qcom', 'rpmh']
|
['c']
| 1
| 23
| 1
|
--- diff --git a/drivers/clk/qcom/clk-rpmh.c b/drivers/clk/qcom/clk-rpmh.c --- a/drivers/clk/qcom/clk-rpmh.c +++ b/drivers/clk/qcom/clk-rpmh.c - * copyright (c) 2018-2020, the linux foundation. all rights reserved. + * copyright (c) 2018-2021, the linux foundation. all rights reserved. +static struct clk_hw *sc7280_rpmh_clocks[] = { + [rpmh_cxo_clk] = &sdm845_bi_tcxo.hw, + [rpmh_cxo_clk_a] = &sdm845_bi_tcxo_ao.hw, + [rpmh_ln_bb_clk2] = &sdm845_ln_bb_clk2.hw, + [rpmh_ln_bb_clk2_a] = &sdm845_ln_bb_clk2_ao.hw, + [rpmh_rf_clk1] = &sdm845_rf_clk1.hw, + [rpmh_rf_clk1_a] = &sdm845_rf_clk1_ao.hw, + [rpmh_rf_clk3] = &sdm845_rf_clk3.hw, + [rpmh_rf_clk3_a] = &sdm845_rf_clk3_ao.hw, + [rpmh_rf_clk4] = &sm8350_rf_clk4.hw, + [rpmh_rf_clk4_a] = &sm8350_rf_clk4_ao.hw, + [rpmh_ipa_clk] = &sdm845_ipa.hw, + [rpmh_pka_clk] = &sm8350_pka.hw, + [rpmh_hwkm_clk] = &sm8350_hwkm.hw, +}; + +static const struct clk_rpmh_desc clk_rpmh_sc7280 = { + .clks = sc7280_rpmh_clocks, + .num_clks = array_size(sc7280_rpmh_clocks), +}; + + { .compatible = "qcom,sc7280-rpmh-clk", .data = &clk_rpmh_sc7280},
|
Clock
|
fff2b9a651621f2979ca12c8206c74e3e07a6e31
|
taniya das
|
drivers
|
clk
|
qcom
|
clk: qcom: rpmhcc: add sc8180x rpmh clocks
|
add clocks provided by rpmh in the qualcomm sc8180x platform.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add sc8180x rpmh clocks
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['qcom', 'rpmhcc']
|
['c']
| 1
| 25
| 0
|
--- diff --git a/drivers/clk/qcom/clk-rpmh.c b/drivers/clk/qcom/clk-rpmh.c --- a/drivers/clk/qcom/clk-rpmh.c +++ b/drivers/clk/qcom/clk-rpmh.c +define_clk_rpmh_vrm(sc8180x, rf_clk1, rf_clk1_ao, "rfclkd1", 1); +define_clk_rpmh_vrm(sc8180x, rf_clk2, rf_clk2_ao, "rfclkd2", 1); +define_clk_rpmh_vrm(sc8180x, rf_clk3, rf_clk3_ao, "rfclkd3", 1); +define_clk_rpmh_vrm(sc8180x, rf_clk4, rf_clk4_ao, "rfclkd4", 1); +static struct clk_hw *sc8180x_rpmh_clocks[] = { + [rpmh_cxo_clk] = &sdm845_bi_tcxo.hw, + [rpmh_cxo_clk_a] = &sdm845_bi_tcxo_ao.hw, + [rpmh_ln_bb_clk2] = &sdm845_ln_bb_clk2.hw, + [rpmh_ln_bb_clk2_a] = &sdm845_ln_bb_clk2_ao.hw, + [rpmh_ln_bb_clk3] = &sdm845_ln_bb_clk3.hw, + [rpmh_ln_bb_clk3_a] = &sdm845_ln_bb_clk3_ao.hw, + [rpmh_rf_clk1] = &sc8180x_rf_clk1.hw, + [rpmh_rf_clk1_a] = &sc8180x_rf_clk1_ao.hw, + [rpmh_rf_clk2] = &sc8180x_rf_clk2.hw, + [rpmh_rf_clk2_a] = &sc8180x_rf_clk2_ao.hw, + [rpmh_rf_clk3] = &sc8180x_rf_clk3.hw, + [rpmh_rf_clk3_a] = &sc8180x_rf_clk3_ao.hw, +}; + +static const struct clk_rpmh_desc clk_rpmh_sc8180x = { + .clks = sc8180x_rpmh_clocks, + .num_clks = array_size(sc8180x_rpmh_clocks), +}; + + { .compatible = "qcom,sc8180x-rpmh-clk", .data = &clk_rpmh_sc8180x},
|
Clock
|
8a1f7fb17569536d7d3a3c9f9c4e02c303c1c1e2
|
bjorn andersson
|
drivers
|
clk
|
qcom
|
clk: renesas: r8a77965: add tmu clocks
|
this patch adds tmu{0,1,2,3,4} clocks.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add tmu clocks
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['renesas', 'r8a77965']
|
['c']
| 1
| 5
| 0
|
--- diff --git a/drivers/clk/renesas/r8a77965-cpg-mssr.c b/drivers/clk/renesas/r8a77965-cpg-mssr.c --- a/drivers/clk/renesas/r8a77965-cpg-mssr.c +++ b/drivers/clk/renesas/r8a77965-cpg-mssr.c + def_mod("tmu4", 121, r8a77965_clk_s0d6), + def_mod("tmu3", 122, r8a77965_clk_s3d2), + def_mod("tmu2", 123, r8a77965_clk_s3d2), + def_mod("tmu1", 124, r8a77965_clk_s3d2), + def_mod("tmu0", 125, r8a77965_clk_cp),
|
Clock
|
e0c0d449346085f0ac71f2adef2808dc9e679fa0
|
niklas söderlund
|
drivers
|
clk
|
renesas
|
clk: renesas: r8a7796: add tmu clocks
|
this patch adds tmu{0,1,2,3,4} clocks.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add tmu clocks
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['renesas', 'r8a7796']
|
['c']
| 1
| 5
| 0
|
--- diff --git a/drivers/clk/renesas/r8a7796-cpg-mssr.c b/drivers/clk/renesas/r8a7796-cpg-mssr.c --- a/drivers/clk/renesas/r8a7796-cpg-mssr.c +++ b/drivers/clk/renesas/r8a7796-cpg-mssr.c + def_mod("tmu4", 121, r8a7796_clk_s0d6), + def_mod("tmu3", 122, r8a7796_clk_s3d2), + def_mod("tmu2", 123, r8a7796_clk_s3d2), + def_mod("tmu1", 124, r8a7796_clk_s3d2), + def_mod("tmu0", 125, r8a7796_clk_cp),
|
Clock
|
a26edd3d3c286c47ad3e554922ab03816d7431fb
|
niklas söderlund
|
drivers
|
clk
|
renesas
|
clk: renesas: r8a77990: add tmu clocks
|
this patch adds tmu{0,1,2,3,4} clocks.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add tmu clocks
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['renesas', 'r8a77990']
|
['c']
| 1
| 5
| 0
|
--- diff --git a/drivers/clk/renesas/r8a77990-cpg-mssr.c b/drivers/clk/renesas/r8a77990-cpg-mssr.c --- a/drivers/clk/renesas/r8a77990-cpg-mssr.c +++ b/drivers/clk/renesas/r8a77990-cpg-mssr.c + def_mod("tmu4", 121, r8a77990_clk_s0d6c), + def_mod("tmu3", 122, r8a77990_clk_s3d2c), + def_mod("tmu2", 123, r8a77990_clk_s3d2c), + def_mod("tmu1", 124, r8a77990_clk_s3d2c), + def_mod("tmu0", 125, r8a77990_clk_cp),
|
Clock
|
0f3a9265941bbcf75f870f7e1ce20e90d735862d
|
niklas söderlund
|
drivers
|
clk
|
renesas
|
clk: renesas: r8a77995: add tmu clocks
|
this patch adds tmu{0,1,2,3,4} clocks.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add tmu clocks
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['renesas', 'r8a77995']
|
['c']
| 1
| 5
| 0
|
--- diff --git a/drivers/clk/renesas/r8a77995-cpg-mssr.c b/drivers/clk/renesas/r8a77995-cpg-mssr.c --- a/drivers/clk/renesas/r8a77995-cpg-mssr.c +++ b/drivers/clk/renesas/r8a77995-cpg-mssr.c + def_mod("tmu4", 121, r8a77995_clk_s1d4c), + def_mod("tmu3", 122, r8a77995_clk_s3d2c), + def_mod("tmu2", 123, r8a77995_clk_s3d2c), + def_mod("tmu1", 124, r8a77995_clk_s3d2c), + def_mod("tmu0", 125, r8a77995_clk_cp),
|
Clock
|
fa7f47972b13d1791494bb5019db8a8951a6fea3
|
niklas söderlund
|
drivers
|
clk
|
renesas
|
arm64: dts: renesas: r8a779a0: add & update scif nodes
|
this is the result of multiple patches taken from the bsp, combined, rebased, and properly sorted. scif0 gets dma properties; the other scifs are entirely new.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add & update (h)scif nodes
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['renesas', 'v3u']
|
['dtsi']
| 1
| 50
| 0
|
--- diff --git a/arch/arm64/boot/dts/renesas/r8a779a0.dtsi b/arch/arm64/boot/dts/renesas/r8a779a0.dtsi --- a/arch/arm64/boot/dts/renesas/r8a779a0.dtsi +++ b/arch/arm64/boot/dts/renesas/r8a779a0.dtsi + dmas = <&dmac1 0x51>, <&dmac1 0x50>; + dma-names = "tx", "rx"; + scif1: serial@e6e68000 { + compatible = "renesas,scif-r8a779a0", + "renesas,rcar-gen3-scif", "renesas,scif"; + reg = <0 0xe6e68000 0 64>; + interrupts = <gic_spi 252 irq_type_level_high>; + clocks = <&cpg cpg_mod 703>, + <&cpg cpg_core r8a779a0_clk_s1d2>, + <&scif_clk>; + clock-names = "fck", "brg_int", "scif_clk"; + dmas = <&dmac1 0x53>, <&dmac1 0x52>; + dma-names = "tx", "rx"; + power-domains = <&sysc r8a779a0_pd_always_on>; + resets = <&cpg 703>; + status = "disabled"; + }; + + scif3: serial@e6c50000 { + compatible = "renesas,scif-r8a779a0", + "renesas,rcar-gen3-scif", "renesas,scif"; + reg = <0 0xe6c50000 0 64>; + interrupts = <gic_spi 253 irq_type_level_high>; + clocks = <&cpg cpg_mod 704>, + <&cpg cpg_core r8a779a0_clk_s1d2>, + <&scif_clk>; + clock-names = "fck", "brg_int", "scif_clk"; + dmas = <&dmac1 0x57>, <&dmac1 0x56>; + dma-names = "tx", "rx"; + power-domains = <&sysc r8a779a0_pd_always_on>; + resets = <&cpg 704>; + status = "disabled"; + }; + + scif4: serial@e6c40000 { + compatible = "renesas,scif-r8a779a0", + "renesas,rcar-gen3-scif", "renesas,scif"; + reg = <0 0xe6c40000 0 64>; + interrupts = <gic_spi 254 irq_type_level_high>; + clocks = <&cpg cpg_mod 705>, + <&cpg cpg_core r8a779a0_clk_s1d2>, + <&scif_clk>; + clock-names = "fck", "brg_int", "scif_clk"; + dmas = <&dmac1 0x59>, <&dmac1 0x58>; + dma-names = "tx", "rx"; + power-domains = <&sysc r8a779a0_pd_always_on>; + resets = <&cpg 705>; + status = "disabled"; + }; +
|
Clock
|
bff4e5dac9992ba5a6b2d318570b993f4c616b5c
|
wolfram sang
|
arch
|
arm64
|
boot, dts, renesas
|
arm64: dts: renesas: falcon: complete scif0 nodes
|
scif0 has been enabled by the firmware, so it worked already. still, add the proper nodes to make it work in any case.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add & update (h)scif nodes
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['renesas', 'v3u']
|
['dtsi']
| 1
| 21
| 0
|
--- diff --git a/arch/arm64/boot/dts/renesas/r8a779a0-falcon-cpu.dtsi b/arch/arm64/boot/dts/renesas/r8a779a0-falcon-cpu.dtsi --- a/arch/arm64/boot/dts/renesas/r8a779a0-falcon-cpu.dtsi +++ b/arch/arm64/boot/dts/renesas/r8a779a0-falcon-cpu.dtsi + pinctrl-0 = <&scif_clk_pins>; + pinctrl-names = "default"; + + + scif0_pins: scif0 { + groups = "scif0_data", "scif0_ctrl"; + function = "scif0"; + }; + + scif_clk_pins: scif_clk { + groups = "scif_clk"; + function = "scif_clk"; + }; + pinctrl-0 = <&scif0_pins>; + pinctrl-names = "default"; + + uart-has-rtscts; + +&scif_clk { + clock-frequency = <24000000>; +};
|
Clock
|
9e921faa305369e5cbe4fd8f3212a1ad6aa85c79
|
wolfram sang
|
arch
|
arm64
|
boot, dts, renesas
|
dt-bindings: serial: renesas,hscif: add r8a779a0 support
|
reviewed-by: geert uytterhoeven <geert+renesas@glider.be> acked-by: rob herring <robh@kernel.org> signed-off-by: wolfram sang <wsa+renesas@sang-engineering.com> link: https://lore.kernel.org/r/20201228112715.14947-4-wsa+renesas@sang-engineering.com signed-off-by: greg kroah-hartman <gregkh@linuxfoundation.org>
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add & update (h)scif nodes
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['renesas', 'v3u']
|
['yaml']
| 1
| 1
| 0
|
--- diff --git a/documentation/devicetree/bindings/serial/renesas,hscif.yaml b/documentation/devicetree/bindings/serial/renesas,hscif.yaml --- a/documentation/devicetree/bindings/serial/renesas,hscif.yaml +++ b/documentation/devicetree/bindings/serial/renesas,hscif.yaml - renesas,hscif-r8a77980 # r-car v3h - renesas,hscif-r8a77990 # r-car e3 - renesas,hscif-r8a77995 # r-car d3 + - renesas,hscif-r8a779a0 # r-car v3u - const: renesas,rcar-gen3-hscif # r-car gen3 and rz/g2 - const: renesas,hscif # generic hscif compatible uart
|
Clock
|
f754ed71b79cca5b07e76aaf28ce3c8776ab1f7f
|
wolfram sang
|
documentation
|
devicetree
|
bindings, serial
|
clk: renesas: r8a779a0: add hscif support
|
signed-off-by: wolfram sang <wsa+renesas@sang-engineering.com> link: https://lore.kernel.org/r/20201228112715.14947-5-wsa+renesas@sang-engineering.com signed-off-by: geert uytterhoeven <geert+renesas@glider.be>
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for a thermal power management framework to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add & update (h)scif nodes
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['renesas', 'v3u']
|
['c']
| 1
| 4
| 0
|
--- diff --git a/drivers/clk/renesas/r8a779a0-cpg-mssr.c b/drivers/clk/renesas/r8a779a0-cpg-mssr.c --- a/drivers/clk/renesas/r8a779a0-cpg-mssr.c +++ b/drivers/clk/renesas/r8a779a0-cpg-mssr.c + def_mod("hscif0", 514, r8a779a0_clk_s1d2), + def_mod("hscif1", 515, r8a779a0_clk_s1d2), + def_mod("hscif2", 516, r8a779a0_clk_s1d2), + def_mod("hscif3", 517, r8a779a0_clk_s1d2),
|
Clock
|
2e16d0df87baa84485031b88b1b149badbc68810
|
wolfram sang
|
drivers
|
clk
|
renesas
|
arm64: dts: renesas: r8a779a0: add hscif support
|
define the generic parts of the hscif[0-3] device nodes.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for a thermal power management framework to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add & update (h)scif nodes
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['renesas', 'v3u']
|
['dtsi']
| 1
| 64
| 0
|
--- diff --git a/arch/arm64/boot/dts/renesas/r8a779a0.dtsi b/arch/arm64/boot/dts/renesas/r8a779a0.dtsi --- a/arch/arm64/boot/dts/renesas/r8a779a0.dtsi +++ b/arch/arm64/boot/dts/renesas/r8a779a0.dtsi + hscif0: serial@e6540000 { + compatible = "renesas,hscif-r8a779a0", + "renesas,rcar-gen3-hscif", "renesas,hscif"; + reg = <0 0xe6540000 0 0x60>; + interrupts = <gic_spi 28 irq_type_level_high>; + clocks = <&cpg cpg_mod 514>, + <&cpg cpg_core r8a779a0_clk_s1d2>, + <&scif_clk>; + clock-names = "fck", "brg_int", "scif_clk"; + dmas = <&dmac1 0x31>, <&dmac1 0x30>; + dma-names = "tx", "rx"; + power-domains = <&sysc r8a779a0_pd_always_on>; + resets = <&cpg 514>; + status = "disabled"; + }; + + hscif1: serial@e6550000 { + compatible = "renesas,hscif-r8a779a0", + "renesas,rcar-gen3-hscif", "renesas,hscif"; + reg = <0 0xe6550000 0 0x60>; + interrupts = <gic_spi 29 irq_type_level_high>; + clocks = <&cpg cpg_mod 515>, + <&cpg cpg_core r8a779a0_clk_s1d2>, + <&scif_clk>; + clock-names = "fck", "brg_int", "scif_clk"; + dmas = <&dmac1 0x33>, <&dmac1 0x32>; + dma-names = "tx", "rx"; + power-domains = <&sysc r8a779a0_pd_always_on>; + resets = <&cpg 515>; + status = "disabled"; + }; + + hscif2: serial@e6560000 { + compatible = "renesas,hscif-r8a779a0", + "renesas,rcar-gen3-hscif", "renesas,hscif"; + reg = <0 0xe6560000 0 0x60>; + interrupts = <gic_spi 30 irq_type_level_high>; + clocks = <&cpg cpg_mod 516>, + <&cpg cpg_core r8a779a0_clk_s1d2>, + <&scif_clk>; + clock-names = "fck", "brg_int", "scif_clk"; + dmas = <&dmac1 0x35>, <&dmac1 0x34>; + dma-names = "tx", "rx"; + power-domains = <&sysc r8a779a0_pd_always_on>; + resets = <&cpg 516>; + status = "disabled"; + }; + + hscif3: serial@e66a0000 { + compatible = "renesas,hscif-r8a779a0", + "renesas,rcar-gen3-hscif", "renesas,hscif"; + reg = <0 0xe66a0000 0 0x60>; + interrupts = <gic_spi 31 irq_type_level_high>; + clocks = <&cpg cpg_mod 517>, + <&cpg cpg_core r8a779a0_clk_s1d2>, + <&scif_clk>; + clock-names = "fck", "brg_int", "scif_clk"; + 
dmas = <&dmac1 0x37>, <&dmac1 0x36>; + dma-names = "tx", "rx"; + power-domains = <&sysc r8a779a0_pd_always_on>; + resets = <&cpg 517>; + status = "disabled"; + }; +
|
Clock
|
088e6b23050487cae1bd7f70b439a453689b6f53
|
linh phung
|
arch
|
arm64
|
boot, dts, renesas
|
dt-bindings: mmc: renesas,sdhi: add r8a779a0 support
|
signed-off-by: wolfram sang <wsa+renesas@sang-engineering.com> acked-by: rob herring <robh@kernel.org> reviewed-by: geert uytterhoeven <geert+renesas@glider.be> link: https://lore.kernel.org/r/20201227174202.40834-2-wsa+renesas@sang-engineering.com signed-off-by: ulf hansson <ulf.hansson@linaro.org>
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for a thermal power management framework to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add sdhi/mmc support
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['renesas', 'v3u']
|
['yaml']
| 1
| 1
| 0
|
--- diff --git a/documentation/devicetree/bindings/mmc/renesas,sdhi.yaml b/documentation/devicetree/bindings/mmc/renesas,sdhi.yaml --- a/documentation/devicetree/bindings/mmc/renesas,sdhi.yaml +++ b/documentation/devicetree/bindings/mmc/renesas,sdhi.yaml - renesas,sdhi-r8a77980 # r-car v3h - renesas,sdhi-r8a77990 # r-car e3 - renesas,sdhi-r8a77995 # r-car d3 + - renesas,sdhi-r8a779a0 # r-car v3u - const: renesas,rcar-gen3-sdhi # r-car gen3 or rz/g2
|
Clock
|
a5ca4c32121297e2306438ef0b2c08f98bafa3f3
|
wolfram sang
|
documentation
|
devicetree
|
bindings, mmc
|
clk: renesas: rcar-gen3: remove cpg_quirks access when registering sd clock
|
we want to reuse sd clock handling for other socs and, thus, need to generalize it. so, don't access cpg_quirks in that realm.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for a thermal power management framework to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add sdhi/mmc support
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['renesas', 'v3u']
|
['c']
| 1
| 10
| 9
|
--- diff --git a/drivers/clk/renesas/rcar-gen3-cpg.c b/drivers/clk/renesas/rcar-gen3-cpg.c --- a/drivers/clk/renesas/rcar-gen3-cpg.c +++ b/drivers/clk/renesas/rcar-gen3-cpg.c -static u32 cpg_quirks __initdata; - -#define pll_errata bit(0) /* missing pll0/2/4 post-divider */ -#define rckcr_cksel bit(1) /* manual rclk parent selection */ -#define sd_skip_first bit(2) /* skip first clock in sd table */ - - struct raw_notifier_head *notifiers) + struct raw_notifier_head *notifiers, bool skip_first) - if (cpg_quirks & sd_skip_first) { + if (skip_first) { +static u32 cpg_quirks __initdata; + +#define pll_errata bit(0) /* missing pll0/2/4 post-divider */ +#define rckcr_cksel bit(1) /* manual rclk parent selection */ +#define sd_skip_first bit(2) /* skip first clock in sd table */ + - __clk_get_name(parent), notifiers); + __clk_get_name(parent), notifiers, + cpg_quirks & sd_skip_first);
|
Clock
|
97af391a6fdca679aa9863b019137332167b3fa6
|
wolfram sang
|
drivers
|
clk
|
renesas
|
clk: renesas: rcar-gen3: factor out cpg library
|
r-car v3u has a cpg different enough to not be a generic gen3 cpg but similar enough to reuse code. introduce a new cpg library, factor out the sd clock handling and hook it up to the generic gen3 cpg driver so we have an equal state. v3u will then make use of it in the next patch.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for a thermal power management framework to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add sdhi/mmc support
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['renesas', 'v3u']
|
['h', 'kconfig', 'c', 'makefile']
| 5
| 309
| 251
|
--- diff --git a/drivers/clk/renesas/kconfig b/drivers/clk/renesas/kconfig --- a/drivers/clk/renesas/kconfig +++ b/drivers/clk/renesas/kconfig +config clk_rcar_cpg_lib + bool "cpg/mssr library functions" if compile_test + + select clk_rcar_cpg_lib diff --git a/drivers/clk/renesas/makefile b/drivers/clk/renesas/makefile --- a/drivers/clk/renesas/makefile +++ b/drivers/clk/renesas/makefile +obj-$(config_clk_rcar_cpg_lib) += rcar-cpg-lib.o diff --git a/drivers/clk/renesas/rcar-cpg-lib.c b/drivers/clk/renesas/rcar-cpg-lib.c --- /dev/null +++ b/drivers/clk/renesas/rcar-cpg-lib.c +// spdx-license-identifier: gpl-2.0 +/* + * r-car gen3 clock pulse generator library + * + * copyright (c) 2015-2018 glider bvba + * copyright (c) 2019 renesas electronics corp. + * + * based on clk-rcar-gen3.c + * + * copyright (c) 2015 renesas electronics corp. + */ + +#include <linux/clk.h> +#include <linux/clk-provider.h> +#include <linux/device.h> +#include <linux/err.h> +#include <linux/init.h> +#include <linux/io.h> +#include <linux/pm.h> +#include <linux/slab.h> +#include <linux/sys_soc.h> + +#include "rcar-cpg-lib.h" + +spinlock_t cpg_lock; + +void cpg_reg_modify(void __iomem *reg, u32 clear, u32 set) +{ + unsigned long flags; + u32 val; + + spin_lock_irqsave(&cpg_lock, flags); + val = readl(reg); + val &= ~clear; + val |= set; + writel(val, reg); + spin_unlock_irqrestore(&cpg_lock, flags); +}; + +static int cpg_simple_notifier_call(struct notifier_block *nb, + unsigned long action, void *data) +{ + struct cpg_simple_notifier *csn = + container_of(nb, struct cpg_simple_notifier, nb); + + switch (action) { + case pm_event_suspend: + csn->saved = readl(csn->reg); + return notify_ok; + + case pm_event_resume: + writel(csn->saved, csn->reg); + return notify_ok; + } + return notify_done; +} + +void cpg_simple_notifier_register(struct raw_notifier_head *notifiers, + struct cpg_simple_notifier *csn) +{ + csn->nb.notifier_call = cpg_simple_notifier_call; + 
raw_notifier_chain_register(notifiers, &csn->nb); +} + +/* + * sdn clock + */ +#define cpg_sd_stp_hck bit(9) +#define cpg_sd_stp_ck bit(8) + +#define cpg_sd_stp_mask (cpg_sd_stp_hck | cpg_sd_stp_ck) +#define cpg_sd_fc_mask (0x7 << 2 | 0x3 << 0) + +#define cpg_sd_div_table_data(stp_hck, sd_srcfc, sd_fc, sd_div) \ +{ \ + .val = ((stp_hck) ? cpg_sd_stp_hck : 0) | \ + ((sd_srcfc) << 2) | \ + ((sd_fc) << 0), \ + .div = (sd_div), \ +} + +struct sd_div_table { + u32 val; + unsigned int div; +}; + +struct sd_clock { + struct clk_hw hw; + const struct sd_div_table *div_table; + struct cpg_simple_notifier csn; + unsigned int div_num; + unsigned int cur_div_idx; +}; + +/* sdn divider + * sd_srcfc sd_fc div + * stp_hck (div) (div) = sd_srcfc x sd_fc + *--------------------------------------------------------- + * 0 0 (1) 1 (4) 4 : sdr104 / hs200 / hs400 (8 tap) + * 0 1 (2) 1 (4) 8 : sdr50 + * 1 2 (4) 1 (4) 16 : hs / sdr25 + * 1 3 (8) 1 (4) 32 : ns / sdr12 + * 1 4 (16) 1 (4) 64 + * 0 0 (1) 0 (2) 2 + * 0 1 (2) 0 (2) 4 : sdr104 / hs200 / hs400 (4 tap) + * 1 2 (4) 0 (2) 8 + * 1 3 (8) 0 (2) 16 + * 1 4 (16) 0 (2) 32 + * + * note: there is a quirk option to ignore the first row of the dividers + * table when searching for suitable settings. this is because hs400 on + * early es versions of h3 and m3-w requires a specific setting to work. 
+ */ +static const struct sd_div_table cpg_sd_div_table[] = { +/* cpg_sd_div_table_data(stp_hck, sd_srcfc, sd_fc, sd_div) */ + cpg_sd_div_table_data(0, 0, 1, 4), + cpg_sd_div_table_data(0, 1, 1, 8), + cpg_sd_div_table_data(1, 2, 1, 16), + cpg_sd_div_table_data(1, 3, 1, 32), + cpg_sd_div_table_data(1, 4, 1, 64), + cpg_sd_div_table_data(0, 0, 0, 2), + cpg_sd_div_table_data(0, 1, 0, 4), + cpg_sd_div_table_data(1, 2, 0, 8), + cpg_sd_div_table_data(1, 3, 0, 16), + cpg_sd_div_table_data(1, 4, 0, 32), +}; + +#define to_sd_clock(_hw) container_of(_hw, struct sd_clock, hw) + +static int cpg_sd_clock_enable(struct clk_hw *hw) +{ + struct sd_clock *clock = to_sd_clock(hw); + + cpg_reg_modify(clock->csn.reg, cpg_sd_stp_mask, + clock->div_table[clock->cur_div_idx].val & + cpg_sd_stp_mask); + + return 0; +} + +static void cpg_sd_clock_disable(struct clk_hw *hw) +{ + struct sd_clock *clock = to_sd_clock(hw); + + cpg_reg_modify(clock->csn.reg, 0, cpg_sd_stp_mask); +} + +static int cpg_sd_clock_is_enabled(struct clk_hw *hw) +{ + struct sd_clock *clock = to_sd_clock(hw); + + return !(readl(clock->csn.reg) & cpg_sd_stp_mask); +} + +static unsigned long cpg_sd_clock_recalc_rate(struct clk_hw *hw, + unsigned long parent_rate) +{ + struct sd_clock *clock = to_sd_clock(hw); + + return div_round_closest(parent_rate, + clock->div_table[clock->cur_div_idx].div); +} + +static int cpg_sd_clock_determine_rate(struct clk_hw *hw, + struct clk_rate_request *req) +{ + unsigned long best_rate = ulong_max, diff_min = ulong_max; + struct sd_clock *clock = to_sd_clock(hw); + unsigned long calc_rate, diff; + unsigned int i; + + for (i = 0; i < clock->div_num; i++) { + calc_rate = div_round_closest(req->best_parent_rate, + clock->div_table[i].div); + if (calc_rate < req->min_rate || calc_rate > req->max_rate) + continue; + + diff = calc_rate > req->rate ? 
calc_rate - req->rate + : req->rate - calc_rate; + if (diff < diff_min) { + best_rate = calc_rate; + diff_min = diff; + } + } + + if (best_rate == ulong_max) + return -einval; + + req->rate = best_rate; + return 0; +} + +static int cpg_sd_clock_set_rate(struct clk_hw *hw, unsigned long rate, + unsigned long parent_rate) +{ + struct sd_clock *clock = to_sd_clock(hw); + unsigned int i; + + for (i = 0; i < clock->div_num; i++) + if (rate == div_round_closest(parent_rate, + clock->div_table[i].div)) + break; + + if (i >= clock->div_num) + return -einval; + + clock->cur_div_idx = i; + + cpg_reg_modify(clock->csn.reg, cpg_sd_stp_mask | cpg_sd_fc_mask, + clock->div_table[i].val & + (cpg_sd_stp_mask | cpg_sd_fc_mask)); + + return 0; +} + +static const struct clk_ops cpg_sd_clock_ops = { + .enable = cpg_sd_clock_enable, + .disable = cpg_sd_clock_disable, + .is_enabled = cpg_sd_clock_is_enabled, + .recalc_rate = cpg_sd_clock_recalc_rate, + .determine_rate = cpg_sd_clock_determine_rate, + .set_rate = cpg_sd_clock_set_rate, +}; + +struct clk * __init cpg_sd_clk_register(const char *name, + void __iomem *base, unsigned int offset, const char *parent_name, + struct raw_notifier_head *notifiers, bool skip_first) +{ + struct clk_init_data init; + struct sd_clock *clock; + struct clk *clk; + u32 val; + + clock = kzalloc(sizeof(*clock), gfp_kernel); + if (!clock) + return err_ptr(-enomem); + + init.name = name; + init.ops = &cpg_sd_clock_ops; + init.flags = clk_set_rate_parent; + init.parent_names = &parent_name; + init.num_parents = 1; + + clock->csn.reg = base + offset; + clock->hw.init = &init; + clock->div_table = cpg_sd_div_table; + clock->div_num = array_size(cpg_sd_div_table); + + if (skip_first) { + clock->div_table++; + clock->div_num--; + } + + val = readl(clock->csn.reg) & ~cpg_sd_fc_mask; + val |= cpg_sd_stp_mask | (clock->div_table[0].val & cpg_sd_fc_mask); + writel(val, clock->csn.reg); + + clk = clk_register(null, &clock->hw); + if (is_err(clk)) + goto free_clock; + + 
cpg_simple_notifier_register(notifiers, &clock->csn); + return clk; + +free_clock: + kfree(clock); + return clk; +} + + diff --git a/drivers/clk/renesas/rcar-cpg-lib.h b/drivers/clk/renesas/rcar-cpg-lib.h --- /dev/null +++ b/drivers/clk/renesas/rcar-cpg-lib.h +/* spdx-license-identifier: gpl-2.0 */ +/* + * r-car gen3 clock pulse generator library + * + * copyright (c) 2015-2018 glider bvba + * copyright (c) 2019 renesas electronics corp. + * + * based on clk-rcar-gen3.c + * + * copyright (c) 2015 renesas electronics corp. + */ + +#ifndef __clk_renesas_rcar_cpg_lib_h__ +#define __clk_renesas_rcar_cpg_lib_h__ + +extern spinlock_t cpg_lock; + +struct cpg_simple_notifier { + struct notifier_block nb; + void __iomem *reg; + u32 saved; +}; + +void cpg_simple_notifier_register(struct raw_notifier_head *notifiers, + struct cpg_simple_notifier *csn); + +void cpg_reg_modify(void __iomem *reg, u32 clear, u32 set); + +struct clk * __init cpg_sd_clk_register(const char *name, + void __iomem *base, unsigned int offset, const char *parent_name, + struct raw_notifier_head *notifiers, bool skip_first); + +#endif diff --git a/drivers/clk/renesas/rcar-gen3-cpg.c b/drivers/clk/renesas/rcar-gen3-cpg.c --- a/drivers/clk/renesas/rcar-gen3-cpg.c +++ b/drivers/clk/renesas/rcar-gen3-cpg.c +#include "rcar-cpg-lib.h" -static spinlock_t cpg_lock; - -static void cpg_reg_modify(void __iomem *reg, u32 clear, u32 set) -{ - unsigned long flags; - u32 val; - - spin_lock_irqsave(&cpg_lock, flags); - val = readl(reg); - val &= ~clear; - val |= set; - writel(val, reg); - spin_unlock_irqrestore(&cpg_lock, flags); -}; - -struct cpg_simple_notifier { - struct notifier_block nb; - void __iomem *reg; - u32 saved; -}; - -static int cpg_simple_notifier_call(struct notifier_block *nb, - unsigned long action, void *data) -{ - struct cpg_simple_notifier *csn = - container_of(nb, struct cpg_simple_notifier, nb); - - switch (action) { - case pm_event_suspend: - csn->saved = readl(csn->reg); - return notify_ok; - - 
case pm_event_resume: - writel(csn->saved, csn->reg); - return notify_ok; - } - return notify_done; -} - -static void cpg_simple_notifier_register(struct raw_notifier_head *notifiers, - struct cpg_simple_notifier *csn) -{ - csn->nb.notifier_call = cpg_simple_notifier_call; - raw_notifier_chain_register(notifiers, &csn->nb); -} - -/* - * sdn clock - */ -#define cpg_sd_stp_hck bit(9) -#define cpg_sd_stp_ck bit(8) - -#define cpg_sd_stp_mask (cpg_sd_stp_hck | cpg_sd_stp_ck) -#define cpg_sd_fc_mask (0x7 << 2 | 0x3 << 0) - -#define cpg_sd_div_table_data(stp_hck, sd_srcfc, sd_fc, sd_div) \ -{ \ - .val = ((stp_hck) ? cpg_sd_stp_hck : 0) | \ - ((sd_srcfc) << 2) | \ - ((sd_fc) << 0), \ - .div = (sd_div), \ -} - -struct sd_div_table { - u32 val; - unsigned int div; -}; - -struct sd_clock { - struct clk_hw hw; - const struct sd_div_table *div_table; - struct cpg_simple_notifier csn; - unsigned int div_num; - unsigned int cur_div_idx; -}; - -/* sdn divider - * sd_srcfc sd_fc div - * stp_hck (div) (div) = sd_srcfc x sd_fc - *--------------------------------------------------------- - * 0 0 (1) 1 (4) 4 : sdr104 / hs200 / hs400 (8 tap) - * 0 1 (2) 1 (4) 8 : sdr50 - * 1 2 (4) 1 (4) 16 : hs / sdr25 - * 1 3 (8) 1 (4) 32 : ns / sdr12 - * 1 4 (16) 1 (4) 64 - * 0 0 (1) 0 (2) 2 - * 0 1 (2) 0 (2) 4 : sdr104 / hs200 / hs400 (4 tap) - * 1 2 (4) 0 (2) 8 - * 1 3 (8) 0 (2) 16 - * 1 4 (16) 0 (2) 32 - * - * note: there is a quirk option to ignore the first row of the dividers - * table when searching for suitable settings. this is because hs400 on - * early es versions of h3 and m3-w requires a specific setting to work. 
- */ -static const struct sd_div_table cpg_sd_div_table[] = { -/* cpg_sd_div_table_data(stp_hck, sd_srcfc, sd_fc, sd_div) */ - cpg_sd_div_table_data(0, 0, 1, 4), - cpg_sd_div_table_data(0, 1, 1, 8), - cpg_sd_div_table_data(1, 2, 1, 16), - cpg_sd_div_table_data(1, 3, 1, 32), - cpg_sd_div_table_data(1, 4, 1, 64), - cpg_sd_div_table_data(0, 0, 0, 2), - cpg_sd_div_table_data(0, 1, 0, 4), - cpg_sd_div_table_data(1, 2, 0, 8), - cpg_sd_div_table_data(1, 3, 0, 16), - cpg_sd_div_table_data(1, 4, 0, 32), -}; - -#define to_sd_clock(_hw) container_of(_hw, struct sd_clock, hw) - -static int cpg_sd_clock_enable(struct clk_hw *hw) -{ - struct sd_clock *clock = to_sd_clock(hw); - - cpg_reg_modify(clock->csn.reg, cpg_sd_stp_mask, - clock->div_table[clock->cur_div_idx].val & - cpg_sd_stp_mask); - - return 0; -} - -static void cpg_sd_clock_disable(struct clk_hw *hw) -{ - struct sd_clock *clock = to_sd_clock(hw); - - cpg_reg_modify(clock->csn.reg, 0, cpg_sd_stp_mask); -} - -static int cpg_sd_clock_is_enabled(struct clk_hw *hw) -{ - struct sd_clock *clock = to_sd_clock(hw); - - return !(readl(clock->csn.reg) & cpg_sd_stp_mask); -} - -static unsigned long cpg_sd_clock_recalc_rate(struct clk_hw *hw, - unsigned long parent_rate) -{ - struct sd_clock *clock = to_sd_clock(hw); - - return div_round_closest(parent_rate, - clock->div_table[clock->cur_div_idx].div); -} - -static int cpg_sd_clock_determine_rate(struct clk_hw *hw, - struct clk_rate_request *req) -{ - unsigned long best_rate = ulong_max, diff_min = ulong_max; - struct sd_clock *clock = to_sd_clock(hw); - unsigned long calc_rate, diff; - unsigned int i; - - for (i = 0; i < clock->div_num; i++) { - calc_rate = div_round_closest(req->best_parent_rate, - clock->div_table[i].div); - if (calc_rate < req->min_rate || calc_rate > req->max_rate) - continue; - - diff = calc_rate > req->rate ? 
calc_rate - req->rate - : req->rate - calc_rate; - if (diff < diff_min) { - best_rate = calc_rate; - diff_min = diff; - } - } - - if (best_rate == ulong_max) - return -einval; - - req->rate = best_rate; - return 0; -} - -static int cpg_sd_clock_set_rate(struct clk_hw *hw, unsigned long rate, - unsigned long parent_rate) -{ - struct sd_clock *clock = to_sd_clock(hw); - unsigned int i; - - for (i = 0; i < clock->div_num; i++) - if (rate == div_round_closest(parent_rate, - clock->div_table[i].div)) - break; - - if (i >= clock->div_num) - return -einval; - - clock->cur_div_idx = i; - - cpg_reg_modify(clock->csn.reg, cpg_sd_stp_mask | cpg_sd_fc_mask, - clock->div_table[i].val & - (cpg_sd_stp_mask | cpg_sd_fc_mask)); - - return 0; -} - -static const struct clk_ops cpg_sd_clock_ops = { - .enable = cpg_sd_clock_enable, - .disable = cpg_sd_clock_disable, - .is_enabled = cpg_sd_clock_is_enabled, - .recalc_rate = cpg_sd_clock_recalc_rate, - .determine_rate = cpg_sd_clock_determine_rate, - .set_rate = cpg_sd_clock_set_rate, -}; - -static struct clk * __init cpg_sd_clk_register(const char *name, - void __iomem *base, unsigned int offset, const char *parent_name, - struct raw_notifier_head *notifiers, bool skip_first) -{ - struct clk_init_data init; - struct sd_clock *clock; - struct clk *clk; - u32 val; - - clock = kzalloc(sizeof(*clock), gfp_kernel); - if (!clock) - return err_ptr(-enomem); - - init.name = name; - init.ops = &cpg_sd_clock_ops; - init.flags = clk_set_rate_parent; - init.parent_names = &parent_name; - init.num_parents = 1; - - clock->csn.reg = base + offset; - clock->hw.init = &init; - clock->div_table = cpg_sd_div_table; - clock->div_num = array_size(cpg_sd_div_table); - - if (skip_first) { - clock->div_table++; - clock->div_num--; - } - - val = readl(clock->csn.reg) & ~cpg_sd_fc_mask; - val |= cpg_sd_stp_mask | (clock->div_table[0].val & cpg_sd_fc_mask); - writel(val, clock->csn.reg); - - clk = clk_register(null, &clock->hw); - if (is_err(clk)) - goto 
free_clock; - - cpg_simple_notifier_register(notifiers, &clock->csn); - return clk; - -free_clock: - kfree(clock); - return clk; -} -
|
Clock
|
8bb67d87346a36e174de4d7e5680155f627fd30d
|
wolfram sang
|
drivers
|
clk
|
renesas
|
clk: renesas: r8a779a0: add sdhi support
|
we use the shiny new cpg library for that.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for a thermal power management framework to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add sdhi/mmc support
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['renesas', 'v3u']
|
['kconfig', 'c']
| 2
| 16
| 2
|
--- diff --git a/drivers/clk/renesas/kconfig b/drivers/clk/renesas/kconfig --- a/drivers/clk/renesas/kconfig +++ b/drivers/clk/renesas/kconfig + select clk_rcar_cpg_lib diff --git a/drivers/clk/renesas/r8a779a0-cpg-mssr.c b/drivers/clk/renesas/r8a779a0-cpg-mssr.c --- a/drivers/clk/renesas/r8a779a0-cpg-mssr.c +++ b/drivers/clk/renesas/r8a779a0-cpg-mssr.c +#include "rcar-cpg-lib.h" + clk_type_r8a779a0_sd, +#define def_sd(_name, _id, _parent, _offset) \ + def_base(_name, _id, clk_type_r8a779a0_sd, _parent, .offset = _offset) + + def_fixed(".sdsrc", clk_sdsrc, clk_pll5_div4, 1, 1), + def_sd("sd0", r8a779a0_clk_sd0, clk_sdsrc, 0x870), + + def_mod("sdhi0", 706, r8a779a0_clk_sd0), -static spinlock_t cpg_lock; - + case clk_type_r8a779a0_sd: + return cpg_sd_clk_register(core->name, base, core->offset, + __clk_get_name(parent), notifiers, + false); + break; +
|
Clock
|
792501727c2abf568f694c9c79b0da628c9dc4bb
|
wolfram sang
|
drivers
|
clk
|
renesas
|
arm64: dts: renesas: r8a779a0: add mmc node
|
add a device node for mmc.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add sdhi/mmc support
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['renesas', 'v3u']
|
['dtsi']
| 1
| 12
| 0
|
--- diff --git a/arch/arm64/boot/dts/renesas/r8a779a0.dtsi b/arch/arm64/boot/dts/renesas/r8a779a0.dtsi --- a/arch/arm64/boot/dts/renesas/r8a779a0.dtsi +++ b/arch/arm64/boot/dts/renesas/r8a779a0.dtsi + mmc0: mmc@ee140000 { + compatible = "renesas,sdhi-r8a779a0", + "renesas,rcar-gen3-sdhi"; + reg = <0 0xee140000 0 0x2000>; + interrupts = <gic_spi 236 irq_type_level_high>; + clocks = <&cpg cpg_mod 706>; + power-domains = <&sysc r8a779a0_pd_always_on>; + resets = <&cpg 706>; + max-frequency = <200000000>; + status = "disabled"; + }; +
|
Clock
|
6b159d547d462f4e47f1ae913f0c05e7071183ec
|
takeshi saito
|
arch
|
arm64
|
boot, dts, renesas
|
arm64: dts: renesas: falcon: enable mmc
|
enable mmc on the falcon board.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add sdhi/mmc support
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['renesas', 'v3u']
|
['dtsi']
| 1
| 41
| 0
|
--- diff --git a/arch/arm64/boot/dts/renesas/r8a779a0-falcon-cpu.dtsi b/arch/arm64/boot/dts/renesas/r8a779a0-falcon-cpu.dtsi --- a/arch/arm64/boot/dts/renesas/r8a779a0-falcon-cpu.dtsi +++ b/arch/arm64/boot/dts/renesas/r8a779a0-falcon-cpu.dtsi + + reg_1p8v: regulator-1p8v { + compatible = "regulator-fixed"; + regulator-name = "fixed-1.8v"; + regulator-min-microvolt = <1800000>; + regulator-max-microvolt = <1800000>; + regulator-boot-on; + regulator-always-on; + }; + + reg_3p3v: regulator-3p3v { + compatible = "regulator-fixed"; + regulator-name = "fixed-3.3v"; + regulator-min-microvolt = <3300000>; + regulator-max-microvolt = <3300000>; + regulator-boot-on; + regulator-always-on; + }; +&mmc0 { + pinctrl-0 = <&mmc_pins>; + pinctrl-1 = <&mmc_pins>; + pinctrl-names = "default", "state_uhs"; + + vmmc-supply = <®_3p3v>; + vqmmc-supply = <®_1p8v>; + mmc-hs200-1_8v; + mmc-hs400-1_8v; + bus-width = <8>; + no-sd; + no-sdio; + non-removable; + full-pwr-cycle-in-suspend; + status = "okay"; +}; + + mmc_pins: mmc { + groups = "mmc_data8", "mmc_ctrl", "mmc_ds"; + function = "mmc"; + power-source = <1800>; + }; +
|
Clock
|
ee33cd69344ff04f3b512eb9d74c16c412b07115
|
takeshi saito
|
arch
|
arm64
|
boot, dts, renesas
|
dt-bindings: watchdog: renesas,wdt: add r8a779a0 (v3u) support
|
signed-off-by: wolfram sang <wsa+renesas@sang-engineering.com> reviewed-by: geert uytterhoeven <geert+renesas@glider.be> acked-by: rob herring <robh@kernel.org> link: https://lore.kernel.org/r/20201218173731.12839-2-wsa+renesas@sang-engineering.com signed-off-by: guenter roeck <linux@roeck-us.net> signed-off-by: wim van sebroeck <wim@linux-watchdog.org>
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add support for rwdt
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['renesas', 'v3u']
|
['yaml']
| 1
| 1
| 0
|
--- diff --git a/documentation/devicetree/bindings/watchdog/renesas,wdt.yaml b/documentation/devicetree/bindings/watchdog/renesas,wdt.yaml --- a/documentation/devicetree/bindings/watchdog/renesas,wdt.yaml +++ b/documentation/devicetree/bindings/watchdog/renesas,wdt.yaml - renesas,r8a77980-wdt # r-car v3h - renesas,r8a77990-wdt # r-car e3 - renesas,r8a77995-wdt # r-car d3 + - renesas,r8a779a0-wdt # r-car v3u - const: renesas,rcar-gen3-wdt # r-car gen3 and rz/g2
|
Clock
|
1ee5981da617190c41f7a019542ed4a85041ddbd
|
wolfram sang
|
documentation
|
devicetree
|
bindings, watchdog
|
clk: renesas: r8a779a0: add rwdt clocks
|
and introduce critical clocks, too, because rwdt is one.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add support for rwdt
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['renesas', 'v3u']
|
['c']
| 1
| 9
| 0
|
--- diff --git a/drivers/clk/renesas/r8a779a0-cpg-mssr.c b/drivers/clk/renesas/r8a779a0-cpg-mssr.c --- a/drivers/clk/renesas/r8a779a0-cpg-mssr.c +++ b/drivers/clk/renesas/r8a779a0-cpg-mssr.c + def_mod("rwdt", 907, r8a779a0_clk_r), +static const unsigned int r8a779a0_crit_mod_clks[] __initconst = { + mod_clk_id(907), /* rwdt */ +}; + + /* critical module clocks */ + .crit_mod_clks = r8a779a0_crit_mod_clks, + .num_crit_mod_clks = array_size(r8a779a0_crit_mod_clks), +
|
Clock
|
ab2ccacd73867c6be285ba4f3c1a3e10b96e9a1d
|
wolfram sang
|
drivers
|
clk
|
renesas
|
arm64: dts: renesas: r8a779a0: add rwdt node
|
add a device node for the watchdog timer (wdt) controller on the r8a779a0 soc.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add support for rwdt
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['renesas', 'v3u']
|
['dtsi']
| 1
| 10
| 0
|
--- diff --git a/arch/arm64/boot/dts/renesas/r8a779a0.dtsi b/arch/arm64/boot/dts/renesas/r8a779a0.dtsi --- a/arch/arm64/boot/dts/renesas/r8a779a0.dtsi +++ b/arch/arm64/boot/dts/renesas/r8a779a0.dtsi + rwdt: watchdog@e6020000 { + compatible = "renesas,r8a779a0-wdt", + "renesas,rcar-gen3-wdt"; + reg = <0 0xe6020000 0 0x0c>; + clocks = <&cpg cpg_mod 907>; + power-domains = <&sysc r8a779a0_pd_always_on>; + resets = <&cpg 907>; + status = "disabled"; + }; +
|
Clock
|
f4b30c0a03a9edb3e70cbd7abe65fc6c3033fb20
|
hoang vo
|
arch
|
arm64
|
boot, dts, renesas
|
arm64: dts: renesas: falcon: enable watchdog timer
|
enable the watchdog on the falcon board.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add support for rwdt
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['renesas', 'v3u']
|
['dts']
| 1
| 5
| 0
|
--- diff --git a/arch/arm64/boot/dts/renesas/r8a779a0-falcon.dts b/arch/arm64/boot/dts/renesas/r8a779a0-falcon.dts --- a/arch/arm64/boot/dts/renesas/r8a779a0-falcon.dts +++ b/arch/arm64/boot/dts/renesas/r8a779a0-falcon.dts + +&rwdt { + timeout-sec = <60>; + status = "okay"; +};
|
Clock
|
d207dc500bbcf8c6e1cbad375b08904f984f9602
|
hoang vo
|
arch
|
arm64
|
boot, dts, renesas
|
clk: renesas: r8a779a0: add fcpvd clock support
|
add clocks for the fcp for vsp-d (fcpvd) modules.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add fcp and vsp support
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['renesas', 'r8a779a0']
|
['c']
| 1
| 2
| 0
|
--- diff --git a/drivers/clk/renesas/r8a779a0-cpg-mssr.c b/drivers/clk/renesas/r8a779a0-cpg-mssr.c --- a/drivers/clk/renesas/r8a779a0-cpg-mssr.c +++ b/drivers/clk/renesas/r8a779a0-cpg-mssr.c + def_mod("fcpvd0", 508, r8a779a0_clk_s3d1), + def_mod("fcpvd1", 509, r8a779a0_clk_s3d1),
|
Clock
|
0177b5090effab70762c774b860df8d298e62ff4
|
kieran bingham
|
drivers
|
clk
|
renesas
|
clk: renesas: r8a779a0: add vspd clock support
|
add clocks for the vspd modules on the v3u.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add fcp and vsp support
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['renesas', 'r8a779a0']
|
['c']
| 1
| 2
| 0
|
--- diff --git a/drivers/clk/renesas/r8a779a0-cpg-mssr.c b/drivers/clk/renesas/r8a779a0-cpg-mssr.c --- a/drivers/clk/renesas/r8a779a0-cpg-mssr.c +++ b/drivers/clk/renesas/r8a779a0-cpg-mssr.c + def_mod("vspd0", 830, r8a779a0_clk_s3d1), + def_mod("vspd1", 831, r8a779a0_clk_s3d1),
|
Clock
|
ed447e7d60de9ee28763a9e0d215267db7498639
|
kieran bingham
|
drivers
|
clk
|
renesas
|
clk: renesas: r8a779a0: add vspx clock support
|
add clocks for the vspx.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add fcp and vsp support
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['renesas', 'r8a779a0']
|
['c']
| 1
| 4
| 0
|
--- diff --git a/drivers/clk/renesas/r8a779a0-cpg-mssr.c b/drivers/clk/renesas/r8a779a0-cpg-mssr.c --- a/drivers/clk/renesas/r8a779a0-cpg-mssr.c +++ b/drivers/clk/renesas/r8a779a0-cpg-mssr.c + def_mod("vspx0", 1028, r8a779a0_clk_s1d1), + def_mod("vspx1", 1029, r8a779a0_clk_s1d1), + def_mod("vspx2", 1030, r8a779a0_clk_s1d1), + def_mod("vspx3", 1031, r8a779a0_clk_s1d1),
|
Clock
|
57be2dc8d4cf4791993bd3e4caf586f3adfb7f6d
|
kieran bingham
|
drivers
|
clk
|
renesas
|
clk: socfpga: agilex: add clock driver for easic n5x platform
|
add support for intel's easic n5x platform. the clock manager driver for the n5x is very similar to the agilex platform's, so we can re-use most of the agilex clock driver.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add clock driver for easic n5x platform
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['socfpga', 'agilex']
|
['h', 'c']
| 4
| 238
| 3
|
--- diff --git a/drivers/clk/socfpga/clk-agilex.c b/drivers/clk/socfpga/clk-agilex.c --- a/drivers/clk/socfpga/clk-agilex.c +++ b/drivers/clk/socfpga/clk-agilex.c +static const struct n5x_perip_c_clock n5x_main_perip_c_clks[] = { + { agilex_main_pll_c0_clk, "main_pll_c0", "main_pll", null, 1, 0, 0x54, 0}, + { agilex_main_pll_c1_clk, "main_pll_c1", "main_pll", null, 1, 0, 0x54, 8}, + { agilex_main_pll_c2_clk, "main_pll_c2", "main_pll", null, 1, 0, 0x54, 16}, + { agilex_main_pll_c3_clk, "main_pll_c3", "main_pll", null, 1, 0, 0x54, 24}, + { agilex_periph_pll_c0_clk, "peri_pll_c0", "periph_pll", null, 1, 0, 0xa8, 0}, + { agilex_periph_pll_c1_clk, "peri_pll_c1", "periph_pll", null, 1, 0, 0xa8, 8}, + { agilex_periph_pll_c2_clk, "peri_pll_c2", "periph_pll", null, 1, 0, 0xa8, 16}, + { agilex_periph_pll_c3_clk, "peri_pll_c3", "periph_pll", null, 1, 0, 0xa8, 24}, +}; + +static int n5x_clk_register_c_perip(const struct n5x_perip_c_clock *clks, + int nums, struct stratix10_clock_data *data) +{ + struct clk *clk; + void __iomem *base = data->base; + int i; + + for (i = 0; i < nums; i++) { + clk = n5x_register_periph(&clks[i], base); + if (is_err(clk)) { + pr_err("%s: failed to register clock %s ", + __func__, clks[i].name); + continue; + } + data->clk_data.clks[clks[i].id] = clk; + } + return 0; +} + +static int n5x_clk_register_pll(const struct stratix10_pll_clock *clks, + int nums, struct stratix10_clock_data *data) +{ + struct clk *clk; + void __iomem *base = data->base; + int i; + + for (i = 0; i < nums; i++) { + clk = n5x_register_pll(&clks[i], base); + if (is_err(clk)) { + pr_err("%s: failed to register clock %s ", + __func__, clks[i].name); + continue; + } + data->clk_data.clks[clks[i].id] = clk; + } + + return 0; +} + -static int agilex_clkmgr_probe(struct platform_device *pdev) +static int agilex_clkmgr_init(struct platform_device *pdev) +static int n5x_clkmgr_init(struct platform_device *pdev) +{ + struct stratix10_clock_data *clk_data; + + clk_data = 
__socfpga_agilex_clk_init(pdev, agilex_num_clks); + if (is_err(clk_data)) + return ptr_err(clk_data); + + n5x_clk_register_pll(agilex_pll_clks, array_size(agilex_pll_clks), clk_data); + + n5x_clk_register_c_perip(n5x_main_perip_c_clks, + array_size(n5x_main_perip_c_clks), clk_data); + + agilex_clk_register_cnt_perip(agilex_main_perip_cnt_clks, + array_size(agilex_main_perip_cnt_clks), + clk_data); + + agilex_clk_register_gate(agilex_gate_clks, array_size(agilex_gate_clks), + clk_data); + return 0; +} + +static int agilex_clkmgr_probe(struct platform_device *pdev) +{ + int (*probe_func)(struct platform_device *init_func); + + probe_func = of_device_get_match_data(&pdev->dev); + if (!probe_func) + return -enodev; + return probe_func(pdev); +} + - .data = agilex_clkmgr_probe }, + .data = agilex_clkmgr_init }, + { .compatible = "intel,easic-n5x-clkmgr", + .data = n5x_clkmgr_init }, diff --git a/drivers/clk/socfpga/clk-periph-s10.c b/drivers/clk/socfpga/clk-periph-s10.c --- a/drivers/clk/socfpga/clk-periph-s10.c +++ b/drivers/clk/socfpga/clk-periph-s10.c +static unsigned long n5x_clk_peri_c_clk_recalc_rate(struct clk_hw *hwclk, + unsigned long parent_rate) +{ + struct socfpga_periph_clk *socfpgaclk = to_periph_clk(hwclk); + unsigned long div; + unsigned long shift = socfpgaclk->shift; + u32 val; + + val = readl(socfpgaclk->hw.reg); + val &= (0x1f << shift); + div = (val >> shift) + 1; + + return parent_rate / div; +} + +static const struct clk_ops n5x_peri_c_clk_ops = { + .recalc_rate = n5x_clk_peri_c_clk_recalc_rate, + .get_parent = clk_periclk_get_parent, +}; + +struct clk *n5x_register_periph(const struct n5x_perip_c_clock *clks, + void __iomem *regbase) +{ + struct clk *clk; + struct socfpga_periph_clk *periph_clk; + struct clk_init_data init; + const char *name = clks->name; + const char *parent_name = clks->parent_name; + + periph_clk = kzalloc(sizeof(*periph_clk), gfp_kernel); + if (warn_on(!periph_clk)) + return null; + + periph_clk->hw.reg = regbase + 
clks->offset; + periph_clk->shift = clks->shift; + + init.name = name; + init.ops = &n5x_peri_c_clk_ops; + init.flags = clks->flags; + + init.num_parents = clks->num_parents; + init.parent_names = parent_name ? &parent_name : null; + + periph_clk->hw.hw.init = &init; + + clk = clk_register(null, &periph_clk->hw.hw); + if (warn_on(is_err(clk))) { + kfree(periph_clk); + return null; + } + return clk; +} + diff --git a/drivers/clk/socfpga/clk-pll-s10.c b/drivers/clk/socfpga/clk-pll-s10.c --- a/drivers/clk/socfpga/clk-pll-s10.c +++ b/drivers/clk/socfpga/clk-pll-s10.c +#define socfpga_n5x_plldiv_fdiv_mask genmask(16, 8) +#define socfpga_n5x_plldiv_fdiv_shift 8 +#define socfpga_n5x_plldiv_rdiv_mask genmask(5, 0) +#define socfpga_n5x_plldiv_qdiv_mask genmask(26, 24) +#define socfpga_n5x_plldiv_qdiv_shift 24 + +static unsigned long n5x_clk_pll_recalc_rate(struct clk_hw *hwclk, + unsigned long parent_rate) +{ + struct socfpga_pll *socfpgaclk = to_socfpga_clk(hwclk); + unsigned long fdiv, reg, rdiv, qdiv; + u32 power = 1; + + /* read vco1 reg for numerator and denominator */ + reg = readl(socfpgaclk->hw.reg + 0x8); + fdiv = (reg & socfpga_n5x_plldiv_fdiv_mask) >> socfpga_n5x_plldiv_fdiv_shift; + rdiv = (reg & socfpga_n5x_plldiv_rdiv_mask); + qdiv = (reg & socfpga_n5x_plldiv_qdiv_mask) >> socfpga_n5x_plldiv_qdiv_shift; + + while (qdiv) { + power *= 2; + qdiv--; + } + + return ((parent_rate * 2 * (fdiv + 1)) / ((rdiv + 1) * power)); +} + +static int n5x_clk_pll_prepare(struct clk_hw *hwclk) +{ + struct socfpga_pll *socfpgaclk = to_socfpga_clk(hwclk); + u32 reg; + + /* bring pll out of reset */ + reg = readl(socfpgaclk->hw.reg + 0x4); + reg |= socfpga_pll_reset_mask; + writel(reg, socfpgaclk->hw.reg + 0x4); + + return 0; +} + +static const struct clk_ops n5x_clk_pll_ops = { + .recalc_rate = n5x_clk_pll_recalc_rate, + .get_parent = clk_pll_get_parent, + .prepare = n5x_clk_pll_prepare, +}; + + +struct clk *n5x_register_pll(const struct stratix10_pll_clock *clks, + void __iomem 
*reg) +{ + struct clk *clk; + struct socfpga_pll *pll_clk; + struct clk_init_data init; + const char *name = clks->name; + + pll_clk = kzalloc(sizeof(*pll_clk), gfp_kernel); + if (warn_on(!pll_clk)) + return null; + + pll_clk->hw.reg = reg + clks->offset; + + if (streq(name, socfpga_boot_clk)) + init.ops = &clk_boot_ops; + else + init.ops = &n5x_clk_pll_ops; + + init.name = name; + init.flags = clks->flags; + + init.num_parents = clks->num_parents; + init.parent_names = null; + init.parent_data = clks->parent_data; + pll_clk->hw.hw.init = &init; + + pll_clk->hw.bit_idx = socfpga_pll_power; + + clk = clk_register(null, &pll_clk->hw.hw); + if (warn_on(is_err(clk))) { + kfree(pll_clk); + return null; + } + return clk; +} diff --git a/drivers/clk/socfpga/stratix10-clk.h b/drivers/clk/socfpga/stratix10-clk.h --- a/drivers/clk/socfpga/stratix10-clk.h +++ b/drivers/clk/socfpga/stratix10-clk.h +struct n5x_perip_c_clock { + unsigned int id; + const char *name; + const char *parent_name; + const char *const *parent_names; + u8 num_parents; + unsigned long flags; + unsigned long offset; + unsigned long shift; +}; + +struct clk *n5x_register_pll(const struct stratix10_pll_clock *clks, + void __iomem *reg); - void __iomem *); + void __iomem *reg); +struct clk *n5x_register_periph(const struct n5x_perip_c_clock *clks, + void __iomem *reg);
|
Clock
|
a0f9819cbe995245477a09d4ca168a24f8e76583
|
dinh nguyen
|
drivers
|
clk
|
socfpga
|
clk: sunxi-ng: add support for the allwinner h616 ccu
|
while the clocks are fairly similar to the h6's, many differ in small details, so a separate clock driver is warranted.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add support for the allwinner h616 ccu
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['sunxi-ng']
|
['h', 'kconfig', 'c', 'makefile']
| 6
| 1,397
| 0
|
--- diff --git a/drivers/clk/sunxi-ng/kconfig b/drivers/clk/sunxi-ng/kconfig --- a/drivers/clk/sunxi-ng/kconfig +++ b/drivers/clk/sunxi-ng/kconfig +config sun50i_h616_ccu + bool "support for the allwinner h616 ccu" + default arm64 && arch_sunxi + depends on (arm64 && arch_sunxi) || compile_test + diff --git a/drivers/clk/sunxi-ng/makefile b/drivers/clk/sunxi-ng/makefile --- a/drivers/clk/sunxi-ng/makefile +++ b/drivers/clk/sunxi-ng/makefile +obj-$(config_sun50i_h616_ccu) += ccu-sun50i-h616.o diff --git a/drivers/clk/sunxi-ng/ccu-sun50i-h616.c b/drivers/clk/sunxi-ng/ccu-sun50i-h616.c --- /dev/null +++ b/drivers/clk/sunxi-ng/ccu-sun50i-h616.c +// spdx-license-identifier: gpl-2.0 +/* + * copyright (c) 2020 arm ltd. + * based on the h6 ccu driver, which is: + * copyright (c) 2017 icenowy zheng <icenowy@aosc.io> + */ + +#include <linux/clk-provider.h> +#include <linux/io.h> +#include <linux/of_address.h> +#include <linux/platform_device.h> + +#include "ccu_common.h" +#include "ccu_reset.h" + +#include "ccu_div.h" +#include "ccu_gate.h" +#include "ccu_mp.h" +#include "ccu_mult.h" +#include "ccu_nk.h" +#include "ccu_nkm.h" +#include "ccu_nkmp.h" +#include "ccu_nm.h" + +#include "ccu-sun50i-h616.h" + +/* + * the cpu pll is actually np clock, with p being /1, /2 or /4. however + * p should only be used for output frequencies lower than 288 mhz. + * + * for now we can just model it as a multiplier clock, and force p to /1. + * + * the m factor is present in the register's description, but not in the + * frequency formula, and it's documented as "m is only used for backdoor + * testing", so it's not modelled and then force to 0. + */ +#define sun50i_h616_pll_cpux_reg 0x000 +static struct ccu_mult pll_cpux_clk = { + .enable = bit(31), + .lock = bit(28), + .mult = _sunxi_ccu_mult_min(8, 8, 12), + .common = { + .reg = 0x000, + .hw.init = clk_hw_init("pll-cpux", "osc24m", + &ccu_mult_ops, + clk_set_rate_ungate), + }, +}; + +/* some plls are input * n / div1 / p. 
model them as nkmp with no k */ +#define sun50i_h616_pll_ddr0_reg 0x010 +static struct ccu_nkmp pll_ddr0_clk = { + .enable = bit(31), + .lock = bit(28), + .n = _sunxi_ccu_mult_min(8, 8, 12), + .m = _sunxi_ccu_div(1, 1), /* input divider */ + .p = _sunxi_ccu_div(0, 1), /* output divider */ + .common = { + .reg = 0x010, + .hw.init = clk_hw_init("pll-ddr0", "osc24m", + &ccu_nkmp_ops, + clk_set_rate_ungate), + }, +}; + +#define sun50i_h616_pll_ddr1_reg 0x018 +static struct ccu_nkmp pll_ddr1_clk = { + .enable = bit(31), + .lock = bit(28), + .n = _sunxi_ccu_mult_min(8, 8, 12), + .m = _sunxi_ccu_div(1, 1), /* input divider */ + .p = _sunxi_ccu_div(0, 1), /* output divider */ + .common = { + .reg = 0x018, + .hw.init = clk_hw_init("pll-ddr1", "osc24m", + &ccu_nkmp_ops, + clk_set_rate_ungate), + }, +}; + +#define sun50i_h616_pll_periph0_reg 0x020 +static struct ccu_nkmp pll_periph0_clk = { + .enable = bit(31), + .lock = bit(28), + .n = _sunxi_ccu_mult_min(8, 8, 12), + .m = _sunxi_ccu_div(1, 1), /* input divider */ + .p = _sunxi_ccu_div(0, 1), /* output divider */ + .fixed_post_div = 2, + .common = { + .reg = 0x020, + .features = ccu_feature_fixed_postdiv, + .hw.init = clk_hw_init("pll-periph0", "osc24m", + &ccu_nkmp_ops, + clk_set_rate_ungate), + }, +}; + +#define sun50i_h616_pll_periph1_reg 0x028 +static struct ccu_nkmp pll_periph1_clk = { + .enable = bit(31), + .lock = bit(28), + .n = _sunxi_ccu_mult_min(8, 8, 12), + .m = _sunxi_ccu_div(1, 1), /* input divider */ + .p = _sunxi_ccu_div(0, 1), /* output divider */ + .fixed_post_div = 2, + .common = { + .reg = 0x028, + .features = ccu_feature_fixed_postdiv, + .hw.init = clk_hw_init("pll-periph1", "osc24m", + &ccu_nkmp_ops, + clk_set_rate_ungate), + }, +}; + +#define sun50i_h616_pll_gpu_reg 0x030 +static struct ccu_nkmp pll_gpu_clk = { + .enable = bit(31), + .lock = bit(28), + .n = _sunxi_ccu_mult_min(8, 8, 12), + .m = _sunxi_ccu_div(1, 1), /* input divider */ + .p = _sunxi_ccu_div(0, 1), /* output divider */ + .common = { + 
.reg = 0x030, + .hw.init = clk_hw_init("pll-gpu", "osc24m", + &ccu_nkmp_ops, + clk_set_rate_ungate), + }, +}; + +/* + * for video plls, the output divider is described as "used for testing" + * in the user manual. so it's not modelled and forced to 0. + */ +#define sun50i_h616_pll_video0_reg 0x040 +static struct ccu_nm pll_video0_clk = { + .enable = bit(31), + .lock = bit(28), + .n = _sunxi_ccu_mult_min(8, 8, 12), + .m = _sunxi_ccu_div(1, 1), /* input divider */ + .fixed_post_div = 4, + .min_rate = 288000000, + .max_rate = 2400000000ul, + .common = { + .reg = 0x040, + .features = ccu_feature_fixed_postdiv, + .hw.init = clk_hw_init("pll-video0", "osc24m", + &ccu_nm_ops, + clk_set_rate_ungate), + }, +}; + +#define sun50i_h616_pll_video1_reg 0x048 +static struct ccu_nm pll_video1_clk = { + .enable = bit(31), + .lock = bit(28), + .n = _sunxi_ccu_mult_min(8, 8, 12), + .m = _sunxi_ccu_div(1, 1), /* input divider */ + .fixed_post_div = 4, + .min_rate = 288000000, + .max_rate = 2400000000ul, + .common = { + .reg = 0x048, + .features = ccu_feature_fixed_postdiv, + .hw.init = clk_hw_init("pll-video1", "osc24m", + &ccu_nm_ops, + clk_set_rate_ungate), + }, +}; + +#define sun50i_h616_pll_video2_reg 0x050 +static struct ccu_nm pll_video2_clk = { + .enable = bit(31), + .lock = bit(28), + .n = _sunxi_ccu_mult_min(8, 8, 12), + .m = _sunxi_ccu_div(1, 1), /* input divider */ + .fixed_post_div = 4, + .min_rate = 288000000, + .max_rate = 2400000000ul, + .common = { + .reg = 0x050, + .features = ccu_feature_fixed_postdiv, + .hw.init = clk_hw_init("pll-video2", "osc24m", + &ccu_nm_ops, + clk_set_rate_ungate), + }, +}; + +#define sun50i_h616_pll_ve_reg 0x058 +static struct ccu_nkmp pll_ve_clk = { + .enable = bit(31), + .lock = bit(28), + .n = _sunxi_ccu_mult_min(8, 8, 12), + .m = _sunxi_ccu_div(1, 1), /* input divider */ + .p = _sunxi_ccu_div(0, 1), /* output divider */ + .common = { + .reg = 0x058, + .hw.init = clk_hw_init("pll-ve", "osc24m", + &ccu_nkmp_ops, + clk_set_rate_ungate), + }, 
+}; + +#define sun50i_h616_pll_de_reg 0x060 +static struct ccu_nkmp pll_de_clk = { + .enable = bit(31), + .lock = bit(28), + .n = _sunxi_ccu_mult_min(8, 8, 12), + .m = _sunxi_ccu_div(1, 1), /* input divider */ + .p = _sunxi_ccu_div(0, 1), /* output divider */ + .common = { + .reg = 0x060, + .hw.init = clk_hw_init("pll-de", "osc24m", + &ccu_nkmp_ops, + clk_set_rate_ungate), + }, +}; + +/* + * todo: determine sdm settings for the audio pll. the manual suggests + * pll_factor_n=16, pll_post_div_p=2, output_div=2, pattern=0xe000c49b + * for 24.576 mhz, and pll_factor_n=22, pll_post_div_p=3, output_div=2, + * pattern=0xe001288c for 22.5792 mhz. + * this clashes with our fixed pll_post_div_p. + */ +#define sun50i_h616_pll_audio_reg 0x078 +static struct ccu_nm pll_audio_hs_clk = { + .enable = bit(31), + .lock = bit(28), + .n = _sunxi_ccu_mult_min(8, 8, 12), + .m = _sunxi_ccu_div(1, 1), /* input divider */ + .common = { + .reg = 0x078, + .hw.init = clk_hw_init("pll-audio-hs", "osc24m", + &ccu_nm_ops, + clk_set_rate_ungate), + }, +}; + +static const char * const cpux_parents[] = { "osc24m", "osc32k", + "iosc", "pll-cpux", "pll-periph0" }; +static sunxi_ccu_mux(cpux_clk, "cpux", cpux_parents, + 0x500, 24, 3, clk_set_rate_parent | clk_is_critical); +static sunxi_ccu_m(axi_clk, "axi", "cpux", 0x500, 0, 2, 0); +static sunxi_ccu_m(cpux_apb_clk, "cpux-apb", "cpux", 0x500, 8, 2, 0); + +static const char * const psi_ahb1_ahb2_parents[] = { "osc24m", "osc32k", + "iosc", "pll-periph0" }; +static sunxi_ccu_mp_with_mux(psi_ahb1_ahb2_clk, "psi-ahb1-ahb2", + psi_ahb1_ahb2_parents, + 0x510, + 0, 2, /* m */ + 8, 2, /* p */ + 24, 2, /* mux */ + 0); + +static const char * const ahb3_apb1_apb2_parents[] = { "osc24m", "osc32k", + "psi-ahb1-ahb2", + "pll-periph0" }; +static sunxi_ccu_mp_with_mux(ahb3_clk, "ahb3", ahb3_apb1_apb2_parents, 0x51c, + 0, 2, /* m */ + 8, 2, /* p */ + 24, 2, /* mux */ + 0); + +static sunxi_ccu_mp_with_mux(apb1_clk, "apb1", ahb3_apb1_apb2_parents, 0x520, + 0, 2, /* m */ 
+ 8, 2, /* p */ + 24, 2, /* mux */ + 0); + +static sunxi_ccu_mp_with_mux(apb2_clk, "apb2", ahb3_apb1_apb2_parents, 0x524, + 0, 2, /* m */ + 8, 2, /* p */ + 24, 2, /* mux */ + 0); + +static const char * const mbus_parents[] = { "osc24m", "pll-periph0-2x", + "pll-ddr0", "pll-ddr1" }; +static sunxi_ccu_m_with_mux_gate(mbus_clk, "mbus", mbus_parents, 0x540, + 0, 3, /* m */ + 24, 2, /* mux */ + bit(31), /* gate */ + clk_is_critical); + +static const char * const de_parents[] = { "pll-de", "pll-periph0-2x" }; +static sunxi_ccu_m_with_mux_gate(de_clk, "de", de_parents, 0x600, + 0, 4, /* m */ + 24, 1, /* mux */ + bit(31), /* gate */ + clk_set_rate_parent); + +static sunxi_ccu_gate(bus_de_clk, "bus-de", "psi-ahb1-ahb2", + 0x60c, bit(0), 0); + +static sunxi_ccu_m_with_mux_gate(deinterlace_clk, "deinterlace", + de_parents, + 0x620, + 0, 4, /* m */ + 24, 1, /* mux */ + bit(31), /* gate */ + 0); + +static sunxi_ccu_gate(bus_deinterlace_clk, "bus-deinterlace", "psi-ahb1-ahb2", + 0x62c, bit(0), 0); + +static sunxi_ccu_m_with_mux_gate(g2d_clk, "g2d", de_parents, 0x630, + 0, 4, /* m */ + 24, 1, /* mux */ + bit(31), /* gate */ + 0); + +static sunxi_ccu_gate(bus_g2d_clk, "bus-g2d", "psi-ahb1-ahb2", + 0x63c, bit(0), 0); + +static const char * const gpu0_parents[] = { "pll-gpu", "gpu1" }; +static sunxi_ccu_m_with_mux_gate(gpu0_clk, "gpu0", gpu0_parents, 0x670, + 0, 2, /* m */ + 24, 1, /* mux */ + bit(31), /* gate */ + clk_set_rate_parent); +static sunxi_ccu_m_with_gate(gpu1_clk, "gpu1", "pll-periph0-2x", 0x674, + 0, 2, /* m */ + bit(31),/* gate */ + 0); + +static sunxi_ccu_gate(bus_gpu_clk, "bus-gpu", "psi-ahb1-ahb2", + 0x67c, bit(0), 0); + +static const char * const ce_parents[] = { "osc24m", "pll-periph0-2x" }; +static sunxi_ccu_mp_with_mux_gate(ce_clk, "ce", ce_parents, 0x680, + 0, 4, /* m */ + 8, 2, /* n */ + 24, 1, /* mux */ + bit(31),/* gate */ + 0); + +static sunxi_ccu_gate(bus_ce_clk, "bus-ce", "psi-ahb1-ahb2", + 0x68c, bit(0), 0); + +static const char * const ve_parents[] = { 
"pll-ve" }; +static sunxi_ccu_m_with_mux_gate(ve_clk, "ve", ve_parents, 0x690, + 0, 3, /* m */ + 24, 1, /* mux */ + bit(31), /* gate */ + clk_set_rate_parent); + +static sunxi_ccu_gate(bus_ve_clk, "bus-ve", "psi-ahb1-ahb2", + 0x69c, bit(0), 0); + +static sunxi_ccu_gate(bus_dma_clk, "bus-dma", "psi-ahb1-ahb2", + 0x70c, bit(0), 0); + +static sunxi_ccu_gate(bus_hstimer_clk, "bus-hstimer", "psi-ahb1-ahb2", + 0x73c, bit(0), 0); + +static sunxi_ccu_gate(avs_clk, "avs", "osc24m", 0x740, bit(31), 0); + +static sunxi_ccu_gate(bus_dbg_clk, "bus-dbg", "psi-ahb1-ahb2", + 0x78c, bit(0), 0); + +static sunxi_ccu_gate(bus_psi_clk, "bus-psi", "psi-ahb1-ahb2", + 0x79c, bit(0), 0); + +static sunxi_ccu_gate(bus_pwm_clk, "bus-pwm", "apb1", 0x7ac, bit(0), 0); + +static sunxi_ccu_gate(bus_iommu_clk, "bus-iommu", "apb1", 0x7bc, bit(0), 0); + +static const char * const dram_parents[] = { "pll-ddr0", "pll-ddr1" }; +static struct ccu_div dram_clk = { + .div = _sunxi_ccu_div(0, 2), + .mux = _sunxi_ccu_mux(24, 2), + .common = { + .reg = 0x800, + .hw.init = clk_hw_init_parents("dram", + dram_parents, + &ccu_div_ops, + clk_is_critical), + }, +}; + +static sunxi_ccu_gate(mbus_dma_clk, "mbus-dma", "mbus", + 0x804, bit(0), 0); +static sunxi_ccu_gate(mbus_ve_clk, "mbus-ve", "mbus", + 0x804, bit(1), 0); +static sunxi_ccu_gate(mbus_ce_clk, "mbus-ce", "mbus", + 0x804, bit(2), 0); +static sunxi_ccu_gate(mbus_ts_clk, "mbus-ts", "mbus", + 0x804, bit(3), 0); +static sunxi_ccu_gate(mbus_nand_clk, "mbus-nand", "mbus", + 0x804, bit(5), 0); +static sunxi_ccu_gate(mbus_g2d_clk, "mbus-g2d", "mbus", + 0x804, bit(10), 0); + +static sunxi_ccu_gate(bus_dram_clk, "bus-dram", "psi-ahb1-ahb2", + 0x80c, bit(0), clk_is_critical); + +static const char * const nand_spi_parents[] = { "osc24m", "pll-periph0", + "pll-periph1", "pll-periph0-2x", + "pll-periph1-2x" }; +static sunxi_ccu_mp_with_mux_gate(nand0_clk, "nand0", nand_spi_parents, 0x810, + 0, 4, /* m */ + 8, 2, /* n */ + 24, 3, /* mux */ + bit(31),/* gate */ + 0); + 
+static sunxi_ccu_mp_with_mux_gate(nand1_clk, "nand1", nand_spi_parents, 0x814, + 0, 4, /* m */ + 8, 2, /* n */ + 24, 3, /* mux */ + bit(31),/* gate */ + 0); + +static sunxi_ccu_gate(bus_nand_clk, "bus-nand", "ahb3", 0x82c, bit(0), 0); + +static const char * const mmc_parents[] = { "osc24m", "pll-periph0-2x", + "pll-periph1-2x" }; +static sunxi_ccu_mp_with_mux_gate_postdiv(mmc0_clk, "mmc0", mmc_parents, 0x830, + 0, 4, /* m */ + 8, 2, /* n */ + 24, 2, /* mux */ + bit(31), /* gate */ + 2, /* post-div */ + 0); + +static sunxi_ccu_mp_with_mux_gate_postdiv(mmc1_clk, "mmc1", mmc_parents, 0x834, + 0, 4, /* m */ + 8, 2, /* n */ + 24, 2, /* mux */ + bit(31), /* gate */ + 2, /* post-div */ + 0); + +static sunxi_ccu_mp_with_mux_gate_postdiv(mmc2_clk, "mmc2", mmc_parents, 0x838, + 0, 4, /* m */ + 8, 2, /* n */ + 24, 2, /* mux */ + bit(31), /* gate */ + 2, /* post-div */ + 0); + +static sunxi_ccu_gate(bus_mmc0_clk, "bus-mmc0", "ahb3", 0x84c, bit(0), 0); +static sunxi_ccu_gate(bus_mmc1_clk, "bus-mmc1", "ahb3", 0x84c, bit(1), 0); +static sunxi_ccu_gate(bus_mmc2_clk, "bus-mmc2", "ahb3", 0x84c, bit(2), 0); + +static sunxi_ccu_gate(bus_uart0_clk, "bus-uart0", "apb2", 0x90c, bit(0), 0); +static sunxi_ccu_gate(bus_uart1_clk, "bus-uart1", "apb2", 0x90c, bit(1), 0); +static sunxi_ccu_gate(bus_uart2_clk, "bus-uart2", "apb2", 0x90c, bit(2), 0); +static sunxi_ccu_gate(bus_uart3_clk, "bus-uart3", "apb2", 0x90c, bit(3), 0); +static sunxi_ccu_gate(bus_uart4_clk, "bus-uart4", "apb2", 0x90c, bit(4), 0); +static sunxi_ccu_gate(bus_uart5_clk, "bus-uart5", "apb2", 0x90c, bit(5), 0); + +static sunxi_ccu_gate(bus_i2c0_clk, "bus-i2c0", "apb2", 0x91c, bit(0), 0); +static sunxi_ccu_gate(bus_i2c1_clk, "bus-i2c1", "apb2", 0x91c, bit(1), 0); +static sunxi_ccu_gate(bus_i2c2_clk, "bus-i2c2", "apb2", 0x91c, bit(2), 0); +static sunxi_ccu_gate(bus_i2c3_clk, "bus-i2c3", "apb2", 0x91c, bit(3), 0); +static sunxi_ccu_gate(bus_i2c4_clk, "bus-i2c4", "apb2", 0x91c, bit(4), 0); + +static 
sunxi_ccu_mp_with_mux_gate(spi0_clk, "spi0", nand_spi_parents, 0x940, + 0, 4, /* m */ + 8, 2, /* n */ + 24, 3, /* mux */ + bit(31),/* gate */ + 0); + +static sunxi_ccu_mp_with_mux_gate(spi1_clk, "spi1", nand_spi_parents, 0x944, + 0, 4, /* m */ + 8, 2, /* n */ + 24, 3, /* mux */ + bit(31),/* gate */ + 0); + +static sunxi_ccu_gate(bus_spi0_clk, "bus-spi0", "ahb3", 0x96c, bit(0), 0); +static sunxi_ccu_gate(bus_spi1_clk, "bus-spi1", "ahb3", 0x96c, bit(1), 0); + +static sunxi_ccu_gate(emac_25m_clk, "emac-25m", "ahb3", 0x970, + bit(31) | bit(30), 0); + +static sunxi_ccu_gate(bus_emac0_clk, "bus-emac0", "ahb3", 0x97c, bit(0), 0); +static sunxi_ccu_gate(bus_emac1_clk, "bus-emac1", "ahb3", 0x97c, bit(1), 0); + +static const char * const ts_parents[] = { "osc24m", "pll-periph0" }; +static sunxi_ccu_mp_with_mux_gate(ts_clk, "ts", ts_parents, 0x9b0, + 0, 4, /* m */ + 8, 2, /* n */ + 24, 1, /* mux */ + bit(31),/* gate */ + 0); + +static sunxi_ccu_gate(bus_ts_clk, "bus-ts", "ahb3", 0x9bc, bit(0), 0); + +static sunxi_ccu_gate(bus_ths_clk, "bus-ths", "apb1", 0x9fc, bit(0), 0); + +static const char * const audio_parents[] = { "pll-audio-1x", "pll-audio-2x", + "pll-audio-4x", "pll-audio-hs" }; +static struct ccu_div spdif_clk = { + .enable = bit(31), + .div = _sunxi_ccu_div_flags(8, 2, clk_divider_power_of_two), + .mux = _sunxi_ccu_mux(24, 2), + .common = { + .reg = 0xa20, + .hw.init = clk_hw_init_parents("spdif", + audio_parents, + &ccu_div_ops, + 0), + }, +}; + +static sunxi_ccu_gate(bus_spdif_clk, "bus-spdif", "apb1", 0xa2c, bit(0), 0); + +static struct ccu_div dmic_clk = { + .enable = bit(31), + .div = _sunxi_ccu_div_flags(8, 2, clk_divider_power_of_two), + .mux = _sunxi_ccu_mux(24, 2), + .common = { + .reg = 0xa40, + .hw.init = clk_hw_init_parents("dmic", + audio_parents, + &ccu_div_ops, + 0), + }, +}; + +static sunxi_ccu_gate(bus_dmic_clk, "bus-dmic", "apb1", 0xa4c, bit(0), 0); + +static sunxi_ccu_m_with_mux_gate(audio_codec_1x_clk, "audio-codec-1x", + audio_parents, 0xa50, + 
0, 4, /* m */ + 24, 2, /* mux */ + bit(31), /* gate */ + clk_set_rate_parent); +static sunxi_ccu_m_with_mux_gate(audio_codec_4x_clk, "audio-codec-4x", + audio_parents, 0xa54, + 0, 4, /* m */ + 24, 2, /* mux */ + bit(31), /* gate */ + clk_set_rate_parent); + +static sunxi_ccu_gate(bus_audio_codec_clk, "bus-audio-codec", "apb1", 0xa5c, + bit(0), 0); + +static struct ccu_div audio_hub_clk = { + .enable = bit(31), + .div = _sunxi_ccu_div_flags(8, 2, clk_divider_power_of_two), + .mux = _sunxi_ccu_mux(24, 2), + .common = { + .reg = 0xa60, + .hw.init = clk_hw_init_parents("audio-hub", + audio_parents, + &ccu_div_ops, + 0), + }, +}; + +static sunxi_ccu_gate(bus_audio_hub_clk, "bus-audio-hub", "apb1", 0xa6c, bit(0), 0); + +/* + * there are ohci 12m clock source selection bits for the four usb 2.0 ports. + * we will force them to 0 (12m divided from 48m). + */ +#define sun50i_h616_usb0_clk_reg 0xa70 +#define sun50i_h616_usb1_clk_reg 0xa74 +#define sun50i_h616_usb2_clk_reg 0xa78 +#define sun50i_h616_usb3_clk_reg 0xa7c + +static sunxi_ccu_gate(usb_ohci0_clk, "usb-ohci0", "osc12m", 0xa70, bit(31), 0); +static sunxi_ccu_gate(usb_phy0_clk, "usb-phy0", "osc24m", 0xa70, bit(29), 0); + +static sunxi_ccu_gate(usb_ohci1_clk, "usb-ohci1", "osc12m", 0xa74, bit(31), 0); +static sunxi_ccu_gate(usb_phy1_clk, "usb-phy1", "osc24m", 0xa74, bit(29), 0); + +static sunxi_ccu_gate(usb_ohci2_clk, "usb-ohci2", "osc12m", 0xa78, bit(31), 0); +static sunxi_ccu_gate(usb_phy2_clk, "usb-phy2", "osc24m", 0xa78, bit(29), 0); + +static sunxi_ccu_gate(usb_ohci3_clk, "usb-ohci3", "osc12m", 0xa7c, bit(31), 0); +static sunxi_ccu_gate(usb_phy3_clk, "usb-phy3", "osc24m", 0xa7c, bit(29), 0); + +static sunxi_ccu_gate(bus_ohci0_clk, "bus-ohci0", "ahb3", 0xa8c, bit(0), 0); +static sunxi_ccu_gate(bus_ohci1_clk, "bus-ohci1", "ahb3", 0xa8c, bit(1), 0); +static sunxi_ccu_gate(bus_ohci2_clk, "bus-ohci2", "ahb3", 0xa8c, bit(2), 0); +static sunxi_ccu_gate(bus_ohci3_clk, "bus-ohci3", "ahb3", 0xa8c, bit(3), 0); +static 
sunxi_ccu_gate(bus_ehci0_clk, "bus-ehci0", "ahb3", 0xa8c, bit(4), 0); +static sunxi_ccu_gate(bus_ehci1_clk, "bus-ehci1", "ahb3", 0xa8c, bit(5), 0); +static sunxi_ccu_gate(bus_ehci2_clk, "bus-ehci2", "ahb3", 0xa8c, bit(6), 0); +static sunxi_ccu_gate(bus_ehci3_clk, "bus-ehci3", "ahb3", 0xa8c, bit(7), 0); +static sunxi_ccu_gate(bus_otg_clk, "bus-otg", "ahb3", 0xa8c, bit(8), 0); + +static sunxi_ccu_gate(bus_keyadc_clk, "bus-keyadc", "apb1", 0xa9c, bit(0), 0); + +static const char * const hdmi_parents[] = { "pll-video0", "pll-video0-4x", + "pll-video2", "pll-video2-4x" }; +static sunxi_ccu_m_with_mux_gate(hdmi_clk, "hdmi", hdmi_parents, 0xb00, + 0, 4, /* m */ + 24, 2, /* mux */ + bit(31), /* gate */ + 0); + +static sunxi_ccu_gate(hdmi_slow_clk, "hdmi-slow", "osc24m", 0xb04, bit(31), 0); + +static const char * const hdmi_cec_parents[] = { "osc32k", "pll-periph0-2x" }; +static const struct ccu_mux_fixed_prediv hdmi_cec_predivs[] = { + { .index = 1, .div = 36621 }, +}; + +#define sun50i_h616_hdmi_cec_clk_reg 0xb10 +static struct ccu_mux hdmi_cec_clk = { + .enable = bit(31) | bit(30), + + .mux = { + .shift = 24, + .width = 2, + + .fixed_predivs = hdmi_cec_predivs, + .n_predivs = array_size(hdmi_cec_predivs), + }, + + .common = { + .reg = 0xb10, + .features = ccu_feature_fixed_prediv, + .hw.init = clk_hw_init_parents("hdmi-cec", + hdmi_cec_parents, + &ccu_mux_ops, + 0), + }, +}; + +static sunxi_ccu_gate(bus_hdmi_clk, "bus-hdmi", "ahb3", 0xb1c, bit(0), 0); + +static sunxi_ccu_gate(bus_tcon_top_clk, "bus-tcon-top", "ahb3", + 0xb5c, bit(0), 0); + +static const char * const tcon_tv_parents[] = { "pll-video0", + "pll-video0-4x", + "pll-video1", + "pll-video1-4x" }; +static sunxi_ccu_mp_with_mux_gate(tcon_tv0_clk, "tcon-tv0", + tcon_tv_parents, 0xb80, + 0, 4, /* m */ + 8, 2, /* p */ + 24, 3, /* mux */ + bit(31), /* gate */ + clk_set_rate_parent); +static sunxi_ccu_mp_with_mux_gate(tcon_tv1_clk, "tcon-tv1", + tcon_tv_parents, 0xb84, + 0, 4, /* m */ + 8, 2, /* p */ + 24, 3, /* mux 
*/ + bit(31), /* gate */ + clk_set_rate_parent); + +static sunxi_ccu_gate(bus_tcon_tv0_clk, "bus-tcon-tv0", "ahb3", + 0xb9c, bit(0), 0); +static sunxi_ccu_gate(bus_tcon_tv1_clk, "bus-tcon-tv1", "ahb3", + 0xb9c, bit(1), 0); + +static sunxi_ccu_mp_with_mux_gate(tve0_clk, "tve0", + tcon_tv_parents, 0xbb0, + 0, 4, /* m */ + 8, 2, /* p */ + 24, 3, /* mux */ + bit(31), /* gate */ + clk_set_rate_parent); + +static sunxi_ccu_gate(bus_tve_top_clk, "bus-tve-top", "ahb3", + 0xbbc, bit(0), 0); +static sunxi_ccu_gate(bus_tve0_clk, "bus-tve0", "ahb3", + 0xbbc, bit(1), 0); + +static const char * const hdcp_parents[] = { "pll-periph0", "pll-periph1" }; +static sunxi_ccu_m_with_mux_gate(hdcp_clk, "hdcp", hdcp_parents, 0xc40, + 0, 4, /* m */ + 24, 2, /* mux */ + bit(31), /* gate */ + 0); + +static sunxi_ccu_gate(bus_hdcp_clk, "bus-hdcp", "ahb3", 0xc4c, bit(0), 0); + +/* fixed factor clocks */ +static clk_fixed_factor_fw_name(osc12m_clk, "osc12m", "hosc", 2, 1, 0); + +static const struct clk_hw *clk_parent_pll_audio[] = { + &pll_audio_hs_clk.common.hw +}; + +/* + * the divider of pll-audio is fixed to 24 for now, so 24576000 and 22579200 + * rates can be set exactly in conjunction with sigma-delta modulation. 
+ */ +static clk_fixed_factor_hws(pll_audio_1x_clk, "pll-audio-1x", + clk_parent_pll_audio, + 96, 1, clk_set_rate_parent); +static clk_fixed_factor_hws(pll_audio_2x_clk, "pll-audio-2x", + clk_parent_pll_audio, + 48, 1, clk_set_rate_parent); +static clk_fixed_factor_hws(pll_audio_4x_clk, "pll-audio-4x", + clk_parent_pll_audio, + 24, 1, clk_set_rate_parent); + +static const struct clk_hw *pll_periph0_parents[] = { + &pll_periph0_clk.common.hw +}; + +static clk_fixed_factor_hws(pll_periph0_2x_clk, "pll-periph0-2x", + pll_periph0_parents, + 1, 2, 0); + +static const struct clk_hw *pll_periph1_parents[] = { + &pll_periph1_clk.common.hw +}; + +static clk_fixed_factor_hws(pll_periph1_2x_clk, "pll-periph1-2x", + pll_periph1_parents, + 1, 2, 0); + +static clk_fixed_factor_hw(pll_video0_4x_clk, "pll-video0-4x", + &pll_video0_clk.common.hw, + 1, 4, clk_set_rate_parent); +static clk_fixed_factor_hw(pll_video1_4x_clk, "pll-video1-4x", + &pll_video1_clk.common.hw, + 1, 4, clk_set_rate_parent); +static clk_fixed_factor_hw(pll_video2_4x_clk, "pll-video2-4x", + &pll_video2_clk.common.hw, + 1, 4, clk_set_rate_parent); + +static struct ccu_common *sun50i_h616_ccu_clks[] = { + &pll_cpux_clk.common, + &pll_ddr0_clk.common, + &pll_ddr1_clk.common, + &pll_periph0_clk.common, + &pll_periph1_clk.common, + &pll_gpu_clk.common, + &pll_video0_clk.common, + &pll_video1_clk.common, + &pll_video2_clk.common, + &pll_ve_clk.common, + &pll_de_clk.common, + &pll_audio_hs_clk.common, + &cpux_clk.common, + &axi_clk.common, + &cpux_apb_clk.common, + &psi_ahb1_ahb2_clk.common, + &ahb3_clk.common, + &apb1_clk.common, + &apb2_clk.common, + &mbus_clk.common, + &de_clk.common, + &bus_de_clk.common, + &deinterlace_clk.common, + &bus_deinterlace_clk.common, + &g2d_clk.common, + &bus_g2d_clk.common, + &gpu0_clk.common, + &bus_gpu_clk.common, + &gpu1_clk.common, + &ce_clk.common, + &bus_ce_clk.common, + &ve_clk.common, + &bus_ve_clk.common, + &bus_dma_clk.common, + &bus_hstimer_clk.common, + &avs_clk.common, + 
&bus_dbg_clk.common, + &bus_psi_clk.common, + &bus_pwm_clk.common, + &bus_iommu_clk.common, + &dram_clk.common, + &mbus_dma_clk.common, + &mbus_ve_clk.common, + &mbus_ce_clk.common, + &mbus_ts_clk.common, + &mbus_nand_clk.common, + &mbus_g2d_clk.common, + &bus_dram_clk.common, + &nand0_clk.common, + &nand1_clk.common, + &bus_nand_clk.common, + &mmc0_clk.common, + &mmc1_clk.common, + &mmc2_clk.common, + &bus_mmc0_clk.common, + &bus_mmc1_clk.common, + &bus_mmc2_clk.common, + &bus_uart0_clk.common, + &bus_uart1_clk.common, + &bus_uart2_clk.common, + &bus_uart3_clk.common, + &bus_uart4_clk.common, + &bus_uart5_clk.common, + &bus_i2c0_clk.common, + &bus_i2c1_clk.common, + &bus_i2c2_clk.common, + &bus_i2c3_clk.common, + &bus_i2c4_clk.common, + &spi0_clk.common, + &spi1_clk.common, + &bus_spi0_clk.common, + &bus_spi1_clk.common, + &emac_25m_clk.common, + &bus_emac0_clk.common, + &bus_emac1_clk.common, + &ts_clk.common, + &bus_ts_clk.common, + &bus_ths_clk.common, + &spdif_clk.common, + &bus_spdif_clk.common, + &dmic_clk.common, + &bus_dmic_clk.common, + &audio_codec_1x_clk.common, + &audio_codec_4x_clk.common, + &bus_audio_codec_clk.common, + &audio_hub_clk.common, + &bus_audio_hub_clk.common, + &usb_ohci0_clk.common, + &usb_phy0_clk.common, + &usb_ohci1_clk.common, + &usb_phy1_clk.common, + &usb_ohci2_clk.common, + &usb_phy2_clk.common, + &usb_ohci3_clk.common, + &usb_phy3_clk.common, + &bus_ohci0_clk.common, + &bus_ohci1_clk.common, + &bus_ohci2_clk.common, + &bus_ohci3_clk.common, + &bus_ehci0_clk.common, + &bus_ehci1_clk.common, + &bus_ehci2_clk.common, + &bus_ehci3_clk.common, + &bus_otg_clk.common, + &bus_keyadc_clk.common, + &hdmi_clk.common, + &hdmi_slow_clk.common, + &hdmi_cec_clk.common, + &bus_hdmi_clk.common, + &bus_tcon_top_clk.common, + &tcon_tv0_clk.common, + &tcon_tv1_clk.common, + &bus_tcon_tv0_clk.common, + &bus_tcon_tv1_clk.common, + &tve0_clk.common, + &bus_tve_top_clk.common, + &bus_tve0_clk.common, + &hdcp_clk.common, + &bus_hdcp_clk.common, +}; + 
+static struct clk_hw_onecell_data sun50i_h616_hw_clks = { + .hws = { + [clk_osc12m] = &osc12m_clk.hw, + [clk_pll_cpux] = &pll_cpux_clk.common.hw, + [clk_pll_ddr0] = &pll_ddr0_clk.common.hw, + [clk_pll_ddr1] = &pll_ddr1_clk.common.hw, + [clk_pll_periph0] = &pll_periph0_clk.common.hw, + [clk_pll_periph0_2x] = &pll_periph0_2x_clk.hw, + [clk_pll_periph1] = &pll_periph1_clk.common.hw, + [clk_pll_periph1_2x] = &pll_periph1_2x_clk.hw, + [clk_pll_gpu] = &pll_gpu_clk.common.hw, + [clk_pll_video0] = &pll_video0_clk.common.hw, + [clk_pll_video0_4x] = &pll_video0_4x_clk.hw, + [clk_pll_video1] = &pll_video1_clk.common.hw, + [clk_pll_video1_4x] = &pll_video1_4x_clk.hw, + [clk_pll_video2] = &pll_video2_clk.common.hw, + [clk_pll_video2_4x] = &pll_video2_4x_clk.hw, + [clk_pll_ve] = &pll_ve_clk.common.hw, + [clk_pll_de] = &pll_de_clk.common.hw, + [clk_pll_audio_hs] = &pll_audio_hs_clk.common.hw, + [clk_pll_audio_1x] = &pll_audio_1x_clk.hw, + [clk_pll_audio_2x] = &pll_audio_2x_clk.hw, + [clk_pll_audio_4x] = &pll_audio_4x_clk.hw, + [clk_cpux] = &cpux_clk.common.hw, + [clk_axi] = &axi_clk.common.hw, + [clk_cpux_apb] = &cpux_apb_clk.common.hw, + [clk_psi_ahb1_ahb2] = &psi_ahb1_ahb2_clk.common.hw, + [clk_ahb3] = &ahb3_clk.common.hw, + [clk_apb1] = &apb1_clk.common.hw, + [clk_apb2] = &apb2_clk.common.hw, + [clk_mbus] = &mbus_clk.common.hw, + [clk_de] = &de_clk.common.hw, + [clk_bus_de] = &bus_de_clk.common.hw, + [clk_deinterlace] = &deinterlace_clk.common.hw, + [clk_bus_deinterlace] = &bus_deinterlace_clk.common.hw, + [clk_g2d] = &g2d_clk.common.hw, + [clk_bus_g2d] = &bus_g2d_clk.common.hw, + [clk_gpu0] = &gpu0_clk.common.hw, + [clk_bus_gpu] = &bus_gpu_clk.common.hw, + [clk_gpu1] = &gpu1_clk.common.hw, + [clk_ce] = &ce_clk.common.hw, + [clk_bus_ce] = &bus_ce_clk.common.hw, + [clk_ve] = &ve_clk.common.hw, + [clk_bus_ve] = &bus_ve_clk.common.hw, + [clk_bus_dma] = &bus_dma_clk.common.hw, + [clk_bus_hstimer] = &bus_hstimer_clk.common.hw, + [clk_avs] = &avs_clk.common.hw, + [clk_bus_dbg] = 
&bus_dbg_clk.common.hw, + [clk_bus_psi] = &bus_psi_clk.common.hw, + [clk_bus_pwm] = &bus_pwm_clk.common.hw, + [clk_bus_iommu] = &bus_iommu_clk.common.hw, + [clk_dram] = &dram_clk.common.hw, + [clk_mbus_dma] = &mbus_dma_clk.common.hw, + [clk_mbus_ve] = &mbus_ve_clk.common.hw, + [clk_mbus_ce] = &mbus_ce_clk.common.hw, + [clk_mbus_ts] = &mbus_ts_clk.common.hw, + [clk_mbus_nand] = &mbus_nand_clk.common.hw, + [clk_mbus_g2d] = &mbus_g2d_clk.common.hw, + [clk_bus_dram] = &bus_dram_clk.common.hw, + [clk_nand0] = &nand0_clk.common.hw, + [clk_nand1] = &nand1_clk.common.hw, + [clk_bus_nand] = &bus_nand_clk.common.hw, + [clk_mmc0] = &mmc0_clk.common.hw, + [clk_mmc1] = &mmc1_clk.common.hw, + [clk_mmc2] = &mmc2_clk.common.hw, + [clk_bus_mmc0] = &bus_mmc0_clk.common.hw, + [clk_bus_mmc1] = &bus_mmc1_clk.common.hw, + [clk_bus_mmc2] = &bus_mmc2_clk.common.hw, + [clk_bus_uart0] = &bus_uart0_clk.common.hw, + [clk_bus_uart1] = &bus_uart1_clk.common.hw, + [clk_bus_uart2] = &bus_uart2_clk.common.hw, + [clk_bus_uart3] = &bus_uart3_clk.common.hw, + [clk_bus_uart4] = &bus_uart4_clk.common.hw, + [clk_bus_uart5] = &bus_uart5_clk.common.hw, + [clk_bus_i2c0] = &bus_i2c0_clk.common.hw, + [clk_bus_i2c1] = &bus_i2c1_clk.common.hw, + [clk_bus_i2c2] = &bus_i2c2_clk.common.hw, + [clk_bus_i2c3] = &bus_i2c3_clk.common.hw, + [clk_bus_i2c4] = &bus_i2c4_clk.common.hw, + [clk_spi0] = &spi0_clk.common.hw, + [clk_spi1] = &spi1_clk.common.hw, + [clk_bus_spi0] = &bus_spi0_clk.common.hw, + [clk_bus_spi1] = &bus_spi1_clk.common.hw, + [clk_emac_25m] = &emac_25m_clk.common.hw, + [clk_bus_emac0] = &bus_emac0_clk.common.hw, + [clk_bus_emac1] = &bus_emac1_clk.common.hw, + [clk_ts] = &ts_clk.common.hw, + [clk_bus_ts] = &bus_ts_clk.common.hw, + [clk_bus_ths] = &bus_ths_clk.common.hw, + [clk_spdif] = &spdif_clk.common.hw, + [clk_bus_spdif] = &bus_spdif_clk.common.hw, + [clk_dmic] = &dmic_clk.common.hw, + [clk_bus_dmic] = &bus_dmic_clk.common.hw, + [clk_audio_codec_1x] = &audio_codec_1x_clk.common.hw, + 
[clk_audio_codec_4x] = &audio_codec_4x_clk.common.hw, + [clk_bus_audio_codec] = &bus_audio_codec_clk.common.hw, + [clk_audio_hub] = &audio_hub_clk.common.hw, + [clk_bus_audio_hub] = &bus_audio_hub_clk.common.hw, + [clk_usb_ohci0] = &usb_ohci0_clk.common.hw, + [clk_usb_phy0] = &usb_phy0_clk.common.hw, + [clk_usb_ohci1] = &usb_ohci1_clk.common.hw, + [clk_usb_phy1] = &usb_phy1_clk.common.hw, + [clk_usb_ohci2] = &usb_ohci2_clk.common.hw, + [clk_usb_phy2] = &usb_phy2_clk.common.hw, + [clk_usb_ohci3] = &usb_ohci3_clk.common.hw, + [clk_usb_phy3] = &usb_phy3_clk.common.hw, + [clk_bus_ohci0] = &bus_ohci0_clk.common.hw, + [clk_bus_ohci1] = &bus_ohci1_clk.common.hw, + [clk_bus_ohci2] = &bus_ohci2_clk.common.hw, + [clk_bus_ohci3] = &bus_ohci3_clk.common.hw, + [clk_bus_ehci0] = &bus_ehci0_clk.common.hw, + [clk_bus_ehci1] = &bus_ehci1_clk.common.hw, + [clk_bus_ehci2] = &bus_ehci2_clk.common.hw, + [clk_bus_ehci3] = &bus_ehci3_clk.common.hw, + [clk_bus_otg] = &bus_otg_clk.common.hw, + [clk_bus_keyadc] = &bus_keyadc_clk.common.hw, + [clk_hdmi] = &hdmi_clk.common.hw, + [clk_hdmi_slow] = &hdmi_slow_clk.common.hw, + [clk_hdmi_cec] = &hdmi_cec_clk.common.hw, + [clk_bus_hdmi] = &bus_hdmi_clk.common.hw, + [clk_bus_tcon_top] = &bus_tcon_top_clk.common.hw, + [clk_tcon_tv0] = &tcon_tv0_clk.common.hw, + [clk_tcon_tv1] = &tcon_tv1_clk.common.hw, + [clk_bus_tcon_tv0] = &bus_tcon_tv0_clk.common.hw, + [clk_bus_tcon_tv1] = &bus_tcon_tv1_clk.common.hw, + [clk_tve0] = &tve0_clk.common.hw, + [clk_bus_tve_top] = &bus_tve_top_clk.common.hw, + [clk_bus_tve0] = &bus_tve0_clk.common.hw, + [clk_hdcp] = &hdcp_clk.common.hw, + [clk_bus_hdcp] = &bus_hdcp_clk.common.hw, + }, + .num = clk_number, +}; + +static struct ccu_reset_map sun50i_h616_ccu_resets[] = { + [rst_mbus] = { 0x540, bit(30) }, + + [rst_bus_de] = { 0x60c, bit(16) }, + [rst_bus_deinterlace] = { 0x62c, bit(16) }, + [rst_bus_gpu] = { 0x67c, bit(16) }, + [rst_bus_ce] = { 0x68c, bit(16) }, + [rst_bus_ve] = { 0x69c, bit(16) }, + [rst_bus_dma] = { 
0x70c, bit(16) }, + [rst_bus_hstimer] = { 0x73c, bit(16) }, + [rst_bus_dbg] = { 0x78c, bit(16) }, + [rst_bus_psi] = { 0x79c, bit(16) }, + [rst_bus_pwm] = { 0x7ac, bit(16) }, + [rst_bus_iommu] = { 0x7bc, bit(16) }, + [rst_bus_dram] = { 0x80c, bit(16) }, + [rst_bus_nand] = { 0x82c, bit(16) }, + [rst_bus_mmc0] = { 0x84c, bit(16) }, + [rst_bus_mmc1] = { 0x84c, bit(17) }, + [rst_bus_mmc2] = { 0x84c, bit(18) }, + [rst_bus_uart0] = { 0x90c, bit(16) }, + [rst_bus_uart1] = { 0x90c, bit(17) }, + [rst_bus_uart2] = { 0x90c, bit(18) }, + [rst_bus_uart3] = { 0x90c, bit(19) }, + [rst_bus_uart4] = { 0x90c, bit(20) }, + [rst_bus_uart5] = { 0x90c, bit(21) }, + [rst_bus_i2c0] = { 0x91c, bit(16) }, + [rst_bus_i2c1] = { 0x91c, bit(17) }, + [rst_bus_i2c2] = { 0x91c, bit(18) }, + [rst_bus_i2c3] = { 0x91c, bit(19) }, + [rst_bus_i2c4] = { 0x91c, bit(20) }, + [rst_bus_spi0] = { 0x96c, bit(16) }, + [rst_bus_spi1] = { 0x96c, bit(17) }, + [rst_bus_emac0] = { 0x97c, bit(16) }, + [rst_bus_emac1] = { 0x97c, bit(17) }, + [rst_bus_ts] = { 0x9bc, bit(16) }, + [rst_bus_ths] = { 0x9fc, bit(16) }, + [rst_bus_spdif] = { 0xa2c, bit(16) }, + [rst_bus_dmic] = { 0xa4c, bit(16) }, + [rst_bus_audio_codec] = { 0xa5c, bit(16) }, + [rst_bus_audio_hub] = { 0xa6c, bit(16) }, + + [rst_usb_phy0] = { 0xa70, bit(30) }, + [rst_usb_phy1] = { 0xa74, bit(30) }, + [rst_usb_phy2] = { 0xa78, bit(30) }, + [rst_usb_phy3] = { 0xa7c, bit(30) }, + [rst_bus_ohci0] = { 0xa8c, bit(16) }, + [rst_bus_ohci1] = { 0xa8c, bit(17) }, + [rst_bus_ohci2] = { 0xa8c, bit(18) }, + [rst_bus_ohci3] = { 0xa8c, bit(19) }, + [rst_bus_ehci0] = { 0xa8c, bit(20) }, + [rst_bus_ehci1] = { 0xa8c, bit(21) }, + [rst_bus_ehci2] = { 0xa8c, bit(22) }, + [rst_bus_ehci3] = { 0xa8c, bit(23) }, + [rst_bus_otg] = { 0xa8c, bit(24) }, + [rst_bus_keyadc] = { 0xa9c, bit(16) }, + + [rst_bus_hdmi] = { 0xb1c, bit(16) }, + [rst_bus_hdmi_sub] = { 0xb1c, bit(17) }, + [rst_bus_tcon_top] = { 0xb5c, bit(16) }, + [rst_bus_tcon_tv0] = { 0xb9c, bit(16) }, + [rst_bus_tcon_tv1] = { 
0xb9c, bit(17) }, + [rst_bus_tve_top] = { 0xbbc, bit(16) }, + [rst_bus_tve0] = { 0xbbc, bit(17) }, + [rst_bus_hdcp] = { 0xc4c, bit(16) }, +}; + +static const struct sunxi_ccu_desc sun50i_h616_ccu_desc = { + .ccu_clks = sun50i_h616_ccu_clks, + .num_ccu_clks = array_size(sun50i_h616_ccu_clks), + + .hw_clks = &sun50i_h616_hw_clks, + + .resets = sun50i_h616_ccu_resets, + .num_resets = array_size(sun50i_h616_ccu_resets), +}; + +static const u32 pll_regs[] = { + sun50i_h616_pll_cpux_reg, + sun50i_h616_pll_ddr0_reg, + sun50i_h616_pll_ddr1_reg, + sun50i_h616_pll_periph0_reg, + sun50i_h616_pll_periph1_reg, + sun50i_h616_pll_gpu_reg, + sun50i_h616_pll_video0_reg, + sun50i_h616_pll_video1_reg, + sun50i_h616_pll_video2_reg, + sun50i_h616_pll_ve_reg, + sun50i_h616_pll_de_reg, + sun50i_h616_pll_audio_reg, +}; + +static const u32 pll_video_regs[] = { + sun50i_h616_pll_video0_reg, + sun50i_h616_pll_video1_reg, + sun50i_h616_pll_video2_reg, +}; + +static const u32 usb2_clk_regs[] = { + sun50i_h616_usb0_clk_reg, + sun50i_h616_usb1_clk_reg, + sun50i_h616_usb2_clk_reg, + sun50i_h616_usb3_clk_reg, +}; + +static void __init sun50i_h616_ccu_setup(struct device_node *node) +{ + void __iomem *reg; + u32 val; + int i; + + reg = of_io_request_and_map(node, 0, of_node_full_name(node)); + if (is_err(reg)) { + pr_err("%pof: could not map clock registers ", node); + return; + } + + /* enable the lock bits and the output enable bits on all plls */ + for (i = 0; i < array_size(pll_regs); i++) { + val = readl(reg + pll_regs[i]); + val |= bit(29) | bit(27); + writel(val, reg + pll_regs[i]); + } + + /* + * force the output divider of video plls to 0. + * + * see the comment before pll-video0 definition for the reason. 
+ */ + for (i = 0; i < array_size(pll_video_regs); i++) { + val = readl(reg + pll_video_regs[i]); + val &= ~bit(0); + writel(val, reg + pll_video_regs[i]); + } + + /* + * force ohci 12m clock sources to 00 (12mhz divided from 48mhz) + * + * this clock mux is still mysterious, and the code just enforces + * it to have a valid clock parent. + */ + for (i = 0; i < array_size(usb2_clk_regs); i++) { + val = readl(reg + usb2_clk_regs[i]); + val &= ~genmask(25, 24); + writel(val, reg + usb2_clk_regs[i]); + } + + /* + * force the post-divider of pll-audio to 12 and the output divider + * of it to 2, so 24576000 and 22579200 rates can be set exactly. + */ + val = readl(reg + sun50i_h616_pll_audio_reg); + val &= ~(genmask(21, 16) | bit(0)); + writel(val | (11 << 16) | bit(0), reg + sun50i_h616_pll_audio_reg); + + /* + * first clock parent (osc32k) is unusable for cec. but since there + * is no good way to force parent switch (both run with same frequency), + * just set second clock parent here. + */ + val = readl(reg + sun50i_h616_hdmi_cec_clk_reg); + val |= bit(24); + writel(val, reg + sun50i_h616_hdmi_cec_clk_reg); + + i = sunxi_ccu_probe(node, reg, &sun50i_h616_ccu_desc); + if (i) + pr_err("%pof: probing clocks fails: %d ", node, i); +} + +clk_of_declare(sun50i_h616_ccu, "allwinner,sun50i-h616-ccu", + sun50i_h616_ccu_setup); diff --git a/drivers/clk/sunxi-ng/ccu-sun50i-h616.h b/drivers/clk/sunxi-ng/ccu-sun50i-h616.h --- /dev/null +++ b/drivers/clk/sunxi-ng/ccu-sun50i-h616.h +/* spdx-license-identifier: gpl-2.0 */ +/* + * copyright 2020 arm ltd. 
+ */ + +#ifndef _ccu_sun50i_h616_h_ +#define _ccu_sun50i_h616_h_ + +#include <dt-bindings/clock/sun50i-h616-ccu.h> +#include <dt-bindings/reset/sun50i-h616-ccu.h> + +#define clk_osc12m 0 +#define clk_pll_cpux 1 +#define clk_pll_ddr0 2 +#define clk_pll_ddr1 3 + +/* pll_periph0 exported for prcm */ + +#define clk_pll_periph0_2x 5 +#define clk_pll_periph1 6 +#define clk_pll_periph1_2x 7 +#define clk_pll_gpu 8 +#define clk_pll_video0 9 +#define clk_pll_video0_4x 10 +#define clk_pll_video1 11 +#define clk_pll_video1_4x 12 +#define clk_pll_video2 13 +#define clk_pll_video2_4x 14 +#define clk_pll_ve 15 +#define clk_pll_de 16 +#define clk_pll_audio_hs 17 +#define clk_pll_audio_1x 18 +#define clk_pll_audio_2x 19 +#define clk_pll_audio_4x 20 + +/* cpux clock exported for dvfs */ + +#define clk_axi 22 +#define clk_cpux_apb 23 +#define clk_psi_ahb1_ahb2 24 +#define clk_ahb3 25 + +/* apb1 clock exported for pio */ + +#define clk_apb2 27 +#define clk_mbus 28 + +/* all module clocks and bus gates are exported except dram */ + +#define clk_dram 49 + +#define clk_bus_dram 56 + +#define clk_number (clk_bus_hdcp + 1) + +#endif /* _ccu_sun50i_h616_h_ */ diff --git a/include/dt-bindings/clock/sun50i-h616-ccu.h b/include/dt-bindings/clock/sun50i-h616-ccu.h --- /dev/null +++ b/include/dt-bindings/clock/sun50i-h616-ccu.h +/* spdx-license-identifier: (gpl-2.0+ or mit) */ +/* + * copyright (c) 2020 arm ltd. 
+ */ + +#ifndef _dt_bindings_clk_sun50i_h616_h_ +#define _dt_bindings_clk_sun50i_h616_h_ + +#define clk_pll_periph0 4 + +#define clk_cpux 21 + +#define clk_apb1 26 + +#define clk_de 29 +#define clk_bus_de 30 +#define clk_deinterlace 31 +#define clk_bus_deinterlace 32 +#define clk_g2d 33 +#define clk_bus_g2d 34 +#define clk_gpu0 35 +#define clk_bus_gpu 36 +#define clk_gpu1 37 +#define clk_ce 38 +#define clk_bus_ce 39 +#define clk_ve 40 +#define clk_bus_ve 41 +#define clk_bus_dma 42 +#define clk_bus_hstimer 43 +#define clk_avs 44 +#define clk_bus_dbg 45 +#define clk_bus_psi 46 +#define clk_bus_pwm 47 +#define clk_bus_iommu 48 + +#define clk_mbus_dma 50 +#define clk_mbus_ve 51 +#define clk_mbus_ce 52 +#define clk_mbus_ts 53 +#define clk_mbus_nand 54 +#define clk_mbus_g2d 55 + +#define clk_nand0 57 +#define clk_nand1 58 +#define clk_bus_nand 59 +#define clk_mmc0 60 +#define clk_mmc1 61 +#define clk_mmc2 62 +#define clk_bus_mmc0 63 +#define clk_bus_mmc1 64 +#define clk_bus_mmc2 65 +#define clk_bus_uart0 66 +#define clk_bus_uart1 67 +#define clk_bus_uart2 68 +#define clk_bus_uart3 69 +#define clk_bus_uart4 70 +#define clk_bus_uart5 71 +#define clk_bus_i2c0 72 +#define clk_bus_i2c1 73 +#define clk_bus_i2c2 74 +#define clk_bus_i2c3 75 +#define clk_bus_i2c4 76 +#define clk_spi0 77 +#define clk_spi1 78 +#define clk_bus_spi0 79 +#define clk_bus_spi1 80 +#define clk_emac_25m 81 +#define clk_bus_emac0 82 +#define clk_bus_emac1 83 +#define clk_ts 84 +#define clk_bus_ts 85 +#define clk_bus_ths 86 +#define clk_spdif 87 +#define clk_bus_spdif 88 +#define clk_dmic 89 +#define clk_bus_dmic 90 +#define clk_audio_codec_1x 91 +#define clk_audio_codec_4x 92 +#define clk_bus_audio_codec 93 +#define clk_audio_hub 94 +#define clk_bus_audio_hub 95 +#define clk_usb_ohci0 96 +#define clk_usb_phy0 97 +#define clk_usb_ohci1 98 +#define clk_usb_phy1 99 +#define clk_usb_ohci2 100 +#define clk_usb_phy2 101 +#define clk_usb_ohci3 102 +#define clk_usb_phy3 103 +#define clk_bus_ohci0 104 +#define 
clk_bus_ohci1 105 +#define clk_bus_ohci2 106 +#define clk_bus_ohci3 107 +#define clk_bus_ehci0 108 +#define clk_bus_ehci1 109 +#define clk_bus_ehci2 110 +#define clk_bus_ehci3 111 +#define clk_bus_otg 112 +#define clk_bus_keyadc 113 +#define clk_hdmi 114 +#define clk_hdmi_slow 115 +#define clk_hdmi_cec 116 +#define clk_bus_hdmi 117 +#define clk_bus_tcon_top 118 +#define clk_tcon_tv0 119 +#define clk_tcon_tv1 120 +#define clk_bus_tcon_tv0 121 +#define clk_bus_tcon_tv1 122 +#define clk_tve0 123 +#define clk_bus_tve_top 124 +#define clk_bus_tve0 125 +#define clk_hdcp 126 +#define clk_bus_hdcp 127 + +#endif /* _dt_bindings_clk_sun50i_h616_h_ */ diff --git a/include/dt-bindings/reset/sun50i-h616-ccu.h b/include/dt-bindings/reset/sun50i-h616-ccu.h --- /dev/null +++ b/include/dt-bindings/reset/sun50i-h616-ccu.h +/* spdx-license-identifier: (gpl-2.0+ or mit) */ +/* + * copyright (c) 2020 arm ltd. + */ + +#ifndef _dt_bindings_reset_sun50i_h616_h_ +#define _dt_bindings_reset_sun50i_h616_h_ + +#define rst_mbus 0 +#define rst_bus_de 1 +#define rst_bus_deinterlace 2 +#define rst_bus_gpu 3 +#define rst_bus_ce 4 +#define rst_bus_ve 5 +#define rst_bus_dma 6 +#define rst_bus_hstimer 7 +#define rst_bus_dbg 8 +#define rst_bus_psi 9 +#define rst_bus_pwm 10 +#define rst_bus_iommu 11 +#define rst_bus_dram 12 +#define rst_bus_nand 13 +#define rst_bus_mmc0 14 +#define rst_bus_mmc1 15 +#define rst_bus_mmc2 16 +#define rst_bus_uart0 17 +#define rst_bus_uart1 18 +#define rst_bus_uart2 19 +#define rst_bus_uart3 20 +#define rst_bus_uart4 21 +#define rst_bus_uart5 22 +#define rst_bus_i2c0 23 +#define rst_bus_i2c1 24 +#define rst_bus_i2c2 25 +#define rst_bus_i2c3 26 +#define rst_bus_i2c4 27 +#define rst_bus_spi0 28 +#define rst_bus_spi1 29 +#define rst_bus_emac0 30 +#define rst_bus_emac1 31 +#define rst_bus_ts 32 +#define rst_bus_ths 33 +#define rst_bus_spdif 34 +#define rst_bus_dmic 35 +#define rst_bus_audio_codec 36 +#define rst_bus_audio_hub 37 +#define rst_usb_phy0 38 +#define rst_usb_phy1 
39 +#define rst_usb_phy2 40 +#define rst_usb_phy3 41 +#define rst_bus_ohci0 42 +#define rst_bus_ohci1 43 +#define rst_bus_ohci2 44 +#define rst_bus_ohci3 45 +#define rst_bus_ehci0 46 +#define rst_bus_ehci1 47 +#define rst_bus_ehci2 48 +#define rst_bus_ehci3 49 +#define rst_bus_otg 50 +#define rst_bus_hdmi 51 +#define rst_bus_hdmi_sub 52 +#define rst_bus_tcon_top 53 +#define rst_bus_tcon_tv0 54 +#define rst_bus_tcon_tv1 55 +#define rst_bus_tve_top 56 +#define rst_bus_tve0 57 +#define rst_bus_hdcp 58 +#define rst_bus_keyadc 59 + +#endif /* _dt_bindings_reset_sun50i_h616_h_ */
|
Clock
|
88dde5e23da1a16fe9a417171e6c941736b8d3a6
|
andre przywara
|
include
|
dt-bindings
|
clock, reset, sunxi-ng
|
clk: sunxi-ng: add support for the allwinner h616 r-ccu
|
the clocks themselves are identical to the h6 r-ccu; it's just that the h616 does not have all of them implemented (or connected).
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for a thermal power management framework to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add support for the allwinner h616 r-ccu
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['sunxi-ng']
|
['kconfig', 'c']
| 2
| 49
| 1
|
--- diff --git a/drivers/clk/sunxi-ng/kconfig b/drivers/clk/sunxi-ng/kconfig --- a/drivers/clk/sunxi-ng/kconfig +++ b/drivers/clk/sunxi-ng/kconfig - bool "support for the allwinner h6 prcm ccu" + bool "support for the allwinner h6 and h616 prcm ccu" diff --git a/drivers/clk/sunxi-ng/ccu-sun50i-h6-r.c b/drivers/clk/sunxi-ng/ccu-sun50i-h6-r.c --- a/drivers/clk/sunxi-ng/ccu-sun50i-h6-r.c +++ b/drivers/clk/sunxi-ng/ccu-sun50i-h6-r.c +static struct ccu_common *sun50i_h616_r_ccu_clks[] = { + &r_apb1_clk.common, + &r_apb2_clk.common, + &r_apb1_twd_clk.common, + &r_apb2_i2c_clk.common, + &r_apb2_rsb_clk.common, + &r_apb1_ir_clk.common, + &ir_clk.common, +}; + +static struct clk_hw_onecell_data sun50i_h616_r_hw_clks = { + .hws = { + [clk_r_ahb] = &r_ahb_clk.hw, + [clk_r_apb1] = &r_apb1_clk.common.hw, + [clk_r_apb2] = &r_apb2_clk.common.hw, + [clk_r_apb1_twd] = &r_apb1_twd_clk.common.hw, + [clk_r_apb2_i2c] = &r_apb2_i2c_clk.common.hw, + [clk_r_apb2_rsb] = &r_apb2_rsb_clk.common.hw, + [clk_r_apb1_ir] = &r_apb1_ir_clk.common.hw, + [clk_ir] = &ir_clk.common.hw, + }, + .num = clk_number, +}; + +static struct ccu_reset_map sun50i_h616_r_ccu_resets[] = { + [rst_r_apb1_twd] = { 0x12c, bit(16) }, + [rst_r_apb2_i2c] = { 0x19c, bit(16) }, + [rst_r_apb2_rsb] = { 0x1bc, bit(16) }, + [rst_r_apb1_ir] = { 0x1cc, bit(16) }, +}; + +static const struct sunxi_ccu_desc sun50i_h616_r_ccu_desc = { + .ccu_clks = sun50i_h616_r_ccu_clks, + .num_ccu_clks = array_size(sun50i_h616_r_ccu_clks), + + .hw_clks = &sun50i_h616_r_hw_clks, + + .resets = sun50i_h616_r_ccu_resets, + .num_resets = array_size(sun50i_h616_r_ccu_resets), +}; + + +static void __init sun50i_h616_r_ccu_setup(struct device_node *node) +{ + sunxi_r_ccu_init(node, &sun50i_h616_r_ccu_desc); +} +clk_of_declare(sun50i_h616_r_ccu, "allwinner,sun50i-h616-r-ccu", + sun50i_h616_r_ccu_setup);
|
Clock
|
394a36dd9dec7fd48b75dab23432632a30f241ea
|
andre przywara, maxime ripard <mripard@kernel.org>
|
drivers
|
clk
|
sunxi-ng
|
clk: vc5: add support for optional load capacitance
|
there are two registers which can set the load capacitance for xtal1 and xtal2. these are optional registers when using an external crystal. parse the device tree and set the corresponding registers accordingly.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for a thermal power management framework to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add support for optional load capacitance
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['vc5']
|
['c']
| 1
| 64
| 0
|
--- diff --git a/drivers/clk/clk-versaclock5.c b/drivers/clk/clk-versaclock5.c --- a/drivers/clk/clk-versaclock5.c +++ b/drivers/clk/clk-versaclock5.c +static int vc5_map_cap_value(u32 femtofarads) +{ + int mapped_value; + + /* + * the datasheet explicitly states 9000 - 25000 with 0.5pf + * steps, but the programmer's guide shows the steps are 0.430pf. + * after getting feedback from renesas, the .5pf steps were the + * goal, but 430nf was the actual values. + * because of this, the actual range goes to 22760 instead of 25000 + */ + if (femtofarads < 9000 || femtofarads > 22760) + return -einval; + + /* + * the programmer's guide shows xtal[5:0] but in reality, + * xtal[0] and xtal[1] are both lsb which makes the math + * strange. with clarfication from renesas, setting the + * values should be simpler by ignoring xtal[0] + */ + mapped_value = div_round_closest(femtofarads - 9000, 430); + + /* + * since the calculation ignores xtal[0], there is one + * special case where mapped_value = 32. in reality, this means + * the real mapped value should be 111111b. in other cases, + * the mapped_value needs to be shifted 1 to the left. + */ + if (mapped_value > 31) + mapped_value = 0x3f; + else + mapped_value <<= 1; + + return mapped_value; +} +static int vc5_update_cap_load(struct device_node *node, struct vc5_driver_data *vc5) +{ + u32 value; + int mapped_value; + + if (!of_property_read_u32(node, "idt,xtal-load-femtofarads", &value)) { + mapped_value = vc5_map_cap_value(value); + if (mapped_value < 0) + return mapped_value; + + /* + * the mapped_value is really the high 6 bits of + * vc5_xtal_x1_load_cap and vc5_xtal_x2_load_cap, so + * shift the value 2 places. 
+ */ + regmap_update_bits(vc5->regmap, vc5_xtal_x1_load_cap, ~0x03, mapped_value << 2); + regmap_update_bits(vc5->regmap, vc5_xtal_x2_load_cap, ~0x03, mapped_value << 2); + } + + return 0; +} + + /* configure optional loading capacitance for external xtal */ + if (!(vc5->chip_info->flags & vc5_has_internal_xtal)) { + ret = vc5_update_cap_load(client->dev.of_node, vc5); + if (ret) + goto err_clk_register; + } +
|
Clock
|
f3d661d6b4412c9d5f60d0566554fab83f9db381
|
adam ford, luca ceresoli <luca@lucaceresoli.net>
|
drivers
|
clk
| |
clk: drop unused efm32gg driver
|
support for this machine was just removed, so drop the now unused clk driver, too.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for a thermal power management framework to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
drop unused efm32gg driver
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['c', 'makefile']
| 2
| 0
| 85
|
--- diff --git a/drivers/clk/makefile b/drivers/clk/makefile --- a/drivers/clk/makefile +++ b/drivers/clk/makefile -obj-$(config_arch_efm32) += clk-efm32gg.o diff --git a/drivers/clk/clk-efm32gg.c b/drivers/clk/clk-efm32gg.c --- a/drivers/clk/clk-efm32gg.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-only -/* - * copyright (c) 2013 pengutronix - * uwe kleine-koenig <u.kleine-koenig@pengutronix.de> - */ -#include <linux/io.h> -#include <linux/clk-provider.h> -#include <linux/of.h> -#include <linux/of_address.h> -#include <linux/slab.h> - -#include <dt-bindings/clock/efm32-cmu.h> - -#define cmu_hfperclken0 0x44 -#define cmu_max_clks 37 - -static struct clk_hw_onecell_data *clk_data; - -static void __init efm32gg_cmu_init(struct device_node *np) -{ - int i; - void __iomem *base; - struct clk_hw **hws; - - clk_data = kzalloc(struct_size(clk_data, hws, cmu_max_clks), - gfp_kernel); - - if (!clk_data) - return; - - hws = clk_data->hws; - - for (i = 0; i < cmu_max_clks; ++i) - hws[i] = err_ptr(-enoent); - - base = of_iomap(np, 0); - if (!base) { - pr_warn("failed to map address range for efm32gg,cmu node "); - return; - } - - hws[clk_hfxo] = clk_hw_register_fixed_rate(null, "hfxo", null, 0, - 48000000); - - hws[clk_hfperclkusart0] = clk_hw_register_gate(null, "hfperclk.usart0", - "hfxo", 0, base + cmu_hfperclken0, 0, 0, null); - hws[clk_hfperclkusart1] = clk_hw_register_gate(null, "hfperclk.usart1", - "hfxo", 0, base + cmu_hfperclken0, 1, 0, null); - hws[clk_hfperclkusart2] = clk_hw_register_gate(null, "hfperclk.usart2", - "hfxo", 0, base + cmu_hfperclken0, 2, 0, null); - hws[clk_hfperclkuart0] = clk_hw_register_gate(null, "hfperclk.uart0", - "hfxo", 0, base + cmu_hfperclken0, 3, 0, null); - hws[clk_hfperclkuart1] = clk_hw_register_gate(null, "hfperclk.uart1", - "hfxo", 0, base + cmu_hfperclken0, 4, 0, null); - hws[clk_hfperclktimer0] = clk_hw_register_gate(null, "hfperclk.timer0", - "hfxo", 0, base + cmu_hfperclken0, 5, 0, null); - hws[clk_hfperclktimer1] = 
clk_hw_register_gate(null, "hfperclk.timer1", - "hfxo", 0, base + cmu_hfperclken0, 6, 0, null); - hws[clk_hfperclktimer2] = clk_hw_register_gate(null, "hfperclk.timer2", - "hfxo", 0, base + cmu_hfperclken0, 7, 0, null); - hws[clk_hfperclktimer3] = clk_hw_register_gate(null, "hfperclk.timer3", - "hfxo", 0, base + cmu_hfperclken0, 8, 0, null); - hws[clk_hfperclkacmp0] = clk_hw_register_gate(null, "hfperclk.acmp0", - "hfxo", 0, base + cmu_hfperclken0, 9, 0, null); - hws[clk_hfperclkacmp1] = clk_hw_register_gate(null, "hfperclk.acmp1", - "hfxo", 0, base + cmu_hfperclken0, 10, 0, null); - hws[clk_hfperclki2c0] = clk_hw_register_gate(null, "hfperclk.i2c0", - "hfxo", 0, base + cmu_hfperclken0, 11, 0, null); - hws[clk_hfperclki2c1] = clk_hw_register_gate(null, "hfperclk.i2c1", - "hfxo", 0, base + cmu_hfperclken0, 12, 0, null); - hws[clk_hfperclkgpio] = clk_hw_register_gate(null, "hfperclk.gpio", - "hfxo", 0, base + cmu_hfperclken0, 13, 0, null); - hws[clk_hfperclkvcmp] = clk_hw_register_gate(null, "hfperclk.vcmp", - "hfxo", 0, base + cmu_hfperclken0, 14, 0, null); - hws[clk_hfperclkprs] = clk_hw_register_gate(null, "hfperclk.prs", - "hfxo", 0, base + cmu_hfperclken0, 15, 0, null); - hws[clk_hfperclkadc0] = clk_hw_register_gate(null, "hfperclk.adc0", - "hfxo", 0, base + cmu_hfperclken0, 16, 0, null); - hws[clk_hfperclkdac0] = clk_hw_register_gate(null, "hfperclk.dac0", - "hfxo", 0, base + cmu_hfperclken0, 17, 0, null); - - of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_data); -} -clk_of_declare(efm32ggcmu, "efm32gg,cmu", efm32gg_cmu_init);
|
Clock
|
33034d7422db6fd85795fd4b1ef5780efa99a8af
|
uwe kleine-könig
|
drivers
|
clk
| |
clocksource/drivers/atlas: remove sirf atlas driver
|
the csr sirf prima2/atlas platforms are getting removed, so this driver is no longer needed.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for a thermal power management framework to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
remove sirf atlas driver
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['clocksource', 'atlas']
|
['kconfig', 'c', 'makefile']
| 3
| 0
| 288
|
--- diff --git a/drivers/clocksource/kconfig b/drivers/clocksource/kconfig --- a/drivers/clocksource/kconfig +++ b/drivers/clocksource/kconfig -config atlas7_timer - bool "atlas7 timer driver" if compile_test - select clksrc_mmio - help - enables support for the atlas7 timer. - diff --git a/drivers/clocksource/makefile b/drivers/clocksource/makefile --- a/drivers/clocksource/makefile +++ b/drivers/clocksource/makefile -obj-$(config_atlas7_timer) += timer-atlas7.o diff --git a/drivers/clocksource/timer-atlas7.c b/drivers/clocksource/timer-atlas7.c --- a/drivers/clocksource/timer-atlas7.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-or-later -/* - * system timer for csr sirfprimaii - * - * copyright (c) 2011 cambridge silicon radio limited, a csr plc group company. - */ - -#include <linux/kernel.h> -#include <linux/interrupt.h> -#include <linux/clockchips.h> -#include <linux/clocksource.h> -#include <linux/cpu.h> -#include <linux/bitops.h> -#include <linux/irq.h> -#include <linux/clk.h> -#include <linux/slab.h> -#include <linux/of.h> -#include <linux/of_irq.h> -#include <linux/of_address.h> -#include <linux/sched_clock.h> - -#define sirfsoc_timer_32counter_0_ctrl 0x0000 -#define sirfsoc_timer_32counter_1_ctrl 0x0004 -#define sirfsoc_timer_match_0 0x0018 -#define sirfsoc_timer_match_1 0x001c -#define sirfsoc_timer_counter_0 0x0048 -#define sirfsoc_timer_counter_1 0x004c -#define sirfsoc_timer_intr_status 0x0060 -#define sirfsoc_timer_watchdog_en 0x0064 -#define sirfsoc_timer_64counter_ctrl 0x0068 -#define sirfsoc_timer_64counter_lo 0x006c -#define sirfsoc_timer_64counter_hi 0x0070 -#define sirfsoc_timer_64counter_load_lo 0x0074 -#define sirfsoc_timer_64counter_load_hi 0x0078 -#define sirfsoc_timer_64counter_rlatched_lo 0x007c -#define sirfsoc_timer_64counter_rlatched_hi 0x0080 - -#define sirfsoc_timer_reg_cnt 6 - -static unsigned long atlas7_timer_rate; - -static const u32 sirfsoc_timer_reg_list[sirfsoc_timer_reg_cnt] = { - sirfsoc_timer_watchdog_en, - 
sirfsoc_timer_32counter_0_ctrl, - sirfsoc_timer_32counter_1_ctrl, - sirfsoc_timer_64counter_ctrl, - sirfsoc_timer_64counter_rlatched_lo, - sirfsoc_timer_64counter_rlatched_hi, -}; - -static u32 sirfsoc_timer_reg_val[sirfsoc_timer_reg_cnt]; - -static void __iomem *sirfsoc_timer_base; - -/* disable count and interrupt */ -static inline void sirfsoc_timer_count_disable(int idx) -{ - writel_relaxed(readl_relaxed(sirfsoc_timer_base + sirfsoc_timer_32counter_0_ctrl + 4 * idx) & ~0x7, - sirfsoc_timer_base + sirfsoc_timer_32counter_0_ctrl + 4 * idx); -} - -/* enable count and interrupt */ -static inline void sirfsoc_timer_count_enable(int idx) -{ - writel_relaxed(readl_relaxed(sirfsoc_timer_base + sirfsoc_timer_32counter_0_ctrl + 4 * idx) | 0x3, - sirfsoc_timer_base + sirfsoc_timer_32counter_0_ctrl + 4 * idx); -} - -/* timer interrupt handler */ -static irqreturn_t sirfsoc_timer_interrupt(int irq, void *dev_id) -{ - struct clock_event_device *ce = dev_id; - int cpu = smp_processor_id(); - - /* clear timer interrupt */ - writel_relaxed(bit(cpu), sirfsoc_timer_base + sirfsoc_timer_intr_status); - - if (clockevent_state_oneshot(ce)) - sirfsoc_timer_count_disable(cpu); - - ce->event_handler(ce); - - return irq_handled; -} - -/* read 64-bit timer counter */ -static u64 sirfsoc_timer_read(struct clocksource *cs) -{ - u64 cycles; - - writel_relaxed((readl_relaxed(sirfsoc_timer_base + sirfsoc_timer_64counter_ctrl) | - bit(0)) & ~bit(1), sirfsoc_timer_base + sirfsoc_timer_64counter_ctrl); - - cycles = readl_relaxed(sirfsoc_timer_base + sirfsoc_timer_64counter_rlatched_hi); - cycles = (cycles << 32) | readl_relaxed(sirfsoc_timer_base + sirfsoc_timer_64counter_rlatched_lo); - - return cycles; -} - -static int sirfsoc_timer_set_next_event(unsigned long delta, - struct clock_event_device *ce) -{ - int cpu = smp_processor_id(); - - /* disable timer first, then modify the related registers */ - sirfsoc_timer_count_disable(cpu); - - writel_relaxed(0, sirfsoc_timer_base + 
sirfsoc_timer_counter_0 + - 4 * cpu); - writel_relaxed(delta, sirfsoc_timer_base + sirfsoc_timer_match_0 + - 4 * cpu); - - /* enable the tick */ - sirfsoc_timer_count_enable(cpu); - - return 0; -} - -/* oneshot is enabled in set_next_event */ -static int sirfsoc_timer_shutdown(struct clock_event_device *evt) -{ - sirfsoc_timer_count_disable(smp_processor_id()); - return 0; -} - -static void sirfsoc_clocksource_suspend(struct clocksource *cs) -{ - int i; - - for (i = 0; i < sirfsoc_timer_reg_cnt; i++) - sirfsoc_timer_reg_val[i] = readl_relaxed(sirfsoc_timer_base + sirfsoc_timer_reg_list[i]); -} - -static void sirfsoc_clocksource_resume(struct clocksource *cs) -{ - int i; - - for (i = 0; i < sirfsoc_timer_reg_cnt - 2; i++) - writel_relaxed(sirfsoc_timer_reg_val[i], sirfsoc_timer_base + sirfsoc_timer_reg_list[i]); - - writel_relaxed(sirfsoc_timer_reg_val[sirfsoc_timer_reg_cnt - 2], - sirfsoc_timer_base + sirfsoc_timer_64counter_load_lo); - writel_relaxed(sirfsoc_timer_reg_val[sirfsoc_timer_reg_cnt - 1], - sirfsoc_timer_base + sirfsoc_timer_64counter_load_hi); - - writel_relaxed(readl_relaxed(sirfsoc_timer_base + sirfsoc_timer_64counter_ctrl) | - bit(1) | bit(0), sirfsoc_timer_base + sirfsoc_timer_64counter_ctrl); -} - -static struct clock_event_device __percpu *sirfsoc_clockevent; - -static struct clocksource sirfsoc_clocksource = { - .name = "sirfsoc_clocksource", - .rating = 200, - .mask = clocksource_mask(64), - .flags = clock_source_is_continuous, - .read = sirfsoc_timer_read, - .suspend = sirfsoc_clocksource_suspend, - .resume = sirfsoc_clocksource_resume, -}; - -static unsigned int sirfsoc_timer_irq, sirfsoc_timer1_irq; - -static int sirfsoc_local_timer_starting_cpu(unsigned int cpu) -{ - struct clock_event_device *ce = per_cpu_ptr(sirfsoc_clockevent, cpu); - unsigned int irq; - const char *name; - - if (cpu == 0) { - irq = sirfsoc_timer_irq; - name = "sirfsoc_timer0"; - } else { - irq = sirfsoc_timer1_irq; - name = "sirfsoc_timer1"; - } - - ce->irq = irq; - 
ce->name = "local_timer"; - ce->features = clock_evt_feat_oneshot; - ce->rating = 200; - ce->set_state_shutdown = sirfsoc_timer_shutdown; - ce->set_state_oneshot = sirfsoc_timer_shutdown; - ce->tick_resume = sirfsoc_timer_shutdown; - ce->set_next_event = sirfsoc_timer_set_next_event; - clockevents_calc_mult_shift(ce, atlas7_timer_rate, 60); - ce->max_delta_ns = clockevent_delta2ns(-2, ce); - ce->max_delta_ticks = (unsigned long)-2; - ce->min_delta_ns = clockevent_delta2ns(2, ce); - ce->min_delta_ticks = 2; - ce->cpumask = cpumask_of(cpu); - - bug_on(request_irq(ce->irq, sirfsoc_timer_interrupt, - irqf_timer | irqf_nobalancing, name, ce)); - irq_force_affinity(ce->irq, cpumask_of(cpu)); - - clockevents_register_device(ce); - return 0; -} - -static int sirfsoc_local_timer_dying_cpu(unsigned int cpu) -{ - struct clock_event_device *ce = per_cpu_ptr(sirfsoc_clockevent, cpu); - - sirfsoc_timer_count_disable(1); - - if (cpu == 0) - free_irq(sirfsoc_timer_irq, ce); - else - free_irq(sirfsoc_timer1_irq, ce); - return 0; -} - -static int __init sirfsoc_clockevent_init(void) -{ - sirfsoc_clockevent = alloc_percpu(struct clock_event_device); - bug_on(!sirfsoc_clockevent); - - /* install and invoke hotplug callbacks */ - return cpuhp_setup_state(cpuhp_ap_marco_timer_starting, - "clockevents/marco:starting", - sirfsoc_local_timer_starting_cpu, - sirfsoc_local_timer_dying_cpu); -} - -/* initialize the kernel jiffy timer source */ -static int __init sirfsoc_atlas7_timer_init(struct device_node *np) -{ - struct clk *clk; - - clk = of_clk_get(np, 0); - bug_on(is_err(clk)); - - bug_on(clk_prepare_enable(clk)); - - atlas7_timer_rate = clk_get_rate(clk); - - /* timer dividers: 0, not divided */ - writel_relaxed(0, sirfsoc_timer_base + sirfsoc_timer_64counter_ctrl); - writel_relaxed(0, sirfsoc_timer_base + sirfsoc_timer_32counter_0_ctrl); - writel_relaxed(0, sirfsoc_timer_base + sirfsoc_timer_32counter_1_ctrl); - - /* initialize timer counters to 0 */ - writel_relaxed(0, 
sirfsoc_timer_base + sirfsoc_timer_64counter_load_lo); - writel_relaxed(0, sirfsoc_timer_base + sirfsoc_timer_64counter_load_hi); - writel_relaxed(readl_relaxed(sirfsoc_timer_base + sirfsoc_timer_64counter_ctrl) | - bit(1) | bit(0), sirfsoc_timer_base + sirfsoc_timer_64counter_ctrl); - writel_relaxed(0, sirfsoc_timer_base + sirfsoc_timer_counter_0); - writel_relaxed(0, sirfsoc_timer_base + sirfsoc_timer_counter_1); - - /* clear all interrupts */ - writel_relaxed(0xffff, sirfsoc_timer_base + sirfsoc_timer_intr_status); - - bug_on(clocksource_register_hz(&sirfsoc_clocksource, atlas7_timer_rate)); - - return sirfsoc_clockevent_init(); -} - -static int __init sirfsoc_of_timer_init(struct device_node *np) -{ - sirfsoc_timer_base = of_iomap(np, 0); - if (!sirfsoc_timer_base) { - pr_err("unable to map timer cpu registers "); - return -enxio; - } - - sirfsoc_timer_irq = irq_of_parse_and_map(np, 0); - if (!sirfsoc_timer_irq) { - pr_err("no irq passed for timer0 via dt "); - return -einval; - } - - sirfsoc_timer1_irq = irq_of_parse_and_map(np, 1); - if (!sirfsoc_timer1_irq) { - pr_err("no irq passed for timer1 via dt "); - return -einval; - } - - return sirfsoc_atlas7_timer_init(np); -} -timer_of_declare(sirfsoc_atlas7_timer, "sirf,atlas7-tick", sirfsoc_of_timer_init);
|
Clock
|
446262b27285e86bfc078d5602d7e047a351d536
|
arnd bergmann barry song baohua kernel org
|
drivers
|
clocksource
| |
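The atlas7 diff above removes a sirfsoc_timer_read() that latches the hardware's 64-bit counter and then reads it back as two 32-bit halves (hi, then lo). A minimal userspace sketch of that combine step, assuming plain variables in place of the MMIO latch registers (all names here are illustrative, not the driver's):

```c
#include <stdint.h>

/* Stand-ins for the latched MMIO register pair; in the real driver these
 * were readl_relaxed() accesses on the ioremapped timer block, taken only
 * after writing the latch bit so both halves come from one instant. */
uint32_t latched_hi;
uint32_t latched_lo;

/* Combine the two latched 32-bit halves into one 64-bit cycle count,
 * high word shifted up, low word or'ed in - the same arithmetic the
 * removed sirfsoc_timer_read() performed. */
uint64_t read_latched_counter(void)
{
    uint64_t cycles = latched_hi;
    return (cycles << 32) | latched_lo;
}
```

The hardware latch is what makes the two separate 32-bit reads safe: without it, the low word could wrap between the hi and lo reads and produce a torn value.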
clocksource/drivers/prima: remove sirf prima driver
|
the csr sirf prima2/atlas platforms are getting removed, so this driver is no longer needed.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
remove sirf prima driver
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['clocksource', 'prima']
|
['kconfig', 'c', 'makefile']
| 3
| 0
| 249
|
--- diff --git a/drivers/clocksource/kconfig b/drivers/clocksource/kconfig --- a/drivers/clocksource/kconfig +++ b/drivers/clocksource/kconfig -config prima2_timer - bool "prima2 timer driver" if compile_test - select clksrc_mmio - help - enables support for the prima2 timer. - diff --git a/drivers/clocksource/makefile b/drivers/clocksource/makefile --- a/drivers/clocksource/makefile +++ b/drivers/clocksource/makefile -obj-$(config_prima2_timer) += timer-prima2.o diff --git a/drivers/clocksource/timer-prima2.c b/drivers/clocksource/timer-prima2.c --- a/drivers/clocksource/timer-prima2.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-or-later -/* - * system timer for csr sirfprimaii - * - * copyright (c) 2011 cambridge silicon radio limited, a csr plc group company. - */ - -#include <linux/kernel.h> -#include <linux/interrupt.h> -#include <linux/clockchips.h> -#include <linux/clocksource.h> -#include <linux/bitops.h> -#include <linux/irq.h> -#include <linux/clk.h> -#include <linux/err.h> -#include <linux/slab.h> -#include <linux/of.h> -#include <linux/of_irq.h> -#include <linux/of_address.h> -#include <linux/sched_clock.h> - -#define prima2_clock_freq 1000000 - -#define sirfsoc_timer_counter_lo 0x0000 -#define sirfsoc_timer_counter_hi 0x0004 -#define sirfsoc_timer_match_0 0x0008 -#define sirfsoc_timer_match_1 0x000c -#define sirfsoc_timer_match_2 0x0010 -#define sirfsoc_timer_match_3 0x0014 -#define sirfsoc_timer_match_4 0x0018 -#define sirfsoc_timer_match_5 0x001c -#define sirfsoc_timer_status 0x0020 -#define sirfsoc_timer_int_en 0x0024 -#define sirfsoc_timer_watchdog_en 0x0028 -#define sirfsoc_timer_div 0x002c -#define sirfsoc_timer_latch 0x0030 -#define sirfsoc_timer_latched_lo 0x0034 -#define sirfsoc_timer_latched_hi 0x0038 - -#define sirfsoc_timer_wdt_index 5 - -#define sirfsoc_timer_latch_bit bit(0) - -#define sirfsoc_timer_reg_cnt 11 - -static const u32 sirfsoc_timer_reg_list[sirfsoc_timer_reg_cnt] = { - sirfsoc_timer_match_0, sirfsoc_timer_match_1, 
sirfsoc_timer_match_2, - sirfsoc_timer_match_3, sirfsoc_timer_match_4, sirfsoc_timer_match_5, - sirfsoc_timer_int_en, sirfsoc_timer_watchdog_en, sirfsoc_timer_div, - sirfsoc_timer_latched_lo, sirfsoc_timer_latched_hi, -}; - -static u32 sirfsoc_timer_reg_val[sirfsoc_timer_reg_cnt]; - -static void __iomem *sirfsoc_timer_base; - -/* timer0 interrupt handler */ -static irqreturn_t sirfsoc_timer_interrupt(int irq, void *dev_id) -{ - struct clock_event_device *ce = dev_id; - - warn_on(!(readl_relaxed(sirfsoc_timer_base + sirfsoc_timer_status) & - bit(0))); - - /* clear timer0 interrupt */ - writel_relaxed(bit(0), sirfsoc_timer_base + sirfsoc_timer_status); - - ce->event_handler(ce); - - return irq_handled; -} - -/* read 64-bit timer counter */ -static u64 notrace sirfsoc_timer_read(struct clocksource *cs) -{ - u64 cycles; - - /* latch the 64-bit timer counter */ - writel_relaxed(sirfsoc_timer_latch_bit, - sirfsoc_timer_base + sirfsoc_timer_latch); - cycles = readl_relaxed(sirfsoc_timer_base + sirfsoc_timer_latched_hi); - cycles = (cycles << 32) | - readl_relaxed(sirfsoc_timer_base + sirfsoc_timer_latched_lo); - - return cycles; -} - -static int sirfsoc_timer_set_next_event(unsigned long delta, - struct clock_event_device *ce) -{ - unsigned long now, next; - - writel_relaxed(sirfsoc_timer_latch_bit, - sirfsoc_timer_base + sirfsoc_timer_latch); - now = readl_relaxed(sirfsoc_timer_base + sirfsoc_timer_latched_lo); - next = now + delta; - writel_relaxed(next, sirfsoc_timer_base + sirfsoc_timer_match_0); - writel_relaxed(sirfsoc_timer_latch_bit, - sirfsoc_timer_base + sirfsoc_timer_latch); - now = readl_relaxed(sirfsoc_timer_base + sirfsoc_timer_latched_lo); - - return next - now > delta ? 
-etime : 0; -} - -static int sirfsoc_timer_shutdown(struct clock_event_device *evt) -{ - u32 val = readl_relaxed(sirfsoc_timer_base + sirfsoc_timer_int_en); - - writel_relaxed(val & ~bit(0), - sirfsoc_timer_base + sirfsoc_timer_int_en); - return 0; -} - -static int sirfsoc_timer_set_oneshot(struct clock_event_device *evt) -{ - u32 val = readl_relaxed(sirfsoc_timer_base + sirfsoc_timer_int_en); - - writel_relaxed(val | bit(0), sirfsoc_timer_base + sirfsoc_timer_int_en); - return 0; -} - -static void sirfsoc_clocksource_suspend(struct clocksource *cs) -{ - int i; - - writel_relaxed(sirfsoc_timer_latch_bit, - sirfsoc_timer_base + sirfsoc_timer_latch); - - for (i = 0; i < sirfsoc_timer_reg_cnt; i++) - sirfsoc_timer_reg_val[i] = - readl_relaxed(sirfsoc_timer_base + - sirfsoc_timer_reg_list[i]); -} - -static void sirfsoc_clocksource_resume(struct clocksource *cs) -{ - int i; - - for (i = 0; i < sirfsoc_timer_reg_cnt - 2; i++) - writel_relaxed(sirfsoc_timer_reg_val[i], - sirfsoc_timer_base + sirfsoc_timer_reg_list[i]); - - writel_relaxed(sirfsoc_timer_reg_val[sirfsoc_timer_reg_cnt - 2], - sirfsoc_timer_base + sirfsoc_timer_counter_lo); - writel_relaxed(sirfsoc_timer_reg_val[sirfsoc_timer_reg_cnt - 1], - sirfsoc_timer_base + sirfsoc_timer_counter_hi); -} - -static struct clock_event_device sirfsoc_clockevent = { - .name = "sirfsoc_clockevent", - .rating = 200, - .features = clock_evt_feat_oneshot, - .set_state_shutdown = sirfsoc_timer_shutdown, - .set_state_oneshot = sirfsoc_timer_set_oneshot, - .set_next_event = sirfsoc_timer_set_next_event, -}; - -static struct clocksource sirfsoc_clocksource = { - .name = "sirfsoc_clocksource", - .rating = 200, - .mask = clocksource_mask(64), - .flags = clock_source_is_continuous, - .read = sirfsoc_timer_read, - .suspend = sirfsoc_clocksource_suspend, - .resume = sirfsoc_clocksource_resume, -}; - -/* overwrite weak default sched_clock with more precise one */ -static u64 notrace sirfsoc_read_sched_clock(void) -{ - return 
sirfsoc_timer_read(null); -} - -static void __init sirfsoc_clockevent_init(void) -{ - sirfsoc_clockevent.cpumask = cpumask_of(0); - clockevents_config_and_register(&sirfsoc_clockevent, prima2_clock_freq, - 2, -2); -} - -/* initialize the kernel jiffy timer source */ -static int __init sirfsoc_prima2_timer_init(struct device_node *np) -{ - unsigned long rate; - unsigned int irq; - struct clk *clk; - int ret; - - clk = of_clk_get(np, 0); - if (is_err(clk)) { - pr_err("failed to get clock "); - return ptr_err(clk); - } - - ret = clk_prepare_enable(clk); - if (ret) { - pr_err("failed to enable clock "); - return ret; - } - - rate = clk_get_rate(clk); - - if (rate < prima2_clock_freq || rate % prima2_clock_freq) { - pr_err("invalid clock rate "); - return -einval; - } - - sirfsoc_timer_base = of_iomap(np, 0); - if (!sirfsoc_timer_base) { - pr_err("unable to map timer cpu registers "); - return -enxio; - } - - irq = irq_of_parse_and_map(np, 0); - - writel_relaxed(rate / prima2_clock_freq / 2 - 1, - sirfsoc_timer_base + sirfsoc_timer_div); - writel_relaxed(0, sirfsoc_timer_base + sirfsoc_timer_counter_lo); - writel_relaxed(0, sirfsoc_timer_base + sirfsoc_timer_counter_hi); - writel_relaxed(bit(0), sirfsoc_timer_base + sirfsoc_timer_status); - - ret = clocksource_register_hz(&sirfsoc_clocksource, prima2_clock_freq); - if (ret) { - pr_err("failed to register clocksource "); - return ret; - } - - sched_clock_register(sirfsoc_read_sched_clock, 64, prima2_clock_freq); - - ret = request_irq(irq, sirfsoc_timer_interrupt, irqf_timer, - "sirfsoc_timer0", &sirfsoc_clockevent); - if (ret) { - pr_err("failed to setup irq "); - return ret; - } - - sirfsoc_clockevent_init(); - - return 0; -} -timer_of_declare(sirfsoc_prima2_timer, - "sirf,prima2-tick", sirfsoc_prima2_timer_init);
|
Clock
|
a8d80235808c8359b614412da76dc10518ea9090
|
arnd bergmann barry song baohua kernel org
|
drivers
|
clocksource
| |
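The prima2 driver's sirfsoc_timer_set_next_event() in the diff above re-reads the counter after programming the match register and returns -ETIME when the deadline has already slipped past. A sketch of that wraparound-safe check on a free-running 32-bit counter (the function name and parameter layout are assumptions for illustration, not the driver's signature):

```c
#include <stdint.h>

/* now   - counter value read before programming the match register
 * now2  - counter value re-read afterwards
 * delta - requested number of ticks until the event
 * Returns 1 if the match value was already passed (the driver would
 * return -ETIME), 0 if the event is still pending. */
int next_event_missed(uint32_t now, uint32_t now2, uint32_t delta)
{
    uint32_t next = now + delta;    /* wraps naturally in 32 bits */

    /* If now2 is still between now and next, (next - now2) <= delta.
     * If the counter already ran past next, the unsigned subtraction
     * wraps to a large value and exceeds delta. */
    return (uint32_t)(next - now2) > delta;
}
```

The unsigned modular subtraction is the whole trick: it gives the right answer even when `next` wrapped past zero, so no explicit overflow branch is needed.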
clocksource/drivers/tango: remove tango driver
|
the tango platform is getting removed, so the driver is no longer needed.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
remove tango driver
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['clocksource', 'tango']
|
['kconfig', 'c', 'makefile']
| 3
| 0
| 66
|
--- diff --git a/drivers/clocksource/kconfig b/drivers/clocksource/kconfig --- a/drivers/clocksource/kconfig +++ b/drivers/clocksource/kconfig -config clksrc_tango_xtal - bool "clocksource for tango soc" if compile_test - depends on arm - select timer_of - select clksrc_mmio - help - this enables the clocksource for tango soc. - diff --git a/drivers/clocksource/makefile b/drivers/clocksource/makefile --- a/drivers/clocksource/makefile +++ b/drivers/clocksource/makefile -obj-$(config_clksrc_tango_xtal) += timer-tango-xtal.o diff --git a/drivers/clocksource/timer-tango-xtal.c b/drivers/clocksource/timer-tango-xtal.c --- a/drivers/clocksource/timer-tango-xtal.c +++ /dev/null -// spdx-license-identifier: gpl-2.0 -#include <linux/clocksource.h> -#include <linux/sched_clock.h> -#include <linux/of_address.h> -#include <linux/printk.h> -#include <linux/delay.h> -#include <linux/init.h> -#include <linux/clk.h> - -static void __iomem *xtal_in_cnt; -static struct delay_timer delay_timer; - -static unsigned long notrace read_xtal_counter(void) -{ - return readl_relaxed(xtal_in_cnt); -} - -static u64 notrace read_sched_clock(void) -{ - return read_xtal_counter(); -} - -static int __init tango_clocksource_init(struct device_node *np) -{ - struct clk *clk; - int xtal_freq, ret; - - xtal_in_cnt = of_iomap(np, 0); - if (xtal_in_cnt == null) { - pr_err("%pof: invalid address ", np); - return -enxio; - } - - clk = of_clk_get(np, 0); - if (is_err(clk)) { - pr_err("%pof: invalid clock ", np); - return ptr_err(clk); - } - - xtal_freq = clk_get_rate(clk); - delay_timer.freq = xtal_freq; - delay_timer.read_current_timer = read_xtal_counter; - - ret = clocksource_mmio_init(xtal_in_cnt, "tango-xtal", xtal_freq, 350, - 32, clocksource_mmio_readl_up); - if (ret) { - pr_err("%pof: registration failed ", np); - return ret; - } - - sched_clock_register(read_sched_clock, 32, xtal_freq); - register_current_timer_delay(&delay_timer); - - return 0; -} - -timer_of_declare(tango, "sigma,tick-counter", 
tango_clocksource_init);
|
Clock
|
8fdb44176928fb3ef3e10d97eaf1aed82c90bd58
|
arnd bergmann mans rullgard mans mansr com
|
drivers
|
clocksource
| |
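The tango driver above hands its free-running xtal counter to sched_clock_register(), after which the timekeeping core converts raw cycles to nanoseconds with a multiply-and-shift rather than a division on every read. A simplified sketch of that conversion; the fixed shift of 24 is an assumption made here for illustration, whereas the kernel derives mult/shift per clock via clocks_calc_mult_shift():

```c
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

/* Convert a cycle count at the given counter frequency (Hz) to
 * nanoseconds using ns = (cycles * mult) >> shift, with
 * mult = (NSEC_PER_SEC << shift) / freq precomputed so the hot
 * path needs only a multiply and a shift. */
uint64_t cycles_to_ns(uint64_t cycles, uint64_t freq)
{
    const unsigned int shift = 24;                    /* assumed, see above */
    uint64_t mult = (NSEC_PER_SEC << shift) / freq;   /* done once per clock */

    return (cycles * mult) >> shift;
}
```

Picking the shift is a precision/overflow trade-off: a larger shift reduces rounding error in mult, but shrinks the cycle range before `cycles * mult` overflows 64 bits, which is why the kernel computes it from the counter width and frequency instead of hard-coding it.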
clocksource/drivers/u300: remove the u300 driver
|
the st-ericsson u300 platform is getting removed, so this driver is no longer needed.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
remove the u300 driver
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['clocksource', 'u300']
|
['txt', 'kconfig', 'c', 'makefile']
| 4
| 0
| 483
|
--- diff --git a/documentation/devicetree/bindings/timer/stericsson-u300-apptimer.txt b/documentation/devicetree/bindings/timer/stericsson-u300-apptimer.txt --- a/documentation/devicetree/bindings/timer/stericsson-u300-apptimer.txt +++ /dev/null -st-ericsson u300 apptimer - -required properties: - -- compatible : should be "stericsson,u300-apptimer" -- reg : specifies base physical address and size of the registers. -- interrupts : a list of 4 interrupts; one for each subtimer. these - are, in order: os (operating system), dd (device driver) both - adopted for epoc/symbian with two specific irqs for these tasks, - then gp1 and gp2, which are general-purpose timers. - -example: - -timer { - compatible = "stericsson,u300-apptimer"; - reg = <0xc0014000 0x1000>; - interrupts = <24 25 26 27>; -}; diff --git a/drivers/clocksource/kconfig b/drivers/clocksource/kconfig --- a/drivers/clocksource/kconfig +++ b/drivers/clocksource/kconfig -config u300_timer - bool "u300 timer driver" if compile_test - depends on arm - select clksrc_mmio - help - enables support for the u300 timer. - diff --git a/drivers/clocksource/makefile b/drivers/clocksource/makefile --- a/drivers/clocksource/makefile +++ b/drivers/clocksource/makefile -obj-$(config_u300_timer) += timer-u300.o diff --git a/drivers/clocksource/timer-u300.c b/drivers/clocksource/timer-u300.c --- a/drivers/clocksource/timer-u300.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-only -/* - * copyright (c) 2007-2009 st-ericsson ab - * timer coh 901 328, runs the os timer interrupt. 
- * author: linus walleij <linus.walleij@stericsson.com> - */ -#include <linux/interrupt.h> -#include <linux/time.h> -#include <linux/timex.h> -#include <linux/clockchips.h> -#include <linux/clocksource.h> -#include <linux/types.h> -#include <linux/io.h> -#include <linux/clk.h> -#include <linux/err.h> -#include <linux/irq.h> -#include <linux/delay.h> -#include <linux/of_address.h> -#include <linux/of_irq.h> -#include <linux/sched_clock.h> - -/* generic stuff */ -#include <asm/mach/map.h> -#include <asm/mach/time.h> - -/* - * app side special timer registers - * this timer contains four timers which can fire an interrupt each. - * os (operating system) timer @ 32768 hz - * dd (device driver) timer @ 1 khz - * gp1 (general purpose 1) timer @ 1mhz - * gp2 (general purpose 2) timer @ 1mhz - */ - -/* reset os timer 32bit (-/w) */ -#define u300_timer_app_rost (0x0000) -#define u300_timer_app_rost_timer_reset (0x00000000) -/* enable os timer 32bit (-/w) */ -#define u300_timer_app_eost (0x0004) -#define u300_timer_app_eost_timer_enable (0x00000000) -/* disable os timer 32bit (-/w) */ -#define u300_timer_app_dost (0x0008) -#define u300_timer_app_dost_timer_disable (0x00000000) -/* os timer mode register 32bit (-/w) */ -#define u300_timer_app_sostm (0x000c) -#define u300_timer_app_sostm_mode_continuous (0x00000000) -#define u300_timer_app_sostm_mode_one_shot (0x00000001) -/* os timer status register 32bit (r/-) */ -#define u300_timer_app_osts (0x0010) -#define u300_timer_app_osts_timer_state_mask (0x0000000f) -#define u300_timer_app_osts_timer_state_idle (0x00000001) -#define u300_timer_app_osts_timer_state_active (0x00000002) -#define u300_timer_app_osts_enable_ind (0x00000010) -#define u300_timer_app_osts_mode_mask (0x00000020) -#define u300_timer_app_osts_mode_continuous (0x00000000) -#define u300_timer_app_osts_mode_one_shot (0x00000020) -#define u300_timer_app_osts_irq_enabled_ind (0x00000040) -#define u300_timer_app_osts_irq_pending_ind (0x00000080) -/* os timer 
current count register 32bit (r/-) */ -#define u300_timer_app_ostcc (0x0014) -/* os timer terminal count register 32bit (r/w) */ -#define u300_timer_app_osttc (0x0018) -/* os timer interrupt enable register 32bit (-/w) */ -#define u300_timer_app_ostie (0x001c) -#define u300_timer_app_ostie_irq_disable (0x00000000) -#define u300_timer_app_ostie_irq_enable (0x00000001) -/* os timer interrupt acknowledge register 32bit (-/w) */ -#define u300_timer_app_ostia (0x0020) -#define u300_timer_app_ostia_irq_ack (0x00000080) - -/* reset dd timer 32bit (-/w) */ -#define u300_timer_app_rddt (0x0040) -#define u300_timer_app_rddt_timer_reset (0x00000000) -/* enable dd timer 32bit (-/w) */ -#define u300_timer_app_eddt (0x0044) -#define u300_timer_app_eddt_timer_enable (0x00000000) -/* disable dd timer 32bit (-/w) */ -#define u300_timer_app_dddt (0x0048) -#define u300_timer_app_dddt_timer_disable (0x00000000) -/* dd timer mode register 32bit (-/w) */ -#define u300_timer_app_sddtm (0x004c) -#define u300_timer_app_sddtm_mode_continuous (0x00000000) -#define u300_timer_app_sddtm_mode_one_shot (0x00000001) -/* dd timer status register 32bit (r/-) */ -#define u300_timer_app_ddts (0x0050) -#define u300_timer_app_ddts_timer_state_mask (0x0000000f) -#define u300_timer_app_ddts_timer_state_idle (0x00000001) -#define u300_timer_app_ddts_timer_state_active (0x00000002) -#define u300_timer_app_ddts_enable_ind (0x00000010) -#define u300_timer_app_ddts_mode_mask (0x00000020) -#define u300_timer_app_ddts_mode_continuous (0x00000000) -#define u300_timer_app_ddts_mode_one_shot (0x00000020) -#define u300_timer_app_ddts_irq_enabled_ind (0x00000040) -#define u300_timer_app_ddts_irq_pending_ind (0x00000080) -/* dd timer current count register 32bit (r/-) */ -#define u300_timer_app_ddtcc (0x0054) -/* dd timer terminal count register 32bit (r/w) */ -#define u300_timer_app_ddttc (0x0058) -/* dd timer interrupt enable register 32bit (-/w) */ -#define u300_timer_app_ddtie (0x005c) -#define 
u300_timer_app_ddtie_irq_disable (0x00000000) -#define u300_timer_app_ddtie_irq_enable (0x00000001) -/* dd timer interrupt acknowledge register 32bit (-/w) */ -#define u300_timer_app_ddtia (0x0060) -#define u300_timer_app_ddtia_irq_ack (0x00000080) - -/* reset gp1 timer 32bit (-/w) */ -#define u300_timer_app_rgpt1 (0x0080) -#define u300_timer_app_rgpt1_timer_reset (0x00000000) -/* enable gp1 timer 32bit (-/w) */ -#define u300_timer_app_egpt1 (0x0084) -#define u300_timer_app_egpt1_timer_enable (0x00000000) -/* disable gp1 timer 32bit (-/w) */ -#define u300_timer_app_dgpt1 (0x0088) -#define u300_timer_app_dgpt1_timer_disable (0x00000000) -/* gp1 timer mode register 32bit (-/w) */ -#define u300_timer_app_sgpt1m (0x008c) -#define u300_timer_app_sgpt1m_mode_continuous (0x00000000) -#define u300_timer_app_sgpt1m_mode_one_shot (0x00000001) -/* gp1 timer status register 32bit (r/-) */ -#define u300_timer_app_gpt1s (0x0090) -#define u300_timer_app_gpt1s_timer_state_mask (0x0000000f) -#define u300_timer_app_gpt1s_timer_state_idle (0x00000001) -#define u300_timer_app_gpt1s_timer_state_active (0x00000002) -#define u300_timer_app_gpt1s_enable_ind (0x00000010) -#define u300_timer_app_gpt1s_mode_mask (0x00000020) -#define u300_timer_app_gpt1s_mode_continuous (0x00000000) -#define u300_timer_app_gpt1s_mode_one_shot (0x00000020) -#define u300_timer_app_gpt1s_irq_enabled_ind (0x00000040) -#define u300_timer_app_gpt1s_irq_pending_ind (0x00000080) -/* gp1 timer current count register 32bit (r/-) */ -#define u300_timer_app_gpt1cc (0x0094) -/* gp1 timer terminal count register 32bit (r/w) */ -#define u300_timer_app_gpt1tc (0x0098) -/* gp1 timer interrupt enable register 32bit (-/w) */ -#define u300_timer_app_gpt1ie (0x009c) -#define u300_timer_app_gpt1ie_irq_disable (0x00000000) -#define u300_timer_app_gpt1ie_irq_enable (0x00000001) -/* gp1 timer interrupt acknowledge register 32bit (-/w) */ -#define u300_timer_app_gpt1ia (0x00a0) -#define u300_timer_app_gpt1ia_irq_ack (0x00000080) - 
-/* reset gp2 timer 32bit (-/w) */ -#define u300_timer_app_rgpt2 (0x00c0) -#define u300_timer_app_rgpt2_timer_reset (0x00000000) -/* enable gp2 timer 32bit (-/w) */ -#define u300_timer_app_egpt2 (0x00c4) -#define u300_timer_app_egpt2_timer_enable (0x00000000) -/* disable gp2 timer 32bit (-/w) */ -#define u300_timer_app_dgpt2 (0x00c8) -#define u300_timer_app_dgpt2_timer_disable (0x00000000) -/* gp2 timer mode register 32bit (-/w) */ -#define u300_timer_app_sgpt2m (0x00cc) -#define u300_timer_app_sgpt2m_mode_continuous (0x00000000) -#define u300_timer_app_sgpt2m_mode_one_shot (0x00000001) -/* gp2 timer status register 32bit (r/-) */ -#define u300_timer_app_gpt2s (0x00d0) -#define u300_timer_app_gpt2s_timer_state_mask (0x0000000f) -#define u300_timer_app_gpt2s_timer_state_idle (0x00000001) -#define u300_timer_app_gpt2s_timer_state_active (0x00000002) -#define u300_timer_app_gpt2s_enable_ind (0x00000010) -#define u300_timer_app_gpt2s_mode_mask (0x00000020) -#define u300_timer_app_gpt2s_mode_continuous (0x00000000) -#define u300_timer_app_gpt2s_mode_one_shot (0x00000020) -#define u300_timer_app_gpt2s_irq_enabled_ind (0x00000040) -#define u300_timer_app_gpt2s_irq_pending_ind (0x00000080) -/* gp2 timer current count register 32bit (r/-) */ -#define u300_timer_app_gpt2cc (0x00d4) -/* gp2 timer terminal count register 32bit (r/w) */ -#define u300_timer_app_gpt2tc (0x00d8) -/* gp2 timer interrupt enable register 32bit (-/w) */ -#define u300_timer_app_gpt2ie (0x00dc) -#define u300_timer_app_gpt2ie_irq_disable (0x00000000) -#define u300_timer_app_gpt2ie_irq_enable (0x00000001) -/* gp2 timer interrupt acknowledge register 32bit (-/w) */ -#define u300_timer_app_gpt2ia (0x00e0) -#define u300_timer_app_gpt2ia_irq_ack (0x00000080) - -/* clock request control register - all four timers */ -#define u300_timer_app_crc (0x100) -#define u300_timer_app_crc_clock_request_enable (0x00000001) - -static void __iomem *u300_timer_base; - -struct u300_clockevent_data { - struct 
clock_event_device cevd; - unsigned ticks_per_jiffy; -}; - -static int u300_shutdown(struct clock_event_device *evt) -{ - /* disable interrupts on gp1 */ - writel(u300_timer_app_gpt1ie_irq_disable, - u300_timer_base + u300_timer_app_gpt1ie); - /* disable gp1 */ - writel(u300_timer_app_dgpt1_timer_disable, - u300_timer_base + u300_timer_app_dgpt1); - return 0; -} - -/* - * if we have oneshot timer active, the oneshot scheduling function - * u300_set_next_event() is called immediately after. - */ -static int u300_set_oneshot(struct clock_event_device *evt) -{ - /* just return; here? */ - /* - * the actual event will be programmed by the next event hook, - * so we just set a dummy value somewhere at the end of the - * universe here. - */ - /* disable interrupts on gpt1 */ - writel(u300_timer_app_gpt1ie_irq_disable, - u300_timer_base + u300_timer_app_gpt1ie); - /* disable gp1 while we're reprogramming it. */ - writel(u300_timer_app_dgpt1_timer_disable, - u300_timer_base + u300_timer_app_dgpt1); - /* - * expire far in the future, u300_set_next_event() will be - * called soon... - */ - writel(0xffffffff, u300_timer_base + u300_timer_app_gpt1tc); - /* we run one shot per tick here! */ - writel(u300_timer_app_sgpt1m_mode_one_shot, - u300_timer_base + u300_timer_app_sgpt1m); - /* enable interrupts for this timer */ - writel(u300_timer_app_gpt1ie_irq_enable, - u300_timer_base + u300_timer_app_gpt1ie); - /* enable timer */ - writel(u300_timer_app_egpt1_timer_enable, - u300_timer_base + u300_timer_app_egpt1); - return 0; -} - -static int u300_set_periodic(struct clock_event_device *evt) -{ - struct u300_clockevent_data *cevdata = - container_of(evt, struct u300_clockevent_data, cevd); - - /* disable interrupts on gpt1 */ - writel(u300_timer_app_gpt1ie_irq_disable, - u300_timer_base + u300_timer_app_gpt1ie); - /* disable gp1 while we're reprogramming it. 
*/ - writel(u300_timer_app_dgpt1_timer_disable, - u300_timer_base + u300_timer_app_dgpt1); - /* - * set the periodic mode to a certain number of ticks per - * jiffy. - */ - writel(cevdata->ticks_per_jiffy, - u300_timer_base + u300_timer_app_gpt1tc); - /* - * set continuous mode, so the timer keeps triggering - * interrupts. - */ - writel(u300_timer_app_sgpt1m_mode_continuous, - u300_timer_base + u300_timer_app_sgpt1m); - /* enable timer interrupts */ - writel(u300_timer_app_gpt1ie_irq_enable, - u300_timer_base + u300_timer_app_gpt1ie); - /* then enable the os timer again */ - writel(u300_timer_app_egpt1_timer_enable, - u300_timer_base + u300_timer_app_egpt1); - return 0; -} - -/* - * the app timer in one shot mode obviously has to be reprogrammed - * in exactly this sequence to work properly. do not try to e.g. replace - * the interrupt disable + timer disable commands with a reset command, - * it will fail miserably. apparently (and i found this the hard way) - * the timer is very sensitive to the instruction order, though you don't - * get that impression from the data sheet. - */ -static int u300_set_next_event(unsigned long cycles, - struct clock_event_device *evt) - -{ - /* disable interrupts on gpt1 */ - writel(u300_timer_app_gpt1ie_irq_disable, - u300_timer_base + u300_timer_app_gpt1ie); - /* disable gp1 while we're reprogramming it. */ - writel(u300_timer_app_dgpt1_timer_disable, - u300_timer_base + u300_timer_app_dgpt1); - /* reset the general purpose timer 1. */ - writel(u300_timer_app_rgpt1_timer_reset, - u300_timer_base + u300_timer_app_rgpt1); - /* irq in n * cycles */ - writel(cycles, u300_timer_base + u300_timer_app_gpt1tc); - /* - * we run one shot per tick here! (this is necessary to reconfigure, - * the timer will tilt if you don't!) 
- */ - writel(u300_timer_app_sgpt1m_mode_one_shot, - u300_timer_base + u300_timer_app_sgpt1m); - /* enable timer interrupts */ - writel(u300_timer_app_gpt1ie_irq_enable, - u300_timer_base + u300_timer_app_gpt1ie); - /* then enable the os timer again */ - writel(u300_timer_app_egpt1_timer_enable, - u300_timer_base + u300_timer_app_egpt1); - return 0; -} - -static struct u300_clockevent_data u300_clockevent_data = { - /* use general purpose timer 1 as clock event */ - .cevd = { - .name = "gpt1", - /* reasonably fast and accurate clock event */ - .rating = 300, - .features = clock_evt_feat_periodic | - clock_evt_feat_oneshot, - .set_next_event = u300_set_next_event, - .set_state_shutdown = u300_shutdown, - .set_state_periodic = u300_set_periodic, - .set_state_oneshot = u300_set_oneshot, - }, -}; - -/* clock event timer interrupt handler */ -static irqreturn_t u300_timer_interrupt(int irq, void *dev_id) -{ - struct clock_event_device *evt = &u300_clockevent_data.cevd; - /* ack/clear timer irq for the app gpt1 timer */ - - writel(u300_timer_app_gpt1ia_irq_ack, - u300_timer_base + u300_timer_app_gpt1ia); - evt->event_handler(evt); - return irq_handled; -} - -/* - * override the global weak sched_clock symbol with this - * local implementation which uses the clocksource to get some - * better resolution when scheduling the kernel. we accept that - * this wraps around for now, since it is just a relative time - * stamp. (inspired by omap implementation.) - */ - -static u64 notrace u300_read_sched_clock(void) -{ - return readl(u300_timer_base + u300_timer_app_gpt2cc); -} - -static unsigned long u300_read_current_timer(void) -{ - return readl(u300_timer_base + u300_timer_app_gpt2cc); -} - -static struct delay_timer u300_delay_timer; - -/* - * this sets up the system timers, clock source and clock event. 
- */ -static int __init u300_timer_init_of(struct device_node *np) -{ - unsigned int irq; - struct clk *clk; - unsigned long rate; - int ret; - - u300_timer_base = of_iomap(np, 0); - if (!u300_timer_base) { - pr_err("could not ioremap system timer "); - return -enxio; - } - - /* get the irq for the gp1 timer */ - irq = irq_of_parse_and_map(np, 2); - if (!irq) { - pr_err("no irq for system timer "); - return -einval; - } - - pr_info("u300 gp1 timer @ base: %p, irq: %u ", u300_timer_base, irq); - - /* clock the interrupt controller */ - clk = of_clk_get(np, 0); - if (is_err(clk)) - return ptr_err(clk); - - ret = clk_prepare_enable(clk); - if (ret) - return ret; - - rate = clk_get_rate(clk); - - u300_clockevent_data.ticks_per_jiffy = div_round_closest(rate, hz); - - sched_clock_register(u300_read_sched_clock, 32, rate); - - u300_delay_timer.read_current_timer = &u300_read_current_timer; - u300_delay_timer.freq = rate; - register_current_timer_delay(&u300_delay_timer); - - /* - * disable the "os" and "dd" timers - these are designed for symbian! - * example usage in cnh1601578 cpu subsystem pd_timer_app.c - */ - writel(u300_timer_app_crc_clock_request_enable, - u300_timer_base + u300_timer_app_crc); - writel(u300_timer_app_rost_timer_reset, - u300_timer_base + u300_timer_app_rost); - writel(u300_timer_app_dost_timer_disable, - u300_timer_base + u300_timer_app_dost); - writel(u300_timer_app_rddt_timer_reset, - u300_timer_base + u300_timer_app_rddt); - writel(u300_timer_app_dddt_timer_disable, - u300_timer_base + u300_timer_app_dddt); - - /* reset the general purpose timer 1. 
*/ - writel(u300_timer_app_rgpt1_timer_reset, - u300_timer_base + u300_timer_app_rgpt1); - - /* set up the irq handler */ - ret = request_irq(irq, u300_timer_interrupt, - irqf_timer | irqf_irqpoll, "u300 timer tick", null); - if (ret) - return ret; - - /* reset the general purpose timer 2 */ - writel(u300_timer_app_rgpt2_timer_reset, - u300_timer_base + u300_timer_app_rgpt2); - /* set this timer to run around forever */ - writel(0xffffffffu, u300_timer_base + u300_timer_app_gpt2tc); - /* set continuous mode so it wraps around */ - writel(u300_timer_app_sgpt2m_mode_continuous, - u300_timer_base + u300_timer_app_sgpt2m); - /* disable timer interrupts */ - writel(u300_timer_app_gpt2ie_irq_disable, - u300_timer_base + u300_timer_app_gpt2ie); - /* then enable the gp2 timer to use as a free running us counter */ - writel(u300_timer_app_egpt2_timer_enable, - u300_timer_base + u300_timer_app_egpt2); - - /* use general purpose timer 2 as clock source */ - ret = clocksource_mmio_init(u300_timer_base + u300_timer_app_gpt2cc, - "gpt2", rate, 300, 32, clocksource_mmio_readl_up); - if (ret) { - pr_err("timer: failed to initialize u300 clock source "); - return ret; - } - - /* configure and register the clockevent */ - clockevents_config_and_register(&u300_clockevent_data.cevd, rate, - 1, 0xffffffff); - - /* - * todo: init and register the rest of the timers too, they can be - * used by hrtimers! - */ - return 0; -} - -timer_of_declare(u300_timer, "stericsson,u300-apptimer", - u300_timer_init_of);
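for orientation, the periodic-tick arithmetic in the removed u300_timer_init_of() (ticks_per_jiffy = div_round_closest(rate, hz)) and the 32-bit wrap of the gpt2-based sched_clock can be sketched outside the kernel. the clock rate and hz values below are hypothetical example numbers, not the real u300 figures:

```python
# Sketch of the tick arithmetic from the removed u300 timer driver.
# 'rate' and 'HZ' here are illustrative assumptions only.

def div_round_closest(x, d):
    # mirrors the kernel's DIV_ROUND_CLOSEST for non-negative operands
    return (x + d // 2) // d

HZ = 100            # assumed scheduler tick frequency
rate = 1_000_000    # hypothetical timer clock in Hz

# value programmed into GPT1TC for periodic (continuous) mode
ticks_per_jiffy = div_round_closest(rate, HZ)
print(ticks_per_jiffy)  # -> 10000

# GPT2 is a free-running 32-bit counter used as sched_clock, so the
# relative timestamp wraps around after 2^32 / rate seconds
wrap_seconds = 2**32 / rate
print(round(wrap_seconds))  # -> 4295
```

the driver accepted this wrap because sched_clock only needs a relative timestamp, as the original comment above u300_read_sched_clock() notes.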
|
Clock
|
33105406764f7f13c5e7279826f77342c82c41b5
|
arnd bergmann, linus walleij <linus.walleij@linaro.org>
|
documentation
|
devicetree
|
bindings, timer
|
clk: remove sirf prima2/atlas drivers
|
the csr sirf prima2/atlas platforms are getting removed, so this driver is no longer needed.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
remove sirf prima2/atlas drivers
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['txt', 'h', 'c', 'makefile']
| 10
| 0
| 3,211
|
--- diff --git a/documentation/devicetree/bindings/clock/csr,atlas7-car.txt b/documentation/devicetree/bindings/clock/csr,atlas7-car.txt --- a/documentation/devicetree/bindings/clock/csr,atlas7-car.txt +++ /dev/null -* clock and reset bindings for csr atlas7 - -required properties: -- compatible: should be "sirf,atlas7-car" -- reg: address and length of the register set -- #clock-cells: should be <1> -- #reset-cells: should be <1> - -the clock consumer should specify the desired clock by having the clock -id in its "clocks" phandle cell. -the id list atlas7_clks defined in drivers/clk/sirf/clk-atlas7.c - -the reset consumer should specify the desired reset by having the reset -id in its "reset" phandle cell. -the id list atlas7_reset_unit defined in drivers/clk/sirf/clk-atlas7.c - -examples: clock and reset controller node: - -car: clock-controller@18620000 { - compatible = "sirf,atlas7-car"; - reg = <0x18620000 0x1000>; - #clock-cells = <1>; - #reset-cells = <1>; -}; - -examples: consumers using clock or reset: - -timer@10dc0000 { - compatible = "sirf,macro-tick"; - reg = <0x10dc0000 0x1000>; - clocks = <&car 54>; - interrupts = <0 0 0>, - <0 1 0>, - <0 2 0>, - <0 49 0>, - <0 50 0>, - <0 51 0>; -}; - -uart1: uart@18020000 { - cell-index = <1>; - compatible = "sirf,macro-uart"; - reg = <0x18020000 0x1000>; - clocks = <&clks 95>; - interrupts = <0 18 0>; - fifosize = <32>; -}; - -vpp@13110000 { - compatible = "sirf,prima2-vpp"; - reg = <0x13110000 0x10000>; - interrupts = <0 31 0>; - clocks = <&car 85>; - resets = <&car 29>; -}; diff --git a/documentation/devicetree/bindings/clock/prima2-clock.txt b/documentation/devicetree/bindings/clock/prima2-clock.txt --- a/documentation/devicetree/bindings/clock/prima2-clock.txt +++ /dev/null -* clock bindings for csr sirfprimaii - -required properties: -- compatible: should be "sirf,prima2-clkc" -- reg: address and length of the register set -- interrupts: should contain clock controller interrupt -- #clock-cells: should be 
<1> - -the clock consumer should specify the desired clock by having the clock -id in its "clocks" phandle cell. the following is a full list of prima2 -clocks and ids. - - clock id - --------------------------- - rtc 0 - osc 1 - pll1 2 - pll2 3 - pll3 4 - mem 5 - sys 6 - security 7 - dsp 8 - gps 9 - mf 10 - io 11 - cpu 12 - uart0 13 - uart1 14 - uart2 15 - tsc 16 - i2c0 17 - i2c1 18 - spi0 19 - spi1 20 - pwmc 21 - efuse 22 - pulse 23 - dmac0 24 - dmac1 25 - nand 26 - audio 27 - usp0 28 - usp1 29 - usp2 30 - vip 31 - gfx 32 - mm 33 - lcd 34 - vpp 35 - mmc01 36 - mmc23 37 - mmc45 38 - usbpll 39 - usb0 40 - usb1 41 - -examples: - -clks: clock-controller@88000000 { - compatible = "sirf,prima2-clkc"; - reg = <0x88000000 0x1000>; - interrupts = <3>; - #clock-cells = <1>; -}; - -i2c0: i2c@b00e0000 { - cell-index = <0>; - compatible = "sirf,prima2-i2c"; - reg = <0xb00e0000 0x10000>; - interrupts = <24>; - clocks = <&clks 17>; -}; diff --git a/drivers/clk/makefile b/drivers/clk/makefile --- a/drivers/clk/makefile +++ b/drivers/clk/makefile -obj-$(config_arch_sirf) += sirf/ diff --git a/drivers/clk/sirf/makefile b/drivers/clk/sirf/makefile --- a/drivers/clk/sirf/makefile +++ /dev/null -# spdx-license-identifier: gpl-2.0-only -# -# makefile for sirf specific clk -# - -obj-$(config_arch_sirf) += clk-prima2.o clk-atlas6.o clk-atlas7.o diff --git a/drivers/clk/sirf/atlas6.h b/drivers/clk/sirf/atlas6.h --- a/drivers/clk/sirf/atlas6.h +++ /dev/null -/* spdx-license-identifier: gpl-2.0 */ -#define sirfsoc_clkc_clk_en0 0x0000 -#define sirfsoc_clkc_clk_en1 0x0004 -#define sirfsoc_clkc_ref_cfg 0x0020 -#define sirfsoc_clkc_cpu_cfg 0x0024 -#define sirfsoc_clkc_mem_cfg 0x0028 -#define sirfsoc_clkc_memdiv_cfg 0x002c -#define sirfsoc_clkc_sys_cfg 0x0030 -#define sirfsoc_clkc_io_cfg 0x0034 -#define sirfsoc_clkc_dsp_cfg 0x0038 -#define sirfsoc_clkc_gfx_cfg 0x003c -#define sirfsoc_clkc_mm_cfg 0x0040 -#define sirfsoc_clkc_gfx2d_cfg 0x0040 -#define sirfsoc_clkc_lcd_cfg 0x0044 -#define 
sirfsoc_clkc_mmc01_cfg 0x0048 -#define sirfsoc_clkc_mmc23_cfg 0x004c -#define sirfsoc_clkc_mmc45_cfg 0x0050 -#define sirfsoc_clkc_nand_cfg 0x0054 -#define sirfsoc_clkc_nanddiv_cfg 0x0058 -#define sirfsoc_clkc_pll1_cfg0 0x0080 -#define sirfsoc_clkc_pll2_cfg0 0x0084 -#define sirfsoc_clkc_pll3_cfg0 0x0088 -#define sirfsoc_clkc_pll1_cfg1 0x008c -#define sirfsoc_clkc_pll2_cfg1 0x0090 -#define sirfsoc_clkc_pll3_cfg1 0x0094 -#define sirfsoc_clkc_pll1_cfg2 0x0098 -#define sirfsoc_clkc_pll2_cfg2 0x009c -#define sirfsoc_clkc_pll3_cfg2 0x00a0 -#define sirfsoc_usbphy_pll_ctrl 0x0008 -#define sirfsoc_usbphy_pll_powerdown bit(1) -#define sirfsoc_usbphy_pll_bypass bit(2) -#define sirfsoc_usbphy_pll_lock bit(3) diff --git a/drivers/clk/sirf/clk-atlas6.c b/drivers/clk/sirf/clk-atlas6.c --- a/drivers/clk/sirf/clk-atlas6.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-or-later -/* - * clock tree for csr sirfatlasvi - * - * copyright (c) 2011 - 2014 cambridge silicon radio limited, a csr plc group - * company. 
- */ - -#include <linux/module.h> -#include <linux/bitops.h> -#include <linux/io.h> -#include <linux/clkdev.h> -#include <linux/clk-provider.h> -#include <linux/of_address.h> -#include <linux/syscore_ops.h> - -#include "atlas6.h" -#include "clk-common.c" - -static struct clk_dmn clk_mmc01 = { - .regofs = sirfsoc_clkc_mmc01_cfg, - .enable_bit = 59, - .hw = { - .init = &clk_mmc01_init, - }, -}; - -static struct clk_dmn clk_mmc23 = { - .regofs = sirfsoc_clkc_mmc23_cfg, - .enable_bit = 60, - .hw = { - .init = &clk_mmc23_init, - }, -}; - -static struct clk_dmn clk_mmc45 = { - .regofs = sirfsoc_clkc_mmc45_cfg, - .enable_bit = 61, - .hw = { - .init = &clk_mmc45_init, - }, -}; - -static const struct clk_init_data clk_nand_init = { - .name = "nand", - .ops = &dmn_ops, - .parent_names = dmn_clk_parents, - .num_parents = array_size(dmn_clk_parents), -}; - -static struct clk_dmn clk_nand = { - .regofs = sirfsoc_clkc_nand_cfg, - .enable_bit = 34, - .hw = { - .init = &clk_nand_init, - }, -}; - -enum atlas6_clk_index { - /* 0 1 2 3 4 5 6 7 8 9 */ - rtc, osc, pll1, pll2, pll3, mem, sys, security, dsp, gps, - mf, io, cpu, uart0, uart1, uart2, tsc, i2c0, i2c1, spi0, - spi1, pwmc, efuse, pulse, dmac0, dmac1, nand, audio, usp0, usp1, - usp2, vip, gfx, gfx2d, lcd, vpp, mmc01, mmc23, mmc45, usbpll, - usb0, usb1, cphif, maxclk, -}; - -static __initdata struct clk_hw *atlas6_clk_hw_array[maxclk] = { - null, /* dummy */ - null, - &clk_pll1.hw, - &clk_pll2.hw, - &clk_pll3.hw, - &clk_mem.hw, - &clk_sys.hw, - &clk_security.hw, - &clk_dsp.hw, - &clk_gps.hw, - &clk_mf.hw, - &clk_io.hw, - &clk_cpu.hw, - &clk_uart0.hw, - &clk_uart1.hw, - &clk_uart2.hw, - &clk_tsc.hw, - &clk_i2c0.hw, - &clk_i2c1.hw, - &clk_spi0.hw, - &clk_spi1.hw, - &clk_pwmc.hw, - &clk_efuse.hw, - &clk_pulse.hw, - &clk_dmac0.hw, - &clk_dmac1.hw, - &clk_nand.hw, - &clk_audio.hw, - &clk_usp0.hw, - &clk_usp1.hw, - &clk_usp2.hw, - &clk_vip.hw, - &clk_gfx.hw, - &clk_gfx2d.hw, - &clk_lcd.hw, - &clk_vpp.hw, - &clk_mmc01.hw, - 
&clk_mmc23.hw, - &clk_mmc45.hw, - &usb_pll_clk_hw, - &clk_usb0.hw, - &clk_usb1.hw, - &clk_cphif.hw, -}; - -static struct clk *atlas6_clks[maxclk]; - -static void __init atlas6_clk_init(struct device_node *np) -{ - struct device_node *rscnp; - int i; - - rscnp = of_find_compatible_node(null, null, "sirf,prima2-rsc"); - sirfsoc_rsc_vbase = of_iomap(rscnp, 0); - if (!sirfsoc_rsc_vbase) - panic("unable to map rsc registers "); - of_node_put(rscnp); - - sirfsoc_clk_vbase = of_iomap(np, 0); - if (!sirfsoc_clk_vbase) - panic("unable to map clkc registers "); - - /* these are always available (rtc and 26mhz osc)*/ - atlas6_clks[rtc] = clk_register_fixed_rate(null, "rtc", null, 0, 32768); - atlas6_clks[osc] = clk_register_fixed_rate(null, "osc", null, 0, - 26000000); - - for (i = pll1; i < maxclk; i++) { - atlas6_clks[i] = clk_register(null, atlas6_clk_hw_array[i]); - bug_on(is_err(atlas6_clks[i])); - } - clk_register_clkdev(atlas6_clks[cpu], null, "cpu"); - clk_register_clkdev(atlas6_clks[io], null, "io"); - clk_register_clkdev(atlas6_clks[mem], null, "mem"); - clk_register_clkdev(atlas6_clks[mem], null, "osc"); - - clk_data.clks = atlas6_clks; - clk_data.clk_num = maxclk; - - of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data); -} -clk_of_declare(atlas6_clk, "sirf,atlas6-clkc", atlas6_clk_init); diff --git a/drivers/clk/sirf/clk-atlas7.c b/drivers/clk/sirf/clk-atlas7.c --- a/drivers/clk/sirf/clk-atlas7.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-or-later -/* - * clock tree for csr sirfatlas7 - * - * copyright (c) 2014 cambridge silicon radio limited, a csr plc group company. 
- */ - -#include <linux/bitops.h> -#include <linux/io.h> -#include <linux/clk-provider.h> -#include <linux/delay.h> -#include <linux/of_address.h> -#include <linux/reset-controller.h> -#include <linux/slab.h> - -#define sirfsoc_clkc_mempll_ab_freq 0x0000 -#define sirfsoc_clkc_mempll_ab_ssc 0x0004 -#define sirfsoc_clkc_mempll_ab_ctrl0 0x0008 -#define sirfsoc_clkc_mempll_ab_ctrl1 0x000c -#define sirfsoc_clkc_mempll_ab_status 0x0010 -#define sirfsoc_clkc_mempll_ab_ssram_addr 0x0014 -#define sirfsoc_clkc_mempll_ab_ssram_data 0x0018 - -#define sirfsoc_clkc_cpupll_ab_freq 0x001c -#define sirfsoc_clkc_cpupll_ab_ssc 0x0020 -#define sirfsoc_clkc_cpupll_ab_ctrl0 0x0024 -#define sirfsoc_clkc_cpupll_ab_ctrl1 0x0028 -#define sirfsoc_clkc_cpupll_ab_status 0x002c - -#define sirfsoc_clkc_sys0pll_ab_freq 0x0030 -#define sirfsoc_clkc_sys0pll_ab_ssc 0x0034 -#define sirfsoc_clkc_sys0pll_ab_ctrl0 0x0038 -#define sirfsoc_clkc_sys0pll_ab_ctrl1 0x003c -#define sirfsoc_clkc_sys0pll_ab_status 0x0040 - -#define sirfsoc_clkc_sys1pll_ab_freq 0x0044 -#define sirfsoc_clkc_sys1pll_ab_ssc 0x0048 -#define sirfsoc_clkc_sys1pll_ab_ctrl0 0x004c -#define sirfsoc_clkc_sys1pll_ab_ctrl1 0x0050 -#define sirfsoc_clkc_sys1pll_ab_status 0x0054 - -#define sirfsoc_clkc_sys2pll_ab_freq 0x0058 -#define sirfsoc_clkc_sys2pll_ab_ssc 0x005c -#define sirfsoc_clkc_sys2pll_ab_ctrl0 0x0060 -#define sirfsoc_clkc_sys2pll_ab_ctrl1 0x0064 -#define sirfsoc_clkc_sys2pll_ab_status 0x0068 - -#define sirfsoc_clkc_sys3pll_ab_freq 0x006c -#define sirfsoc_clkc_sys3pll_ab_ssc 0x0070 -#define sirfsoc_clkc_sys3pll_ab_ctrl0 0x0074 -#define sirfsoc_clkc_sys3pll_ab_ctrl1 0x0078 -#define sirfsoc_clkc_sys3pll_ab_status 0x007c - -#define sirfsoc_abpll_ctrl0_ssen 0x00001000 -#define sirfsoc_abpll_ctrl0_bypass 0x00000010 -#define sirfsoc_abpll_ctrl0_reset 0x00000001 - -#define sirfsoc_clkc_audio_dto_inc 0x0088 -#define sirfsoc_clkc_disp0_dto_inc 0x008c -#define sirfsoc_clkc_disp1_dto_inc 0x0090 - -#define sirfsoc_clkc_audio_dto_src 0x0094 
-#define sirfsoc_clkc_audio_dto_ena 0x0098 -#define sirfsoc_clkc_audio_dto_droff 0x009c - -#define sirfsoc_clkc_disp0_dto_src 0x00a0 -#define sirfsoc_clkc_disp0_dto_ena 0x00a4 -#define sirfsoc_clkc_disp0_dto_droff 0x00a8 - -#define sirfsoc_clkc_disp1_dto_src 0x00ac -#define sirfsoc_clkc_disp1_dto_ena 0x00b0 -#define sirfsoc_clkc_disp1_dto_droff 0x00b4 - -#define sirfsoc_clkc_i2s_clk_sel 0x00b8 -#define sirfsoc_clkc_i2s_sel_stat 0x00bc - -#define sirfsoc_clkc_usbphy_clkdiv_cfg 0x00c0 -#define sirfsoc_clkc_usbphy_clkdiv_ena 0x00c4 -#define sirfsoc_clkc_usbphy_clk_sel 0x00c8 -#define sirfsoc_clkc_usbphy_clk_sel_stat 0x00cc - -#define sirfsoc_clkc_btss_clkdiv_cfg 0x00d0 -#define sirfsoc_clkc_btss_clkdiv_ena 0x00d4 -#define sirfsoc_clkc_btss_clk_sel 0x00d8 -#define sirfsoc_clkc_btss_clk_sel_stat 0x00dc - -#define sirfsoc_clkc_rgmii_clkdiv_cfg 0x00e0 -#define sirfsoc_clkc_rgmii_clkdiv_ena 0x00e4 -#define sirfsoc_clkc_rgmii_clk_sel 0x00e8 -#define sirfsoc_clkc_rgmii_clk_sel_stat 0x00ec - -#define sirfsoc_clkc_cpu_clkdiv_cfg 0x00f0 -#define sirfsoc_clkc_cpu_clkdiv_ena 0x00f4 -#define sirfsoc_clkc_cpu_clk_sel 0x00f8 -#define sirfsoc_clkc_cpu_clk_sel_stat 0x00fc - -#define sirfsoc_clkc_sdphy01_clkdiv_cfg 0x0100 -#define sirfsoc_clkc_sdphy01_clkdiv_ena 0x0104 -#define sirfsoc_clkc_sdphy01_clk_sel 0x0108 -#define sirfsoc_clkc_sdphy01_clk_sel_stat 0x010c - -#define sirfsoc_clkc_sdphy23_clkdiv_cfg 0x0110 -#define sirfsoc_clkc_sdphy23_clkdiv_ena 0x0114 -#define sirfsoc_clkc_sdphy23_clk_sel 0x0118 -#define sirfsoc_clkc_sdphy23_clk_sel_stat 0x011c - -#define sirfsoc_clkc_sdphy45_clkdiv_cfg 0x0120 -#define sirfsoc_clkc_sdphy45_clkdiv_ena 0x0124 -#define sirfsoc_clkc_sdphy45_clk_sel 0x0128 -#define sirfsoc_clkc_sdphy45_clk_sel_stat 0x012c - -#define sirfsoc_clkc_sdphy67_clkdiv_cfg 0x0130 -#define sirfsoc_clkc_sdphy67_clkdiv_ena 0x0134 -#define sirfsoc_clkc_sdphy67_clk_sel 0x0138 -#define sirfsoc_clkc_sdphy67_clk_sel_stat 0x013c - -#define sirfsoc_clkc_can_clkdiv_cfg 0x0140 -#define 
sirfsoc_clkc_can_clkdiv_ena 0x0144 -#define sirfsoc_clkc_can_clk_sel 0x0148 -#define sirfsoc_clkc_can_clk_sel_stat 0x014c - -#define sirfsoc_clkc_deint_clkdiv_cfg 0x0150 -#define sirfsoc_clkc_deint_clkdiv_ena 0x0154 -#define sirfsoc_clkc_deint_clk_sel 0x0158 -#define sirfsoc_clkc_deint_clk_sel_stat 0x015c - -#define sirfsoc_clkc_nand_clkdiv_cfg 0x0160 -#define sirfsoc_clkc_nand_clkdiv_ena 0x0164 -#define sirfsoc_clkc_nand_clk_sel 0x0168 -#define sirfsoc_clkc_nand_clk_sel_stat 0x016c - -#define sirfsoc_clkc_disp0_clkdiv_cfg 0x0170 -#define sirfsoc_clkc_disp0_clkdiv_ena 0x0174 -#define sirfsoc_clkc_disp0_clk_sel 0x0178 -#define sirfsoc_clkc_disp0_clk_sel_stat 0x017c - -#define sirfsoc_clkc_disp1_clkdiv_cfg 0x0180 -#define sirfsoc_clkc_disp1_clkdiv_ena 0x0184 -#define sirfsoc_clkc_disp1_clk_sel 0x0188 -#define sirfsoc_clkc_disp1_clk_sel_stat 0x018c - -#define sirfsoc_clkc_gpu_clkdiv_cfg 0x0190 -#define sirfsoc_clkc_gpu_clkdiv_ena 0x0194 -#define sirfsoc_clkc_gpu_clk_sel 0x0198 -#define sirfsoc_clkc_gpu_clk_sel_stat 0x019c - -#define sirfsoc_clkc_gnss_clkdiv_cfg 0x01a0 -#define sirfsoc_clkc_gnss_clkdiv_ena 0x01a4 -#define sirfsoc_clkc_gnss_clk_sel 0x01a8 -#define sirfsoc_clkc_gnss_clk_sel_stat 0x01ac - -#define sirfsoc_clkc_shared_divider_cfg0 0x01b0 -#define sirfsoc_clkc_shared_divider_cfg1 0x01b4 -#define sirfsoc_clkc_shared_divider_ena 0x01b8 - -#define sirfsoc_clkc_sys_clk_sel 0x01bc -#define sirfsoc_clkc_sys_clk_sel_stat 0x01c0 -#define sirfsoc_clkc_io_clk_sel 0x01c4 -#define sirfsoc_clkc_io_clk_sel_stat 0x01c8 -#define sirfsoc_clkc_g2d_clk_sel 0x01cc -#define sirfsoc_clkc_g2d_clk_sel_stat 0x01d0 -#define sirfsoc_clkc_jpenc_clk_sel 0x01d4 -#define sirfsoc_clkc_jpenc_clk_sel_stat 0x01d8 -#define sirfsoc_clkc_vdec_clk_sel 0x01dc -#define sirfsoc_clkc_vdec_clk_sel_stat 0x01e0 -#define sirfsoc_clkc_gmac_clk_sel 0x01e4 -#define sirfsoc_clkc_gmac_clk_sel_stat 0x01e8 -#define sirfsoc_clkc_usb_clk_sel 0x01ec -#define sirfsoc_clkc_usb_clk_sel_stat 0x01f0 -#define 
sirfsoc_clkc_kas_clk_sel 0x01f4 -#define sirfsoc_clkc_kas_clk_sel_stat 0x01f8 -#define sirfsoc_clkc_sec_clk_sel 0x01fc -#define sirfsoc_clkc_sec_clk_sel_stat 0x0200 -#define sirfsoc_clkc_sdr_clk_sel 0x0204 -#define sirfsoc_clkc_sdr_clk_sel_stat 0x0208 -#define sirfsoc_clkc_vip_clk_sel 0x020c -#define sirfsoc_clkc_vip_clk_sel_stat 0x0210 -#define sirfsoc_clkc_nocd_clk_sel 0x0214 -#define sirfsoc_clkc_nocd_clk_sel_stat 0x0218 -#define sirfsoc_clkc_nocr_clk_sel 0x021c -#define sirfsoc_clkc_nocr_clk_sel_stat 0x0220 -#define sirfsoc_clkc_tpiu_clk_sel 0x0224 -#define sirfsoc_clkc_tpiu_clk_sel_stat 0x0228 - -#define sirfsoc_clkc_root_clk_en0_set 0x022c -#define sirfsoc_clkc_root_clk_en0_clr 0x0230 -#define sirfsoc_clkc_root_clk_en0_stat 0x0234 -#define sirfsoc_clkc_root_clk_en1_set 0x0238 -#define sirfsoc_clkc_root_clk_en1_clr 0x023c -#define sirfsoc_clkc_root_clk_en1_stat 0x0240 - -#define sirfsoc_clkc_leaf_clk_en0_set 0x0244 -#define sirfsoc_clkc_leaf_clk_en0_clr 0x0248 -#define sirfsoc_clkc_leaf_clk_en0_stat 0x024c - -#define sirfsoc_clkc_rstc_a7_sw_rst 0x0308 - -#define sirfsoc_clkc_leaf_clk_en1_set 0x04a0 -#define sirfsoc_clkc_leaf_clk_en2_set 0x04b8 -#define sirfsoc_clkc_leaf_clk_en3_set 0x04d0 -#define sirfsoc_clkc_leaf_clk_en4_set 0x04e8 -#define sirfsoc_clkc_leaf_clk_en5_set 0x0500 -#define sirfsoc_clkc_leaf_clk_en6_set 0x0518 -#define sirfsoc_clkc_leaf_clk_en7_set 0x0530 -#define sirfsoc_clkc_leaf_clk_en8_set 0x0548 - -#define sirfsoc_noc_clk_idlereq_set 0x02d0 -#define sirfsoc_noc_clk_idlereq_clr 0x02d4 -#define sirfsoc_noc_clk_slvrdy_set 0x02e8 -#define sirfsoc_noc_clk_slvrdy_clr 0x02ec -#define sirfsoc_noc_clk_idle_status 0x02f4 - -struct clk_pll { - struct clk_hw hw; - u16 regofs; /* register offset */ -}; -#define to_pllclk(_hw) container_of(_hw, struct clk_pll, hw) - -struct clk_dto { - struct clk_hw hw; - u16 inc_offset; /* dto increment offset */ - u16 src_offset; /* dto src offset */ -}; -#define to_dtoclk(_hw) container_of(_hw, struct clk_dto, hw) - 
-enum clk_unit_type { - clk_unit_noc_other, - clk_unit_noc_clock, - clk_unit_noc_socket, -}; - -struct clk_unit { - struct clk_hw hw; - u16 regofs; - u16 bit; - u32 type; - u8 idle_bit; - spinlock_t *lock; -}; -#define to_unitclk(_hw) container_of(_hw, struct clk_unit, hw) - -struct atlas7_div_init_data { - const char *div_name; - const char *parent_name; - const char *gate_name; - unsigned long flags; - u8 divider_flags; - u8 gate_flags; - u32 div_offset; - u8 shift; - u8 width; - u32 gate_offset; - u8 gate_bit; - spinlock_t *lock; -}; - -struct atlas7_mux_init_data { - const char *mux_name; - const char * const *parent_names; - u8 parent_num; - unsigned long flags; - u8 mux_flags; - u32 mux_offset; - u8 shift; - u8 width; -}; - -struct atlas7_unit_init_data { - u32 index; - const char *unit_name; - const char *parent_name; - unsigned long flags; - u32 regofs; - u8 bit; - u32 type; - u8 idle_bit; - spinlock_t *lock; -}; - -struct atlas7_reset_desc { - const char *name; - u32 clk_ofs; - u8 clk_bit; - u32 rst_ofs; - u8 rst_bit; - spinlock_t *lock; -}; - -static void __iomem *sirfsoc_clk_vbase; -static struct clk_onecell_data clk_data; - -static const struct clk_div_table pll_div_table[] = { - { .val = 0, .div = 1 }, - { .val = 1, .div = 2 }, - { .val = 2, .div = 4 }, - { .val = 3, .div = 8 }, - { .val = 4, .div = 16 }, - { .val = 5, .div = 32 }, -}; - -static define_spinlock(cpupll_ctrl1_lock); -static define_spinlock(mempll_ctrl1_lock); -static define_spinlock(sys0pll_ctrl1_lock); -static define_spinlock(sys1pll_ctrl1_lock); -static define_spinlock(sys2pll_ctrl1_lock); -static define_spinlock(sys3pll_ctrl1_lock); -static define_spinlock(usbphy_div_lock); -static define_spinlock(btss_div_lock); -static define_spinlock(rgmii_div_lock); -static define_spinlock(cpu_div_lock); -static define_spinlock(sdphy01_div_lock); -static define_spinlock(sdphy23_div_lock); -static define_spinlock(sdphy45_div_lock); -static define_spinlock(sdphy67_div_lock); -static 
define_spinlock(can_div_lock); -static define_spinlock(deint_div_lock); -static define_spinlock(nand_div_lock); -static define_spinlock(disp0_div_lock); -static define_spinlock(disp1_div_lock); -static define_spinlock(gpu_div_lock); -static define_spinlock(gnss_div_lock); -/* gate register shared */ -static define_spinlock(share_div_lock); -static define_spinlock(root0_gate_lock); -static define_spinlock(root1_gate_lock); -static define_spinlock(leaf0_gate_lock); -static define_spinlock(leaf1_gate_lock); -static define_spinlock(leaf2_gate_lock); -static define_spinlock(leaf3_gate_lock); -static define_spinlock(leaf4_gate_lock); -static define_spinlock(leaf5_gate_lock); -static define_spinlock(leaf6_gate_lock); -static define_spinlock(leaf7_gate_lock); -static define_spinlock(leaf8_gate_lock); - -static inline unsigned long clkc_readl(unsigned reg) -{ - return readl(sirfsoc_clk_vbase + reg); -} - -static inline void clkc_writel(u32 val, unsigned reg) -{ - writel(val, sirfsoc_clk_vbase + reg); -} - -/* -* abpll -* integer mode: fvco = fin * 2 * nf / nr -* spread spectrum mode: fvco = fin * ssn / nr -* ssn = 2^24 / (256 * ((ssdiv >> ssdepth) << ssdepth) + (ssmod << ssdepth)) -*/ -static unsigned long pll_clk_recalc_rate(struct clk_hw *hw, - unsigned long parent_rate) -{ - unsigned long fin = parent_rate; - struct clk_pll *clk = to_pllclk(hw); - u64 rate; - u32 regctrl0 = clkc_readl(clk->regofs + sirfsoc_clkc_mempll_ab_ctrl0 - - sirfsoc_clkc_mempll_ab_freq); - u32 regfreq = clkc_readl(clk->regofs); - u32 regssc = clkc_readl(clk->regofs + sirfsoc_clkc_mempll_ab_ssc - - sirfsoc_clkc_mempll_ab_freq); - u32 nr = (regfreq >> 16 & (bit(3) - 1)) + 1; - u32 nf = (regfreq & (bit(9) - 1)) + 1; - u32 ssdiv = regssc >> 8 & (bit(12) - 1); - u32 ssdepth = regssc >> 20 & (bit(2) - 1); - u32 ssmod = regssc & (bit(8) - 1); - - if (regctrl0 & sirfsoc_abpll_ctrl0_bypass) - return fin; - - if (regctrl0 & sirfsoc_abpll_ctrl0_ssen) { - rate = fin; - rate *= 1 << 24; - do_div(rate, nr); - 
do_div(rate, (256 * ((ssdiv >> ssdepth) << ssdepth) - + (ssmod << ssdepth))); - } else { - rate = 2 * fin; - rate *= nf; - do_div(rate, nr); - } - return rate; -} - -static const struct clk_ops ab_pll_ops = { - .recalc_rate = pll_clk_recalc_rate, -}; - -static const char * const pll_clk_parents[] = { - "xin", -}; - -static const struct clk_init_data clk_cpupll_init = { - .name = "cpupll_vco", - .ops = &ab_pll_ops, - .parent_names = pll_clk_parents, - .num_parents = array_size(pll_clk_parents), -}; - -static struct clk_pll clk_cpupll = { - .regofs = sirfsoc_clkc_cpupll_ab_freq, - .hw = { - .init = &clk_cpupll_init, - }, -}; - -static const struct clk_init_data clk_mempll_init = { - .name = "mempll_vco", - .ops = &ab_pll_ops, - .parent_names = pll_clk_parents, - .num_parents = array_size(pll_clk_parents), -}; - -static struct clk_pll clk_mempll = { - .regofs = sirfsoc_clkc_mempll_ab_freq, - .hw = { - .init = &clk_mempll_init, - }, -}; - -static const struct clk_init_data clk_sys0pll_init = { - .name = "sys0pll_vco", - .ops = &ab_pll_ops, - .parent_names = pll_clk_parents, - .num_parents = array_size(pll_clk_parents), -}; - -static struct clk_pll clk_sys0pll = { - .regofs = sirfsoc_clkc_sys0pll_ab_freq, - .hw = { - .init = &clk_sys0pll_init, - }, -}; - -static const struct clk_init_data clk_sys1pll_init = { - .name = "sys1pll_vco", - .ops = &ab_pll_ops, - .parent_names = pll_clk_parents, - .num_parents = array_size(pll_clk_parents), -}; - -static struct clk_pll clk_sys1pll = { - .regofs = sirfsoc_clkc_sys1pll_ab_freq, - .hw = { - .init = &clk_sys1pll_init, - }, -}; - -static const struct clk_init_data clk_sys2pll_init = { - .name = "sys2pll_vco", - .ops = &ab_pll_ops, - .parent_names = pll_clk_parents, - .num_parents = array_size(pll_clk_parents), -}; - -static struct clk_pll clk_sys2pll = { - .regofs = sirfsoc_clkc_sys2pll_ab_freq, - .hw = { - .init = &clk_sys2pll_init, - }, -}; - -static const struct clk_init_data clk_sys3pll_init = { - .name = "sys3pll_vco", - .ops 
= &ab_pll_ops, - .parent_names = pll_clk_parents, - .num_parents = array_size(pll_clk_parents), -}; - -static struct clk_pll clk_sys3pll = { - .regofs = sirfsoc_clkc_sys3pll_ab_freq, - .hw = { - .init = &clk_sys3pll_init, - }, -}; - -/* - * dto in clkc, default enable double resolution mode - * double resolution mode:fout = fin * finc / 2^29 - * normal mode:fout = fin * finc / 2^28 - */ -#define dto_resl_double (1ull << 29) -#define dto_resl_normal (1ull << 28) - -static int dto_clk_is_enabled(struct clk_hw *hw) -{ - struct clk_dto *clk = to_dtoclk(hw); - int reg; - - reg = clk->src_offset + sirfsoc_clkc_audio_dto_ena - sirfsoc_clkc_audio_dto_src; - - return !!(clkc_readl(reg) & bit(0)); -} - -static int dto_clk_enable(struct clk_hw *hw) -{ - u32 val, reg; - struct clk_dto *clk = to_dtoclk(hw); - - reg = clk->src_offset + sirfsoc_clkc_audio_dto_ena - sirfsoc_clkc_audio_dto_src; - - val = clkc_readl(reg) | bit(0); - clkc_writel(val, reg); - return 0; -} - -static void dto_clk_disable(struct clk_hw *hw) -{ - u32 val, reg; - struct clk_dto *clk = to_dtoclk(hw); - - reg = clk->src_offset + sirfsoc_clkc_audio_dto_ena - sirfsoc_clkc_audio_dto_src; - - val = clkc_readl(reg) & ~bit(0); - clkc_writel(val, reg); -} - -static unsigned long dto_clk_recalc_rate(struct clk_hw *hw, - unsigned long parent_rate) -{ - u64 rate = parent_rate; - struct clk_dto *clk = to_dtoclk(hw); - u32 finc = clkc_readl(clk->inc_offset); - u32 droff = clkc_readl(clk->src_offset + sirfsoc_clkc_audio_dto_droff - sirfsoc_clkc_audio_dto_src); - - rate *= finc; - if (droff & bit(0)) - /* double resolution off */ - do_div(rate, dto_resl_normal); - else - do_div(rate, dto_resl_double); - - return rate; -} - -static long dto_clk_round_rate(struct clk_hw *hw, unsigned long rate, - unsigned long *parent_rate) -{ - u64 dividend = rate * dto_resl_double; - - do_div(dividend, *parent_rate); - dividend *= *parent_rate; - do_div(dividend, dto_resl_double); - - return dividend; -} - -static int 
dto_clk_set_rate(struct clk_hw *hw, unsigned long rate, - unsigned long parent_rate) -{ - u64 dividend = rate * dto_resl_double; - struct clk_dto *clk = to_dtoclk(hw); - - do_div(dividend, parent_rate); - clkc_writel(0, clk->src_offset + sirfsoc_clkc_audio_dto_droff - sirfsoc_clkc_audio_dto_src); - clkc_writel(dividend, clk->inc_offset); - - return 0; -} - -static u8 dto_clk_get_parent(struct clk_hw *hw) -{ - struct clk_dto *clk = to_dtoclk(hw); - - return clkc_readl(clk->src_offset); -} - -/* - * dto need clk_set_parent_gate - */ -static int dto_clk_set_parent(struct clk_hw *hw, u8 index) -{ - struct clk_dto *clk = to_dtoclk(hw); - - clkc_writel(index, clk->src_offset); - return 0; -} - -static const struct clk_ops dto_ops = { - .is_enabled = dto_clk_is_enabled, - .enable = dto_clk_enable, - .disable = dto_clk_disable, - .recalc_rate = dto_clk_recalc_rate, - .round_rate = dto_clk_round_rate, - .set_rate = dto_clk_set_rate, - .get_parent = dto_clk_get_parent, - .set_parent = dto_clk_set_parent, -}; - -/* dto parent clock as syspllvco/clk1 */ -static const char * const audiodto_clk_parents[] = { - "sys0pll_clk1", - "sys1pll_clk1", - "sys3pll_clk1", -}; - -static const struct clk_init_data clk_audiodto_init = { - .name = "audio_dto", - .ops = &dto_ops, - .parent_names = audiodto_clk_parents, - .num_parents = array_size(audiodto_clk_parents), -}; - -static struct clk_dto clk_audio_dto = { - .inc_offset = sirfsoc_clkc_audio_dto_inc, - .src_offset = sirfsoc_clkc_audio_dto_src, - .hw = { - .init = &clk_audiodto_init, - }, -}; - -static const char * const disp0dto_clk_parents[] = { - "sys0pll_clk1", - "sys1pll_clk1", - "sys3pll_clk1", -}; - -static const struct clk_init_data clk_disp0dto_init = { - .name = "disp0_dto", - .ops = &dto_ops, - .parent_names = disp0dto_clk_parents, - .num_parents = array_size(disp0dto_clk_parents), -}; - -static struct clk_dto clk_disp0_dto = { - .inc_offset = sirfsoc_clkc_disp0_dto_inc, - .src_offset = sirfsoc_clkc_disp0_dto_src, - .hw = { - 
.init = &clk_disp0dto_init, - }, -}; - -static const char * const disp1dto_clk_parents[] = { - "sys0pll_clk1", - "sys1pll_clk1", - "sys3pll_clk1", -}; - -static const struct clk_init_data clk_disp1dto_init = { - .name = "disp1_dto", - .ops = &dto_ops, - .parent_names = disp1dto_clk_parents, - .num_parents = array_size(disp1dto_clk_parents), -}; - -static struct clk_dto clk_disp1_dto = { - .inc_offset = sirfsoc_clkc_disp1_dto_inc, - .src_offset = sirfsoc_clkc_disp1_dto_src, - .hw = { - .init = &clk_disp1dto_init, - }, -}; - -static struct atlas7_div_init_data divider_list[] __initdata = { - /* div_name, parent_name, gate_name, clk_flag, divider_flag, gate_flag, div_offset, shift, wdith, gate_offset, bit_enable, lock */ - { "sys0pll_qa1", "sys0pll_fixdiv", "sys0pll_a1", 0, 0, 0, sirfsoc_clkc_usbphy_clkdiv_cfg, 0, 6, sirfsoc_clkc_usbphy_clkdiv_ena, 0, &usbphy_div_lock }, - { "sys1pll_qa1", "sys1pll_fixdiv", "sys1pll_a1", 0, 0, 0, sirfsoc_clkc_usbphy_clkdiv_cfg, 8, 6, sirfsoc_clkc_usbphy_clkdiv_ena, 4, &usbphy_div_lock }, - { "sys2pll_qa1", "sys2pll_fixdiv", "sys2pll_a1", 0, 0, 0, sirfsoc_clkc_usbphy_clkdiv_cfg, 16, 6, sirfsoc_clkc_usbphy_clkdiv_ena, 8, &usbphy_div_lock }, - { "sys3pll_qa1", "sys3pll_fixdiv", "sys3pll_a1", 0, 0, 0, sirfsoc_clkc_usbphy_clkdiv_cfg, 24, 6, sirfsoc_clkc_usbphy_clkdiv_ena, 12, &usbphy_div_lock }, - { "sys0pll_qa2", "sys0pll_fixdiv", "sys0pll_a2", 0, 0, 0, sirfsoc_clkc_btss_clkdiv_cfg, 0, 6, sirfsoc_clkc_btss_clkdiv_ena, 0, &btss_div_lock }, - { "sys1pll_qa2", "sys1pll_fixdiv", "sys1pll_a2", 0, 0, 0, sirfsoc_clkc_btss_clkdiv_cfg, 8, 6, sirfsoc_clkc_btss_clkdiv_ena, 4, &btss_div_lock }, - { "sys2pll_qa2", "sys2pll_fixdiv", "sys2pll_a2", 0, 0, 0, sirfsoc_clkc_btss_clkdiv_cfg, 16, 6, sirfsoc_clkc_btss_clkdiv_ena, 8, &btss_div_lock }, - { "sys3pll_qa2", "sys3pll_fixdiv", "sys3pll_a2", 0, 0, 0, sirfsoc_clkc_btss_clkdiv_cfg, 24, 6, sirfsoc_clkc_btss_clkdiv_ena, 12, &btss_div_lock }, - { "sys0pll_qa3", "sys0pll_fixdiv", "sys0pll_a3", 0, 0, 0, 
sirfsoc_clkc_rgmii_clkdiv_cfg, 0, 6, sirfsoc_clkc_rgmii_clkdiv_ena, 0, &rgmii_div_lock }, - { "sys1pll_qa3", "sys1pll_fixdiv", "sys1pll_a3", 0, 0, 0, sirfsoc_clkc_rgmii_clkdiv_cfg, 8, 6, sirfsoc_clkc_rgmii_clkdiv_ena, 4, &rgmii_div_lock }, - { "sys2pll_qa3", "sys2pll_fixdiv", "sys2pll_a3", 0, 0, 0, sirfsoc_clkc_rgmii_clkdiv_cfg, 16, 6, sirfsoc_clkc_rgmii_clkdiv_ena, 8, &rgmii_div_lock }, - { "sys3pll_qa3", "sys3pll_fixdiv", "sys3pll_a3", 0, 0, 0, sirfsoc_clkc_rgmii_clkdiv_cfg, 24, 6, sirfsoc_clkc_rgmii_clkdiv_ena, 12, &rgmii_div_lock }, - { "sys0pll_qa4", "sys0pll_fixdiv", "sys0pll_a4", 0, 0, 0, sirfsoc_clkc_cpu_clkdiv_cfg, 0, 6, sirfsoc_clkc_cpu_clkdiv_ena, 0, &cpu_div_lock }, - { "sys1pll_qa4", "sys1pll_fixdiv", "sys1pll_a4", 0, 0, clk_ignore_unused, sirfsoc_clkc_cpu_clkdiv_cfg, 8, 6, sirfsoc_clkc_cpu_clkdiv_ena, 4, &cpu_div_lock }, - { "sys0pll_qa5", "sys0pll_fixdiv", "sys0pll_a5", 0, 0, 0, sirfsoc_clkc_sdphy01_clkdiv_cfg, 0, 6, sirfsoc_clkc_sdphy01_clkdiv_ena, 0, &sdphy01_div_lock }, - { "sys1pll_qa5", "sys1pll_fixdiv", "sys1pll_a5", 0, 0, 0, sirfsoc_clkc_sdphy01_clkdiv_cfg, 8, 6, sirfsoc_clkc_sdphy01_clkdiv_ena, 4, &sdphy01_div_lock }, - { "sys2pll_qa5", "sys2pll_fixdiv", "sys2pll_a5", 0, 0, 0, sirfsoc_clkc_sdphy01_clkdiv_cfg, 16, 6, sirfsoc_clkc_sdphy01_clkdiv_ena, 8, &sdphy01_div_lock }, - { "sys3pll_qa5", "sys3pll_fixdiv", "sys3pll_a5", 0, 0, 0, sirfsoc_clkc_sdphy01_clkdiv_cfg, 24, 6, sirfsoc_clkc_sdphy01_clkdiv_ena, 12, &sdphy01_div_lock }, - { "sys0pll_qa6", "sys0pll_fixdiv", "sys0pll_a6", 0, 0, 0, sirfsoc_clkc_sdphy23_clkdiv_cfg, 0, 6, sirfsoc_clkc_sdphy23_clkdiv_ena, 0, &sdphy23_div_lock }, - { "sys1pll_qa6", "sys1pll_fixdiv", "sys1pll_a6", 0, 0, 0, sirfsoc_clkc_sdphy23_clkdiv_cfg, 8, 6, sirfsoc_clkc_sdphy23_clkdiv_ena, 4, &sdphy23_div_lock }, - { "sys2pll_qa6", "sys2pll_fixdiv", "sys2pll_a6", 0, 0, 0, sirfsoc_clkc_sdphy23_clkdiv_cfg, 16, 6, sirfsoc_clkc_sdphy23_clkdiv_ena, 8, &sdphy23_div_lock }, - { "sys3pll_qa6", "sys3pll_fixdiv", "sys3pll_a6", 0, 0, 
0, sirfsoc_clkc_sdphy23_clkdiv_cfg, 24, 6, sirfsoc_clkc_sdphy23_clkdiv_ena, 12, &sdphy23_div_lock }, - { "sys0pll_qa7", "sys0pll_fixdiv", "sys0pll_a7", 0, 0, 0, sirfsoc_clkc_sdphy45_clkdiv_cfg, 0, 6, sirfsoc_clkc_sdphy45_clkdiv_ena, 0, &sdphy45_div_lock }, - { "sys1pll_qa7", "sys1pll_fixdiv", "sys1pll_a7", 0, 0, 0, sirfsoc_clkc_sdphy45_clkdiv_cfg, 8, 6, sirfsoc_clkc_sdphy45_clkdiv_ena, 4, &sdphy45_div_lock }, - { "sys2pll_qa7", "sys2pll_fixdiv", "sys2pll_a7", 0, 0, 0, sirfsoc_clkc_sdphy45_clkdiv_cfg, 16, 6, sirfsoc_clkc_sdphy45_clkdiv_ena, 8, &sdphy45_div_lock }, - { "sys3pll_qa7", "sys3pll_fixdiv", "sys3pll_a7", 0, 0, 0, sirfsoc_clkc_sdphy45_clkdiv_cfg, 24, 6, sirfsoc_clkc_sdphy45_clkdiv_ena, 12, &sdphy45_div_lock }, - { "sys0pll_qa8", "sys0pll_fixdiv", "sys0pll_a8", 0, 0, 0, sirfsoc_clkc_sdphy67_clkdiv_cfg, 0, 6, sirfsoc_clkc_sdphy67_clkdiv_ena, 0, &sdphy67_div_lock }, - { "sys1pll_qa8", "sys1pll_fixdiv", "sys1pll_a8", 0, 0, 0, sirfsoc_clkc_sdphy67_clkdiv_cfg, 8, 6, sirfsoc_clkc_sdphy67_clkdiv_ena, 4, &sdphy67_div_lock }, - { "sys2pll_qa8", "sys2pll_fixdiv", "sys2pll_a8", 0, 0, 0, sirfsoc_clkc_sdphy67_clkdiv_cfg, 16, 6, sirfsoc_clkc_sdphy67_clkdiv_ena, 8, &sdphy67_div_lock }, - { "sys3pll_qa8", "sys3pll_fixdiv", "sys3pll_a8", 0, 0, 0, sirfsoc_clkc_sdphy67_clkdiv_cfg, 24, 6, sirfsoc_clkc_sdphy67_clkdiv_ena, 12, &sdphy67_div_lock }, - { "sys0pll_qa9", "sys0pll_fixdiv", "sys0pll_a9", 0, 0, 0, sirfsoc_clkc_can_clkdiv_cfg, 0, 6, sirfsoc_clkc_can_clkdiv_ena, 0, &can_div_lock }, - { "sys1pll_qa9", "sys1pll_fixdiv", "sys1pll_a9", 0, 0, 0, sirfsoc_clkc_can_clkdiv_cfg, 8, 6, sirfsoc_clkc_can_clkdiv_ena, 4, &can_div_lock }, - { "sys2pll_qa9", "sys2pll_fixdiv", "sys2pll_a9", 0, 0, 0, sirfsoc_clkc_can_clkdiv_cfg, 16, 6, sirfsoc_clkc_can_clkdiv_ena, 8, &can_div_lock }, - { "sys3pll_qa9", "sys3pll_fixdiv", "sys3pll_a9", 0, 0, 0, sirfsoc_clkc_can_clkdiv_cfg, 24, 6, sirfsoc_clkc_can_clkdiv_ena, 12, &can_div_lock }, - { "sys0pll_qa10", "sys0pll_fixdiv", "sys0pll_a10", 0, 0, 0, 
sirfsoc_clkc_deint_clkdiv_cfg, 0, 6, sirfsoc_clkc_deint_clkdiv_ena, 0, &deint_div_lock }, - { "sys1pll_qa10", "sys1pll_fixdiv", "sys1pll_a10", 0, 0, 0, sirfsoc_clkc_deint_clkdiv_cfg, 8, 6, sirfsoc_clkc_deint_clkdiv_ena, 4, &deint_div_lock }, - { "sys2pll_qa10", "sys2pll_fixdiv", "sys2pll_a10", 0, 0, 0, sirfsoc_clkc_deint_clkdiv_cfg, 16, 6, sirfsoc_clkc_deint_clkdiv_ena, 8, &deint_div_lock }, - { "sys3pll_qa10", "sys3pll_fixdiv", "sys3pll_a10", 0, 0, 0, sirfsoc_clkc_deint_clkdiv_cfg, 24, 6, sirfsoc_clkc_deint_clkdiv_ena, 12, &deint_div_lock }, - { "sys0pll_qa11", "sys0pll_fixdiv", "sys0pll_a11", 0, 0, 0, sirfsoc_clkc_nand_clkdiv_cfg, 0, 6, sirfsoc_clkc_nand_clkdiv_ena, 0, &nand_div_lock }, - { "sys1pll_qa11", "sys1pll_fixdiv", "sys1pll_a11", 0, 0, 0, sirfsoc_clkc_nand_clkdiv_cfg, 8, 6, sirfsoc_clkc_nand_clkdiv_ena, 4, &nand_div_lock }, - { "sys2pll_qa11", "sys2pll_fixdiv", "sys2pll_a11", 0, 0, 0, sirfsoc_clkc_nand_clkdiv_cfg, 16, 6, sirfsoc_clkc_nand_clkdiv_ena, 8, &nand_div_lock }, - { "sys3pll_qa11", "sys3pll_fixdiv", "sys3pll_a11", 0, 0, 0, sirfsoc_clkc_nand_clkdiv_cfg, 24, 6, sirfsoc_clkc_nand_clkdiv_ena, 12, &nand_div_lock }, - { "sys0pll_qa12", "sys0pll_fixdiv", "sys0pll_a12", 0, 0, 0, sirfsoc_clkc_disp0_clkdiv_cfg, 0, 6, sirfsoc_clkc_disp0_clkdiv_ena, 0, &disp0_div_lock }, - { "sys1pll_qa12", "sys1pll_fixdiv", "sys1pll_a12", 0, 0, 0, sirfsoc_clkc_disp0_clkdiv_cfg, 8, 6, sirfsoc_clkc_disp0_clkdiv_ena, 4, &disp0_div_lock }, - { "sys2pll_qa12", "sys2pll_fixdiv", "sys2pll_a12", 0, 0, 0, sirfsoc_clkc_disp0_clkdiv_cfg, 16, 6, sirfsoc_clkc_disp0_clkdiv_ena, 8, &disp0_div_lock }, - { "sys3pll_qa12", "sys3pll_fixdiv", "sys3pll_a12", 0, 0, 0, sirfsoc_clkc_disp0_clkdiv_cfg, 24, 6, sirfsoc_clkc_disp0_clkdiv_ena, 12, &disp0_div_lock }, - { "sys0pll_qa13", "sys0pll_fixdiv", "sys0pll_a13", 0, 0, 0, sirfsoc_clkc_disp1_clkdiv_cfg, 0, 6, sirfsoc_clkc_disp1_clkdiv_ena, 0, &disp1_div_lock }, - { "sys1pll_qa13", "sys1pll_fixdiv", "sys1pll_a13", 0, 0, 0, 
sirfsoc_clkc_disp1_clkdiv_cfg, 8, 6, sirfsoc_clkc_disp1_clkdiv_ena, 4, &disp1_div_lock }, - { "sys2pll_qa13", "sys2pll_fixdiv", "sys2pll_a13", 0, 0, 0, sirfsoc_clkc_disp1_clkdiv_cfg, 16, 6, sirfsoc_clkc_disp1_clkdiv_ena, 8, &disp1_div_lock }, - { "sys3pll_qa13", "sys3pll_fixdiv", "sys3pll_a13", 0, 0, 0, sirfsoc_clkc_disp1_clkdiv_cfg, 24, 6, sirfsoc_clkc_disp1_clkdiv_ena, 12, &disp1_div_lock }, - { "sys0pll_qa14", "sys0pll_fixdiv", "sys0pll_a14", 0, 0, 0, sirfsoc_clkc_gpu_clkdiv_cfg, 0, 6, sirfsoc_clkc_gpu_clkdiv_ena, 0, &gpu_div_lock }, - { "sys1pll_qa14", "sys1pll_fixdiv", "sys1pll_a14", 0, 0, 0, sirfsoc_clkc_gpu_clkdiv_cfg, 8, 6, sirfsoc_clkc_gpu_clkdiv_ena, 4, &gpu_div_lock }, - { "sys2pll_qa14", "sys2pll_fixdiv", "sys2pll_a14", 0, 0, 0, sirfsoc_clkc_gpu_clkdiv_cfg, 16, 6, sirfsoc_clkc_gpu_clkdiv_ena, 8, &gpu_div_lock }, - { "sys3pll_qa14", "sys3pll_fixdiv", "sys3pll_a14", 0, 0, 0, sirfsoc_clkc_gpu_clkdiv_cfg, 24, 6, sirfsoc_clkc_gpu_clkdiv_ena, 12, &gpu_div_lock }, - { "sys0pll_qa15", "sys0pll_fixdiv", "sys0pll_a15", 0, 0, 0, sirfsoc_clkc_gnss_clkdiv_cfg, 0, 6, sirfsoc_clkc_gnss_clkdiv_ena, 0, &gnss_div_lock }, - { "sys1pll_qa15", "sys1pll_fixdiv", "sys1pll_a15", 0, 0, 0, sirfsoc_clkc_gnss_clkdiv_cfg, 8, 6, sirfsoc_clkc_gnss_clkdiv_ena, 4, &gnss_div_lock }, - { "sys2pll_qa15", "sys2pll_fixdiv", "sys2pll_a15", 0, 0, 0, sirfsoc_clkc_gnss_clkdiv_cfg, 16, 6, sirfsoc_clkc_gnss_clkdiv_ena, 8, &gnss_div_lock }, - { "sys3pll_qa15", "sys3pll_fixdiv", "sys3pll_a15", 0, 0, 0, sirfsoc_clkc_gnss_clkdiv_cfg, 24, 6, sirfsoc_clkc_gnss_clkdiv_ena, 12, &gnss_div_lock }, - { "sys1pll_qa18", "sys1pll_fixdiv", "sys1pll_a18", 0, 0, 0, sirfsoc_clkc_shared_divider_cfg0, 24, 6, sirfsoc_clkc_shared_divider_ena, 12, &share_div_lock }, - { "sys1pll_qa19", "sys1pll_fixdiv", "sys1pll_a19", 0, 0, clk_ignore_unused, sirfsoc_clkc_shared_divider_cfg0, 16, 6, sirfsoc_clkc_shared_divider_ena, 8, &share_div_lock }, - { "sys1pll_qa20", "sys1pll_fixdiv", "sys1pll_a20", 0, 0, 0, 
sirfsoc_clkc_shared_divider_cfg0, 8, 6, sirfsoc_clkc_shared_divider_ena, 4, &share_div_lock }, - { "sys2pll_qa20", "sys2pll_fixdiv", "sys2pll_a20", 0, 0, 0, sirfsoc_clkc_shared_divider_cfg0, 0, 6, sirfsoc_clkc_shared_divider_ena, 0, &share_div_lock }, - { "sys1pll_qa17", "sys1pll_fixdiv", "sys1pll_a17", 0, 0, clk_ignore_unused, sirfsoc_clkc_shared_divider_cfg1, 8, 6, sirfsoc_clkc_shared_divider_ena, 20, &share_div_lock }, - { "sys0pll_qa20", "sys0pll_fixdiv", "sys0pll_a20", 0, 0, 0, sirfsoc_clkc_shared_divider_cfg1, 0, 6, sirfsoc_clkc_shared_divider_ena, 16, &share_div_lock }, -}; - -static const char * const i2s_clk_parents[] = { - "xin", - "xinw", - "audio_dto", - /* "pwm_i2s01" */ -}; - -static const char * const usbphy_clk_parents[] = { - "xin", - "xinw", - "sys0pll_a1", - "sys1pll_a1", - "sys2pll_a1", - "sys3pll_a1", -}; - -static const char * const btss_clk_parents[] = { - "xin", - "xinw", - "sys0pll_a2", - "sys1pll_a2", - "sys2pll_a2", - "sys3pll_a2", -}; - -static const char * const rgmii_clk_parents[] = { - "xin", - "xinw", - "sys0pll_a3", - "sys1pll_a3", - "sys2pll_a3", - "sys3pll_a3", -}; - -static const char * const cpu_clk_parents[] = { - "xin", - "xinw", - "sys0pll_a4", - "sys1pll_a4", - "cpupll_clk1", -}; - -static const char * const sdphy01_clk_parents[] = { - "xin", - "xinw", - "sys0pll_a5", - "sys1pll_a5", - "sys2pll_a5", - "sys3pll_a5", -}; - -static const char * const sdphy23_clk_parents[] = { - "xin", - "xinw", - "sys0pll_a6", - "sys1pll_a6", - "sys2pll_a6", - "sys3pll_a6", -}; - -static const char * const sdphy45_clk_parents[] = { - "xin", - "xinw", - "sys0pll_a7", - "sys1pll_a7", - "sys2pll_a7", - "sys3pll_a7", -}; - -static const char * const sdphy67_clk_parents[] = { - "xin", - "xinw", - "sys0pll_a8", - "sys1pll_a8", - "sys2pll_a8", - "sys3pll_a8", -}; - -static const char * const can_clk_parents[] = { - "xin", - "xinw", - "sys0pll_a9", - "sys1pll_a9", - "sys2pll_a9", - "sys3pll_a9", -}; - -static const char * const deint_clk_parents[] = { 
- "xin", - "xinw", - "sys0pll_a10", - "sys1pll_a10", - "sys2pll_a10", - "sys3pll_a10", -}; - -static const char * const nand_clk_parents[] = { - "xin", - "xinw", - "sys0pll_a11", - "sys1pll_a11", - "sys2pll_a11", - "sys3pll_a11", -}; - -static const char * const disp0_clk_parents[] = { - "xin", - "xinw", - "sys0pll_a12", - "sys1pll_a12", - "sys2pll_a12", - "sys3pll_a12", - "disp0_dto", -}; - -static const char * const disp1_clk_parents[] = { - "xin", - "xinw", - "sys0pll_a13", - "sys1pll_a13", - "sys2pll_a13", - "sys3pll_a13", - "disp1_dto", -}; - -static const char * const gpu_clk_parents[] = { - "xin", - "xinw", - "sys0pll_a14", - "sys1pll_a14", - "sys2pll_a14", - "sys3pll_a14", -}; - -static const char * const gnss_clk_parents[] = { - "xin", - "xinw", - "sys0pll_a15", - "sys1pll_a15", - "sys2pll_a15", - "sys3pll_a15", -}; - -static const char * const sys_clk_parents[] = { - "xin", - "xinw", - "sys2pll_a20", - "sys1pll_a20", - "sys1pll_a19", - "sys1pll_a18", - "sys0pll_a20", - "sys1pll_a17", -}; - -static const char * const io_clk_parents[] = { - "xin", - "xinw", - "sys2pll_a20", - "sys1pll_a20", - "sys1pll_a19", - "sys1pll_a18", - "sys0pll_a20", - "sys1pll_a17", -}; - -static const char * const g2d_clk_parents[] = { - "xin", - "xinw", - "sys2pll_a20", - "sys1pll_a20", - "sys1pll_a19", - "sys1pll_a18", - "sys0pll_a20", - "sys1pll_a17", -}; - -static const char * const jpenc_clk_parents[] = { - "xin", - "xinw", - "sys2pll_a20", - "sys1pll_a20", - "sys1pll_a19", - "sys1pll_a18", - "sys0pll_a20", - "sys1pll_a17", -}; - -static const char * const vdec_clk_parents[] = { - "xin", - "xinw", - "sys2pll_a20", - "sys1pll_a20", - "sys1pll_a19", - "sys1pll_a18", - "sys0pll_a20", - "sys1pll_a17", -}; - -static const char * const gmac_clk_parents[] = { - "xin", - "xinw", - "sys2pll_a20", - "sys1pll_a20", - "sys1pll_a19", - "sys1pll_a18", - "sys0pll_a20", - "sys1pll_a17", -}; - -static const char * const usb_clk_parents[] = { - "xin", - "xinw", - "sys2pll_a20", - "sys1pll_a20", 
- "sys1pll_a19", - "sys1pll_a18", - "sys0pll_a20", - "sys1pll_a17", -}; - -static const char * const kas_clk_parents[] = { - "xin", - "xinw", - "sys2pll_a20", - "sys1pll_a20", - "sys1pll_a19", - "sys1pll_a18", - "sys0pll_a20", - "sys1pll_a17", -}; - -static const char * const sec_clk_parents[] = { - "xin", - "xinw", - "sys2pll_a20", - "sys1pll_a20", - "sys1pll_a19", - "sys1pll_a18", - "sys0pll_a20", - "sys1pll_a17", -}; - -static const char * const sdr_clk_parents[] = { - "xin", - "xinw", - "sys2pll_a20", - "sys1pll_a20", - "sys1pll_a19", - "sys1pll_a18", - "sys0pll_a20", - "sys1pll_a17", -}; - -static const char * const vip_clk_parents[] = { - "xin", - "xinw", - "sys2pll_a20", - "sys1pll_a20", - "sys1pll_a19", - "sys1pll_a18", - "sys0pll_a20", - "sys1pll_a17", -}; - -static const char * const nocd_clk_parents[] = { - "xin", - "xinw", - "sys2pll_a20", - "sys1pll_a20", - "sys1pll_a19", - "sys1pll_a18", - "sys0pll_a20", - "sys1pll_a17", -}; - -static const char * const nocr_clk_parents[] = { - "xin", - "xinw", - "sys2pll_a20", - "sys1pll_a20", - "sys1pll_a19", - "sys1pll_a18", - "sys0pll_a20", - "sys1pll_a17", -}; - -static const char * const tpiu_clk_parents[] = { - "xin", - "xinw", - "sys2pll_a20", - "sys1pll_a20", - "sys1pll_a19", - "sys1pll_a18", - "sys0pll_a20", - "sys1pll_a17", -}; - -static struct atlas7_mux_init_data mux_list[] __initdata = { - /* mux_name, parent_names, parent_num, flags, mux_flags, mux_offset, shift, width */ - { "i2s_mux", i2s_clk_parents, array_size(i2s_clk_parents), 0, 0, sirfsoc_clkc_i2s_clk_sel, 0, 2 }, - { "usbphy_mux", usbphy_clk_parents, array_size(usbphy_clk_parents), 0, 0, sirfsoc_clkc_i2s_clk_sel, 0, 3 }, - { "btss_mux", btss_clk_parents, array_size(btss_clk_parents), 0, 0, sirfsoc_clkc_btss_clk_sel, 0, 3 }, - { "rgmii_mux", rgmii_clk_parents, array_size(rgmii_clk_parents), 0, 0, sirfsoc_clkc_rgmii_clk_sel, 0, 3 }, - { "cpu_mux", cpu_clk_parents, array_size(cpu_clk_parents), 0, 0, sirfsoc_clkc_cpu_clk_sel, 0, 3 }, - { 
"sdphy01_mux", sdphy01_clk_parents, array_size(sdphy01_clk_parents), 0, 0, sirfsoc_clkc_sdphy01_clk_sel, 0, 3 }, - { "sdphy23_mux", sdphy23_clk_parents, array_size(sdphy23_clk_parents), 0, 0, sirfsoc_clkc_sdphy23_clk_sel, 0, 3 }, - { "sdphy45_mux", sdphy45_clk_parents, array_size(sdphy45_clk_parents), 0, 0, sirfsoc_clkc_sdphy45_clk_sel, 0, 3 }, - { "sdphy67_mux", sdphy67_clk_parents, array_size(sdphy67_clk_parents), 0, 0, sirfsoc_clkc_sdphy67_clk_sel, 0, 3 }, - { "can_mux", can_clk_parents, array_size(can_clk_parents), 0, 0, sirfsoc_clkc_can_clk_sel, 0, 3 }, - { "deint_mux", deint_clk_parents, array_size(deint_clk_parents), 0, 0, sirfsoc_clkc_deint_clk_sel, 0, 3 }, - { "nand_mux", nand_clk_parents, array_size(nand_clk_parents), 0, 0, sirfsoc_clkc_nand_clk_sel, 0, 3 }, - { "disp0_mux", disp0_clk_parents, array_size(disp0_clk_parents), 0, 0, sirfsoc_clkc_disp0_clk_sel, 0, 3 }, - { "disp1_mux", disp1_clk_parents, array_size(disp1_clk_parents), 0, 0, sirfsoc_clkc_disp1_clk_sel, 0, 3 }, - { "gpu_mux", gpu_clk_parents, array_size(gpu_clk_parents), 0, 0, sirfsoc_clkc_gpu_clk_sel, 0, 3 }, - { "gnss_mux", gnss_clk_parents, array_size(gnss_clk_parents), 0, 0, sirfsoc_clkc_gnss_clk_sel, 0, 3 }, - { "sys_mux", sys_clk_parents, array_size(sys_clk_parents), 0, 0, sirfsoc_clkc_sys_clk_sel, 0, 3 }, - { "io_mux", io_clk_parents, array_size(io_clk_parents), 0, 0, sirfsoc_clkc_io_clk_sel, 0, 3 }, - { "g2d_mux", g2d_clk_parents, array_size(g2d_clk_parents), 0, 0, sirfsoc_clkc_g2d_clk_sel, 0, 3 }, - { "jpenc_mux", jpenc_clk_parents, array_size(jpenc_clk_parents), 0, 0, sirfsoc_clkc_jpenc_clk_sel, 0, 3 }, - { "vdec_mux", vdec_clk_parents, array_size(vdec_clk_parents), 0, 0, sirfsoc_clkc_vdec_clk_sel, 0, 3 }, - { "gmac_mux", gmac_clk_parents, array_size(gmac_clk_parents), 0, 0, sirfsoc_clkc_gmac_clk_sel, 0, 3 }, - { "usb_mux", usb_clk_parents, array_size(usb_clk_parents), 0, 0, sirfsoc_clkc_usb_clk_sel, 0, 3 }, - { "kas_mux", kas_clk_parents, array_size(kas_clk_parents), 0, 0, 
sirfsoc_clkc_kas_clk_sel, 0, 3 }, - { "sec_mux", sec_clk_parents, array_size(sec_clk_parents), 0, 0, sirfsoc_clkc_sec_clk_sel, 0, 3 }, - { "sdr_mux", sdr_clk_parents, array_size(sdr_clk_parents), 0, 0, sirfsoc_clkc_sdr_clk_sel, 0, 3 }, - { "vip_mux", vip_clk_parents, array_size(vip_clk_parents), 0, 0, sirfsoc_clkc_vip_clk_sel, 0, 3 }, - { "nocd_mux", nocd_clk_parents, array_size(nocd_clk_parents), 0, 0, sirfsoc_clkc_nocd_clk_sel, 0, 3 }, - { "nocr_mux", nocr_clk_parents, array_size(nocr_clk_parents), 0, 0, sirfsoc_clkc_nocr_clk_sel, 0, 3 }, - { "tpiu_mux", tpiu_clk_parents, array_size(tpiu_clk_parents), 0, 0, sirfsoc_clkc_tpiu_clk_sel, 0, 3 }, -}; - - /* new unit should add start from the tail of list */ -static struct atlas7_unit_init_data unit_list[] __initdata = { - /* unit_name, parent_name, flags, regofs, bit, lock */ - { 0, "audmscm_kas", "kas_mux", 0, sirfsoc_clkc_root_clk_en0_set, 0, 0, 0, &root0_gate_lock }, - { 1, "gnssm_gnss", "gnss_mux", 0, sirfsoc_clkc_root_clk_en0_set, 1, 0, 0, &root0_gate_lock }, - { 2, "gpum_gpu", "gpu_mux", 0, sirfsoc_clkc_root_clk_en0_set, 2, 0, 0, &root0_gate_lock }, - { 3, "mediam_g2d", "g2d_mux", 0, sirfsoc_clkc_root_clk_en0_set, 3, 0, 0, &root0_gate_lock }, - { 4, "mediam_jpenc", "jpenc_mux", 0, sirfsoc_clkc_root_clk_en0_set, 4, 0, 0, &root0_gate_lock }, - { 5, "vdifm_disp0", "disp0_mux", 0, sirfsoc_clkc_root_clk_en0_set, 5, 0, 0, &root0_gate_lock }, - { 6, "vdifm_disp1", "disp1_mux", 0, sirfsoc_clkc_root_clk_en0_set, 6, 0, 0, &root0_gate_lock }, - { 7, "audmscm_i2s", "i2s_mux", 0, sirfsoc_clkc_root_clk_en0_set, 8, 0, 0, &root0_gate_lock }, - { 8, "audmscm_io", "io_mux", 0, sirfsoc_clkc_root_clk_en0_set, 11, 0, 0, &root0_gate_lock }, - { 9, "vdifm_io", "io_mux", 0, sirfsoc_clkc_root_clk_en0_set, 12, 0, 0, &root0_gate_lock }, - { 10, "gnssm_io", "io_mux", 0, sirfsoc_clkc_root_clk_en0_set, 13, 0, 0, &root0_gate_lock }, - { 11, "mediam_io", "io_mux", 0, sirfsoc_clkc_root_clk_en0_set, 14, 0, 0, &root0_gate_lock }, - { 12, 
"btm_io", "io_mux", 0, sirfsoc_clkc_root_clk_en0_set, 17, 0, 0, &root0_gate_lock }, - { 13, "mediam_sdphy01", "sdphy01_mux", 0, sirfsoc_clkc_root_clk_en0_set, 18, 0, 0, &root0_gate_lock }, - { 14, "vdifm_sdphy23", "sdphy23_mux", 0, sirfsoc_clkc_root_clk_en0_set, 19, 0, 0, &root0_gate_lock }, - { 15, "vdifm_sdphy45", "sdphy45_mux", 0, sirfsoc_clkc_root_clk_en0_set, 20, 0, 0, &root0_gate_lock }, - { 16, "vdifm_sdphy67", "sdphy67_mux", 0, sirfsoc_clkc_root_clk_en0_set, 21, 0, 0, &root0_gate_lock }, - { 17, "audmscm_xin", "xin", 0, sirfsoc_clkc_root_clk_en0_set, 22, 0, 0, &root0_gate_lock }, - { 18, "mediam_nand", "nand_mux", 0, sirfsoc_clkc_root_clk_en0_set, 27, 0, 0, &root0_gate_lock }, - { 19, "gnssm_sec", "sec_mux", 0, sirfsoc_clkc_root_clk_en0_set, 28, 0, 0, &root0_gate_lock }, - { 20, "cpum_cpu", "cpu_mux", 0, sirfsoc_clkc_root_clk_en0_set, 29, 0, 0, &root0_gate_lock }, - { 21, "gnssm_xin", "xin", 0, sirfsoc_clkc_root_clk_en0_set, 30, 0, 0, &root0_gate_lock }, - { 22, "vdifm_vip", "vip_mux", 0, sirfsoc_clkc_root_clk_en0_set, 31, 0, 0, &root0_gate_lock }, - { 23, "btm_btss", "btss_mux", 0, sirfsoc_clkc_root_clk_en1_set, 0, 0, 0, &root1_gate_lock }, - { 24, "mediam_usbphy", "usbphy_mux", 0, sirfsoc_clkc_root_clk_en1_set, 1, 0, 0, &root1_gate_lock }, - { 25, "rtcm_kas", "kas_mux", 0, sirfsoc_clkc_root_clk_en1_set, 2, 0, 0, &root1_gate_lock }, - { 26, "audmscm_nocd", "nocd_mux", 0, sirfsoc_clkc_root_clk_en1_set, 3, 0, 0, &root1_gate_lock }, - { 27, "vdifm_nocd", "nocd_mux", 0, sirfsoc_clkc_root_clk_en1_set, 4, 0, 0, &root1_gate_lock }, - { 28, "gnssm_nocd", "nocd_mux", 0, sirfsoc_clkc_root_clk_en1_set, 5, 0, 0, &root1_gate_lock }, - { 29, "mediam_nocd", "nocd_mux", 0, sirfsoc_clkc_root_clk_en1_set, 6, 0, 0, &root1_gate_lock }, - { 30, "cpum_nocd", "nocd_mux", 0, sirfsoc_clkc_root_clk_en1_set, 8, 0, 0, &root1_gate_lock }, - { 31, "gpum_nocd", "nocd_mux", 0, sirfsoc_clkc_root_clk_en1_set, 9, 0, 0, &root1_gate_lock }, - { 32, "audmscm_nocr", "nocr_mux", 0, 
sirfsoc_clkc_root_clk_en1_set, 11, 0, 0, &root1_gate_lock }, - { 33, "vdifm_nocr", "nocr_mux", 0, sirfsoc_clkc_root_clk_en1_set, 12, 0, 0, &root1_gate_lock }, - { 34, "gnssm_nocr", "nocr_mux", clk_ignore_unused, sirfsoc_clkc_root_clk_en1_set, 13, 0, 0, &root1_gate_lock }, - { 35, "mediam_nocr", "nocr_mux", clk_ignore_unused, sirfsoc_clkc_root_clk_en1_set, 14, 0, 0, &root1_gate_lock }, - { 36, "ddrm_nocr", "nocr_mux", clk_ignore_unused, sirfsoc_clkc_root_clk_en1_set, 15, 0, 0, &root1_gate_lock }, - { 37, "cpum_tpiu", "tpiu_mux", 0, sirfsoc_clkc_root_clk_en1_set, 16, 0, 0, &root1_gate_lock }, - { 38, "gpum_nocr", "nocr_mux", 0, sirfsoc_clkc_root_clk_en1_set, 17, 0, 0, &root1_gate_lock }, - { 39, "gnssm_rgmii", "rgmii_mux", 0, sirfsoc_clkc_root_clk_en1_set, 20, 0, 0, &root1_gate_lock }, - { 40, "mediam_vdec", "vdec_mux", 0, sirfsoc_clkc_root_clk_en1_set, 21, 0, 0, &root1_gate_lock }, - { 41, "gpum_sdr", "sdr_mux", 0, sirfsoc_clkc_root_clk_en1_set, 22, 0, 0, &root1_gate_lock }, - { 42, "vdifm_deint", "deint_mux", 0, sirfsoc_clkc_root_clk_en1_set, 23, 0, 0, &root1_gate_lock }, - { 43, "gnssm_can", "can_mux", 0, sirfsoc_clkc_root_clk_en1_set, 26, 0, 0, &root1_gate_lock }, - { 44, "mediam_usb", "usb_mux", 0, sirfsoc_clkc_root_clk_en1_set, 28, 0, 0, &root1_gate_lock }, - { 45, "gnssm_gmac", "gmac_mux", 0, sirfsoc_clkc_root_clk_en1_set, 29, 0, 0, &root1_gate_lock }, - { 46, "cvd_io", "audmscm_io", 0, sirfsoc_clkc_leaf_clk_en1_set, 0, clk_unit_noc_clock, 4, &leaf1_gate_lock }, - { 47, "timer_io", "audmscm_io", 0, sirfsoc_clkc_leaf_clk_en1_set, 1, 0, 0, &leaf1_gate_lock }, - { 48, "pulse_io", "audmscm_io", 0, sirfsoc_clkc_leaf_clk_en1_set, 2, 0, 0, &leaf1_gate_lock }, - { 49, "tsc_io", "audmscm_io", 0, sirfsoc_clkc_leaf_clk_en1_set, 3, 0, 0, &leaf1_gate_lock }, - { 50, "tsc_xin", "audmscm_xin", 0, sirfsoc_clkc_leaf_clk_en1_set, 21, 0, 0, &leaf1_gate_lock }, - { 51, "ioctop_io", "audmscm_io", 0, sirfsoc_clkc_leaf_clk_en1_set, 4, 0, 0, &leaf1_gate_lock }, - { 52, "rsc_io", 
"audmscm_io", 0, sirfsoc_clkc_leaf_clk_en1_set, 5, 0, 0, &leaf1_gate_lock }, - { 53, "dvm_io", "audmscm_io", 0, sirfsoc_clkc_leaf_clk_en1_set, 6, clk_unit_noc_socket, 7, &leaf1_gate_lock }, - { 54, "lvds_xin", "audmscm_xin", 0, sirfsoc_clkc_leaf_clk_en1_set, 7, clk_unit_noc_socket, 8, &leaf1_gate_lock }, - { 55, "kas_kas", "audmscm_kas", 0, sirfsoc_clkc_leaf_clk_en1_set, 8, clk_unit_noc_clock, 2, &leaf1_gate_lock }, - { 56, "ac97_kas", "audmscm_kas", 0, sirfsoc_clkc_leaf_clk_en1_set, 9, 0, 0, &leaf1_gate_lock }, - { 57, "usp0_kas", "audmscm_kas", 0, sirfsoc_clkc_leaf_clk_en1_set, 10, clk_unit_noc_socket, 4, &leaf1_gate_lock }, - { 58, "usp1_kas", "audmscm_kas", 0, sirfsoc_clkc_leaf_clk_en1_set, 11, clk_unit_noc_socket, 5, &leaf1_gate_lock }, - { 59, "usp2_kas", "audmscm_kas", 0, sirfsoc_clkc_leaf_clk_en1_set, 12, clk_unit_noc_socket, 6, &leaf1_gate_lock }, - { 60, "dmac2_kas", "audmscm_kas", 0, sirfsoc_clkc_leaf_clk_en1_set, 13, clk_unit_noc_socket, 1, &leaf1_gate_lock }, - { 61, "dmac3_kas", "audmscm_kas", 0, sirfsoc_clkc_leaf_clk_en1_set, 14, clk_unit_noc_socket, 2, &leaf1_gate_lock }, - { 62, "audioif_kas", "audmscm_kas", 0, sirfsoc_clkc_leaf_clk_en1_set, 15, clk_unit_noc_socket, 0, &leaf1_gate_lock }, - { 63, "i2s1_kas", "audmscm_kas", 0, sirfsoc_clkc_leaf_clk_en1_set, 17, clk_unit_noc_clock, 2, &leaf1_gate_lock }, - { 64, "thaudmscm_io", "audmscm_io", 0, sirfsoc_clkc_leaf_clk_en1_set, 22, 0, 0, &leaf1_gate_lock }, - { 65, "analogtest_xin", "audmscm_xin", 0, sirfsoc_clkc_leaf_clk_en1_set, 23, 0, 0, &leaf1_gate_lock }, - { 66, "sys2pci_io", "vdifm_io", 0, sirfsoc_clkc_leaf_clk_en2_set, 0, clk_unit_noc_clock, 20, &leaf2_gate_lock }, - { 67, "pciarb_io", "vdifm_io", 0, sirfsoc_clkc_leaf_clk_en2_set, 1, 0, 0, &leaf2_gate_lock }, - { 68, "pcicopy_io", "vdifm_io", 0, sirfsoc_clkc_leaf_clk_en2_set, 2, 0, 0, &leaf2_gate_lock }, - { 69, "rom_io", "vdifm_io", 0, sirfsoc_clkc_leaf_clk_en2_set, 3, 0, 0, &leaf2_gate_lock }, - { 70, "sdio23_io", "vdifm_io", 0, 
sirfsoc_clkc_leaf_clk_en2_set, 4, 0, 0, &leaf2_gate_lock }, - { 71, "sdio45_io", "vdifm_io", 0, sirfsoc_clkc_leaf_clk_en2_set, 5, 0, 0, &leaf2_gate_lock }, - { 72, "sdio67_io", "vdifm_io", 0, sirfsoc_clkc_leaf_clk_en2_set, 6, 0, 0, &leaf2_gate_lock }, - { 73, "vip1_io", "vdifm_io", 0, sirfsoc_clkc_leaf_clk_en2_set, 7, 0, 0, &leaf2_gate_lock }, - { 74, "vip1_vip", "vdifm_vip", 0, sirfsoc_clkc_leaf_clk_en2_set, 16, clk_unit_noc_clock, 21, &leaf2_gate_lock }, - { 75, "sdio23_sdphy23", "vdifm_sdphy23", 0, sirfsoc_clkc_leaf_clk_en2_set, 8, 0, 0, &leaf2_gate_lock }, - { 76, "sdio45_sdphy45", "vdifm_sdphy45", 0, sirfsoc_clkc_leaf_clk_en2_set, 9, 0, 0, &leaf2_gate_lock }, - { 77, "sdio67_sdphy67", "vdifm_sdphy67", 0, sirfsoc_clkc_leaf_clk_en2_set, 10, 0, 0, &leaf2_gate_lock }, - { 78, "vpp0_disp0", "vdifm_disp0", 0, sirfsoc_clkc_leaf_clk_en2_set, 11, clk_unit_noc_clock, 22, &leaf2_gate_lock }, - { 79, "lcd0_disp0", "vdifm_disp0", 0, sirfsoc_clkc_leaf_clk_en2_set, 12, clk_unit_noc_clock, 18, &leaf2_gate_lock }, - { 80, "vpp1_disp1", "vdifm_disp1", 0, sirfsoc_clkc_leaf_clk_en2_set, 13, clk_unit_noc_clock, 23, &leaf2_gate_lock }, - { 81, "lcd1_disp1", "vdifm_disp1", 0, sirfsoc_clkc_leaf_clk_en2_set, 14, clk_unit_noc_clock, 19, &leaf2_gate_lock }, - { 82, "dcu_deint", "vdifm_deint", 0, sirfsoc_clkc_leaf_clk_en2_set, 15, clk_unit_noc_clock, 17, &leaf2_gate_lock }, - { 83, "vdifm_dapa_r_nocr", "vdifm_nocr", 0, sirfsoc_clkc_leaf_clk_en2_set, 17, 0, 0, &leaf2_gate_lock }, - { 84, "gpio1_io", "vdifm_io", 0, sirfsoc_clkc_leaf_clk_en2_set, 18, 0, 0, &leaf2_gate_lock }, - { 85, "thvdifm_io", "vdifm_io", 0, sirfsoc_clkc_leaf_clk_en2_set, 19, 0, 0, &leaf2_gate_lock }, - { 86, "gmac_rgmii", "gnssm_rgmii", 0, sirfsoc_clkc_leaf_clk_en3_set, 0, 0, 0, &leaf3_gate_lock }, - { 87, "gmac_gmac", "gnssm_gmac", 0, sirfsoc_clkc_leaf_clk_en3_set, 1, clk_unit_noc_clock, 10, &leaf3_gate_lock }, - { 88, "uart1_io", "gnssm_io", 0, sirfsoc_clkc_leaf_clk_en3_set, 2, clk_unit_noc_socket, 14, 
&leaf3_gate_lock }, - { 89, "dmac0_io", "gnssm_io", 0, sirfsoc_clkc_leaf_clk_en3_set, 3, clk_unit_noc_socket, 11, &leaf3_gate_lock }, - { 90, "uart0_io", "gnssm_io", 0, sirfsoc_clkc_leaf_clk_en3_set, 4, clk_unit_noc_socket, 13, &leaf3_gate_lock }, - { 91, "uart2_io", "gnssm_io", 0, sirfsoc_clkc_leaf_clk_en3_set, 5, clk_unit_noc_socket, 15, &leaf3_gate_lock }, - { 92, "uart3_io", "gnssm_io", 0, sirfsoc_clkc_leaf_clk_en3_set, 6, clk_unit_noc_socket, 16, &leaf3_gate_lock }, - { 93, "uart4_io", "gnssm_io", 0, sirfsoc_clkc_leaf_clk_en3_set, 7, clk_unit_noc_socket, 17, &leaf3_gate_lock }, - { 94, "uart5_io", "gnssm_io", 0, sirfsoc_clkc_leaf_clk_en3_set, 8, clk_unit_noc_socket, 18, &leaf3_gate_lock }, - { 95, "spi1_io", "gnssm_io", 0, sirfsoc_clkc_leaf_clk_en3_set, 9, clk_unit_noc_socket, 12, &leaf3_gate_lock }, - { 96, "gnss_gnss", "gnssm_gnss", 0, sirfsoc_clkc_leaf_clk_en3_set, 10, 0, 0, &leaf3_gate_lock }, - { 97, "canbus1_can", "gnssm_can", 0, sirfsoc_clkc_leaf_clk_en3_set, 12, clk_unit_noc_clock, 7, &leaf3_gate_lock }, - { 98, "ccsec_sec", "gnssm_sec", 0, sirfsoc_clkc_leaf_clk_en3_set, 15, clk_unit_noc_clock, 9, &leaf3_gate_lock }, - { 99, "ccpub_sec", "gnssm_sec", 0, sirfsoc_clkc_leaf_clk_en3_set, 16, clk_unit_noc_clock, 8, &leaf3_gate_lock }, - { 100, "gnssm_dapa_r_nocr", "gnssm_nocr", 0, sirfsoc_clkc_leaf_clk_en3_set, 13, 0, 0, &leaf3_gate_lock }, - { 101, "thgnssm_io", "gnssm_io", 0, sirfsoc_clkc_leaf_clk_en3_set, 14, 0, 0, &leaf3_gate_lock }, - { 102, "media_vdec", "mediam_vdec", 0, sirfsoc_clkc_leaf_clk_en4_set, 0, clk_unit_noc_clock, 3, &leaf4_gate_lock }, - { 103, "media_jpenc", "mediam_jpenc", 0, sirfsoc_clkc_leaf_clk_en4_set, 1, clk_unit_noc_clock, 1, &leaf4_gate_lock }, - { 104, "g2d_g2d", "mediam_g2d", 0, sirfsoc_clkc_leaf_clk_en4_set, 2, clk_unit_noc_clock, 12, &leaf4_gate_lock }, - { 105, "i2c0_io", "mediam_io", 0, sirfsoc_clkc_leaf_clk_en4_set, 3, clk_unit_noc_socket, 21, &leaf4_gate_lock }, - { 106, "i2c1_io", "mediam_io", 0, 
sirfsoc_clkc_leaf_clk_en4_set, 4, clk_unit_noc_socket, 20, &leaf4_gate_lock }, - { 107, "gpio0_io", "mediam_io", 0, sirfsoc_clkc_leaf_clk_en4_set, 5, clk_unit_noc_socket, 19, &leaf4_gate_lock }, - { 108, "nand_io", "mediam_io", 0, sirfsoc_clkc_leaf_clk_en4_set, 6, 0, 0, &leaf4_gate_lock }, - { 109, "sdio01_io", "mediam_io", 0, sirfsoc_clkc_leaf_clk_en4_set, 7, 0, 0, &leaf4_gate_lock }, - { 110, "sys2pci2_io", "mediam_io", 0, sirfsoc_clkc_leaf_clk_en4_set, 8, clk_unit_noc_clock, 13, &leaf4_gate_lock }, - { 111, "sdio01_sdphy01", "mediam_sdphy01", 0, sirfsoc_clkc_leaf_clk_en4_set, 9, 0, 0, &leaf4_gate_lock }, - { 112, "nand_nand", "mediam_nand", 0, sirfsoc_clkc_leaf_clk_en4_set, 10, clk_unit_noc_clock, 14, &leaf4_gate_lock }, - { 113, "usb0_usb", "mediam_usb", 0, sirfsoc_clkc_leaf_clk_en4_set, 11, clk_unit_noc_clock, 15, &leaf4_gate_lock }, - { 114, "usb1_usb", "mediam_usb", 0, sirfsoc_clkc_leaf_clk_en4_set, 12, clk_unit_noc_clock, 16, &leaf4_gate_lock }, - { 115, "usbphy0_usbphy", "mediam_usbphy", 0, sirfsoc_clkc_leaf_clk_en4_set, 13, 0, 0, &leaf4_gate_lock }, - { 116, "usbphy1_usbphy", "mediam_usbphy", 0, sirfsoc_clkc_leaf_clk_en4_set, 14, 0, 0, &leaf4_gate_lock }, - { 117, "thmediam_io", "mediam_io", 0, sirfsoc_clkc_leaf_clk_en4_set, 15, 0, 0, &leaf4_gate_lock }, - { 118, "memc_mem", "mempll_clk1", clk_ignore_unused, sirfsoc_clkc_leaf_clk_en5_set, 0, 0, 0, &leaf5_gate_lock }, - { 119, "dapa_mem", "mempll_clk1", 0, sirfsoc_clkc_leaf_clk_en5_set, 1, 0, 0, &leaf5_gate_lock }, - { 120, "nocddrm_nocr", "ddrm_nocr", 0, sirfsoc_clkc_leaf_clk_en5_set, 2, 0, 0, &leaf5_gate_lock }, - { 121, "thddrm_nocr", "ddrm_nocr", 0, sirfsoc_clkc_leaf_clk_en5_set, 3, 0, 0, &leaf5_gate_lock }, - { 122, "spram1_cpudiv2", "cpum_cpu", 0, sirfsoc_clkc_leaf_clk_en6_set, 0, clk_unit_noc_socket, 9, &leaf6_gate_lock }, - { 123, "spram2_cpudiv2", "cpum_cpu", 0, sirfsoc_clkc_leaf_clk_en6_set, 1, clk_unit_noc_socket, 10, &leaf6_gate_lock }, - { 124, "coresight_cpudiv2", "cpum_cpu", 0, 
sirfsoc_clkc_leaf_clk_en6_set, 2, 0, 0, &leaf6_gate_lock }, - { 125, "coresight_tpiu", "cpum_tpiu", 0, sirfsoc_clkc_leaf_clk_en6_set, 3, 0, 0, &leaf6_gate_lock }, - { 126, "graphic_gpu", "gpum_gpu", 0, sirfsoc_clkc_leaf_clk_en7_set, 0, clk_unit_noc_clock, 0, &leaf7_gate_lock }, - { 127, "vss_sdr", "gpum_sdr", 0, sirfsoc_clkc_leaf_clk_en7_set, 1, clk_unit_noc_clock, 11, &leaf7_gate_lock }, - { 128, "thgpum_nocr", "gpum_nocr", 0, sirfsoc_clkc_leaf_clk_en7_set, 2, 0, 0, &leaf7_gate_lock }, - { 129, "a7ca_btss", "btm_btss", 0, sirfsoc_clkc_leaf_clk_en8_set, 1, 0, 0, &leaf8_gate_lock }, - { 130, "dmac4_io", "a7ca_io", 0, sirfsoc_clkc_leaf_clk_en8_set, 2, 0, 0, &leaf8_gate_lock }, - { 131, "uart6_io", "dmac4_io", 0, sirfsoc_clkc_leaf_clk_en8_set, 3, 0, 0, &leaf8_gate_lock }, - { 132, "usp3_io", "dmac4_io", 0, sirfsoc_clkc_leaf_clk_en8_set, 4, 0, 0, &leaf8_gate_lock }, - { 133, "a7ca_io", "noc_btm_io", 0, sirfsoc_clkc_leaf_clk_en8_set, 5, 0, 0, &leaf8_gate_lock }, - { 134, "noc_btm_io", "btm_io", 0, sirfsoc_clkc_leaf_clk_en8_set, 6, 0, 0, &leaf8_gate_lock }, - { 135, "thbtm_io", "btm_io", 0, sirfsoc_clkc_leaf_clk_en8_set, 7, 0, 0, &leaf8_gate_lock }, - { 136, "btslow", "xinw_fixdiv_btslow", 0, sirfsoc_clkc_root_clk_en1_set, 25, 0, 0, &root1_gate_lock }, - { 137, "a7ca_btslow", "btslow", 0, sirfsoc_clkc_leaf_clk_en8_set, 0, 0, 0, &leaf8_gate_lock }, - { 138, "pwm_io", "io_mux", 0, sirfsoc_clkc_leaf_clk_en0_set, 0, 0, 0, &leaf0_gate_lock }, - { 139, "pwm_xin", "xin", 0, sirfsoc_clkc_leaf_clk_en0_set, 1, 0, 0, &leaf0_gate_lock }, - { 140, "pwm_xinw", "xinw", 0, sirfsoc_clkc_leaf_clk_en0_set, 2, 0, 0, &leaf0_gate_lock }, - { 141, "thcgum_sys", "sys_mux", 0, sirfsoc_clkc_leaf_clk_en0_set, 3, 0, 0, &leaf0_gate_lock }, -}; - -static struct clk *atlas7_clks[array_size(unit_list) + array_size(mux_list)]; - -static int unit_clk_is_enabled(struct clk_hw *hw) -{ - struct clk_unit *clk = to_unitclk(hw); - u32 reg; - - reg = clk->regofs + sirfsoc_clkc_root_clk_en0_stat - 
SIRFSOC_CLKC_ROOT_CLK_EN0_SET;
-
-	return !!(clkc_readl(reg) & BIT(clk->bit));
-}
-
-static int unit_clk_enable(struct clk_hw *hw)
-{
-	u32 reg;
-	struct clk_unit *clk = to_unitclk(hw);
-	unsigned long flags;
-
-	reg = clk->regofs;
-
-	spin_lock_irqsave(clk->lock, flags);
-	clkc_writel(BIT(clk->bit), reg);
-	if (clk->type == CLK_UNIT_NOC_CLOCK)
-		clkc_writel(BIT(clk->idle_bit), SIRFSOC_NOC_CLK_IDLEREQ_CLR);
-	else if (clk->type == CLK_UNIT_NOC_SOCKET)
-		clkc_writel(BIT(clk->idle_bit), SIRFSOC_NOC_CLK_SLVRDY_SET);
-
-	spin_unlock_irqrestore(clk->lock, flags);
-	return 0;
-}
-
-static void unit_clk_disable(struct clk_hw *hw)
-{
-	u32 reg;
-	u32 i = 0;
-	struct clk_unit *clk = to_unitclk(hw);
-	unsigned long flags;
-
-	reg = clk->regofs + SIRFSOC_CLKC_ROOT_CLK_EN0_CLR -
-		SIRFSOC_CLKC_ROOT_CLK_EN0_SET;
-	spin_lock_irqsave(clk->lock, flags);
-	if (clk->type == CLK_UNIT_NOC_CLOCK) {
-		clkc_writel(BIT(clk->idle_bit), SIRFSOC_NOC_CLK_IDLEREQ_SET);
-		while (!(clkc_readl(SIRFSOC_NOC_CLK_IDLE_STATUS) &
-			BIT(clk->idle_bit)) && (i++ < 100)) {
-			cpu_relax();
-			udelay(10);
-		}
-
-		if (i == 100) {
-			pr_err("unit NOC Clock disconnect Error:timeout\n");
-			/* once timeout, undo idlereq by CLR */
-			clkc_writel(BIT(clk->idle_bit), SIRFSOC_NOC_CLK_IDLEREQ_CLR);
-			goto err;
-		}
-
-	} else if (clk->type == CLK_UNIT_NOC_SOCKET)
-		clkc_writel(BIT(clk->idle_bit), SIRFSOC_NOC_CLK_SLVRDY_CLR);
-
-	clkc_writel(BIT(clk->bit), reg);
-err:
-	spin_unlock_irqrestore(clk->lock, flags);
-}
-
-static const struct clk_ops unit_clk_ops = {
-	.is_enabled = unit_clk_is_enabled,
-	.enable = unit_clk_enable,
-	.disable = unit_clk_disable,
-};
-
-static struct clk * __init
-atlas7_unit_clk_register(struct device *dev, const char *name,
-		const char * const parent_name, unsigned long flags,
-		u32 regofs, u8 bit, u32 type, u8 idle_bit, spinlock_t *lock)
-{
-	struct clk *clk;
-	struct clk_unit *unit;
-	struct clk_init_data init;
-
-	unit = kzalloc(sizeof(*unit), GFP_KERNEL);
-	if (!unit)
-		return ERR_PTR(-ENOMEM);
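The unit_clk_enable/unit_clk_disable pair above couples the gate write with a NOC handshake: disable first raises an idle request and polls the idle status register (up to 100 tries with udelay(10)) before dropping the gate. The sketch below is a minimal host-side model of that flow; the `fake_regs` register file and its instant-acknowledge behavior are made up for illustration and are not the driver's MMIO layout.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical register file standing in for the clock-controller MMIO. */
enum { REG_EN_SET, REG_EN_CLR, REG_IDLEREQ_SET, REG_IDLE_STATUS, NREGS };
static uint32_t fake_regs[NREGS];

/* Model: an idle request is acknowledged instantly in the status register. */
static void fake_write(uint32_t val, int reg)
{
	fake_regs[reg] |= val;
	if (reg == REG_IDLEREQ_SET)
		fake_regs[REG_IDLE_STATUS] |= val;	/* NOC acks at once here */
	if (reg == REG_EN_CLR)
		fake_regs[REG_EN_SET] &= ~val;		/* clear drops the gate bit */
}

/* Mirror of the disable flow: request idle, poll with a bound, then gate. */
static int model_unit_disable(int bit, int idle_bit)
{
	int i;

	fake_write(1u << idle_bit, REG_IDLEREQ_SET);
	for (i = 0; i < 100; i++) {	/* the driver adds cpu_relax()/udelay(10) */
		if (fake_regs[REG_IDLE_STATUS] & (1u << idle_bit))
			break;
	}
	if (i == 100)
		return -1;		/* timeout: the driver undoes the request */
	fake_write(1u << bit, REG_EN_CLR);
	return 0;
}
```

Because the model acknowledges immediately, the poll exits on the first pass; on real hardware the bound exists to avoid spinning forever on a wedged NOC.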
- - init.name = name; - init.parent_names = &parent_name; - init.num_parents = 1; - init.ops = &unit_clk_ops; - init.flags = flags; - - unit->hw.init = &init; - unit->regofs = regofs; - unit->bit = bit; - - unit->type = type; - unit->idle_bit = idle_bit; - unit->lock = lock; - - clk = clk_register(dev, &unit->hw); - if (is_err(clk)) - kfree(unit); - - return clk; -} - -static struct atlas7_reset_desc atlas7_reset_unit[] = { - { "pwm", 0x0244, 0, 0x0320, 0, &leaf0_gate_lock }, /* 0-5 */ - { "thcgum", 0x0244, 3, 0x0320, 1, &leaf0_gate_lock }, - { "cvd", 0x04a0, 0, 0x032c, 0, &leaf1_gate_lock }, - { "timer", 0x04a0, 1, 0x032c, 1, &leaf1_gate_lock }, - { "pulsec", 0x04a0, 2, 0x032c, 2, &leaf1_gate_lock }, - { "tsc", 0x04a0, 3, 0x032c, 3, &leaf1_gate_lock }, - { "ioctop", 0x04a0, 4, 0x032c, 4, &leaf1_gate_lock }, /* 6-10 */ - { "rsc", 0x04a0, 5, 0x032c, 5, &leaf1_gate_lock }, - { "dvm", 0x04a0, 6, 0x032c, 6, &leaf1_gate_lock }, - { "lvds", 0x04a0, 7, 0x032c, 7, &leaf1_gate_lock }, - { "kas", 0x04a0, 8, 0x032c, 8, &leaf1_gate_lock }, - { "ac97", 0x04a0, 9, 0x032c, 9, &leaf1_gate_lock }, /* 11-15 */ - { "usp0", 0x04a0, 10, 0x032c, 10, &leaf1_gate_lock }, - { "usp1", 0x04a0, 11, 0x032c, 11, &leaf1_gate_lock }, - { "usp2", 0x04a0, 12, 0x032c, 12, &leaf1_gate_lock }, - { "dmac2", 0x04a0, 13, 0x032c, 13, &leaf1_gate_lock }, - { "dmac3", 0x04a0, 14, 0x032c, 14, &leaf1_gate_lock }, /* 16-20 */ - { "audio", 0x04a0, 15, 0x032c, 15, &leaf1_gate_lock }, - { "i2s1", 0x04a0, 17, 0x032c, 16, &leaf1_gate_lock }, - { "pmu_audio", 0x04a0, 22, 0x032c, 17, &leaf1_gate_lock }, - { "thaudmscm", 0x04a0, 23, 0x032c, 18, &leaf1_gate_lock }, - { "sys2pci", 0x04b8, 0, 0x0338, 0, &leaf2_gate_lock }, /* 21-25 */ - { "pciarb", 0x04b8, 1, 0x0338, 1, &leaf2_gate_lock }, - { "pcicopy", 0x04b8, 2, 0x0338, 2, &leaf2_gate_lock }, - { "rom", 0x04b8, 3, 0x0338, 3, &leaf2_gate_lock }, - { "sdio23", 0x04b8, 4, 0x0338, 4, &leaf2_gate_lock }, - { "sdio45", 0x04b8, 5, 0x0338, 5, &leaf2_gate_lock }, /* 26-30 */ - 
{ "sdio67", 0x04b8, 6, 0x0338, 6, &leaf2_gate_lock }, - { "vip1", 0x04b8, 7, 0x0338, 7, &leaf2_gate_lock }, - { "vpp0", 0x04b8, 11, 0x0338, 8, &leaf2_gate_lock }, - { "lcd0", 0x04b8, 12, 0x0338, 9, &leaf2_gate_lock }, - { "vpp1", 0x04b8, 13, 0x0338, 10, &leaf2_gate_lock }, /* 31-35 */ - { "lcd1", 0x04b8, 14, 0x0338, 11, &leaf2_gate_lock }, - { "dcu", 0x04b8, 15, 0x0338, 12, &leaf2_gate_lock }, - { "gpio", 0x04b8, 18, 0x0338, 13, &leaf2_gate_lock }, - { "dapa_vdifm", 0x04b8, 17, 0x0338, 15, &leaf2_gate_lock }, - { "thvdifm", 0x04b8, 19, 0x0338, 16, &leaf2_gate_lock }, /* 36-40 */ - { "rgmii", 0x04d0, 0, 0x0344, 0, &leaf3_gate_lock }, - { "gmac", 0x04d0, 1, 0x0344, 1, &leaf3_gate_lock }, - { "uart1", 0x04d0, 2, 0x0344, 2, &leaf3_gate_lock }, - { "dmac0", 0x04d0, 3, 0x0344, 3, &leaf3_gate_lock }, - { "uart0", 0x04d0, 4, 0x0344, 4, &leaf3_gate_lock }, /* 41-45 */ - { "uart2", 0x04d0, 5, 0x0344, 5, &leaf3_gate_lock }, - { "uart3", 0x04d0, 6, 0x0344, 6, &leaf3_gate_lock }, - { "uart4", 0x04d0, 7, 0x0344, 7, &leaf3_gate_lock }, - { "uart5", 0x04d0, 8, 0x0344, 8, &leaf3_gate_lock }, - { "spi1", 0x04d0, 9, 0x0344, 9, &leaf3_gate_lock }, /* 46-50 */ - { "gnss_sys_m0", 0x04d0, 10, 0x0344, 10, &leaf3_gate_lock }, - { "canbus1", 0x04d0, 12, 0x0344, 11, &leaf3_gate_lock }, - { "ccsec", 0x04d0, 15, 0x0344, 12, &leaf3_gate_lock }, - { "ccpub", 0x04d0, 16, 0x0344, 13, &leaf3_gate_lock }, - { "dapa_gnssm", 0x04d0, 13, 0x0344, 14, &leaf3_gate_lock }, /* 51-55 */ - { "thgnssm", 0x04d0, 14, 0x0344, 15, &leaf3_gate_lock }, - { "vdec", 0x04e8, 0, 0x0350, 0, &leaf4_gate_lock }, - { "jpenc", 0x04e8, 1, 0x0350, 1, &leaf4_gate_lock }, - { "g2d", 0x04e8, 2, 0x0350, 2, &leaf4_gate_lock }, - { "i2c0", 0x04e8, 3, 0x0350, 3, &leaf4_gate_lock }, /* 56-60 */ - { "i2c1", 0x04e8, 4, 0x0350, 4, &leaf4_gate_lock }, - { "gpio0", 0x04e8, 5, 0x0350, 5, &leaf4_gate_lock }, - { "nand", 0x04e8, 6, 0x0350, 6, &leaf4_gate_lock }, - { "sdio01", 0x04e8, 7, 0x0350, 7, &leaf4_gate_lock }, - { "sys2pci2", 0x04e8, 
8, 0x0350, 8, &leaf4_gate_lock }, /* 61-65 */
-	{ "usb0", 0x04e8, 11, 0x0350, 9, &leaf4_gate_lock },
-	{ "usb1", 0x04e8, 12, 0x0350, 10, &leaf4_gate_lock },
-	{ "thmediam", 0x04e8, 15, 0x0350, 11, &leaf4_gate_lock },
-	{ "memc_ddrphy", 0x0500, 0, 0x035c, 0, &leaf5_gate_lock },
-	{ "memc_upctl", 0x0500, 0, 0x035c, 1, &leaf5_gate_lock }, /* 66-70 */
-	{ "dapa_mem", 0x0500, 1, 0x035c, 2, &leaf5_gate_lock },
-	{ "memc_memdiv", 0x0500, 0, 0x035c, 3, &leaf5_gate_lock },
-	{ "thddrm", 0x0500, 3, 0x035c, 4, &leaf5_gate_lock },
-	{ "coresight", 0x0518, 3, 0x0368, 13, &leaf6_gate_lock },
-	{ "thcpum", 0x0518, 4, 0x0368, 17, &leaf6_gate_lock }, /* 71-75 */
-	{ "graphic", 0x0530, 0, 0x0374, 0, &leaf7_gate_lock },
-	{ "vss_sdr", 0x0530, 1, 0x0374, 1, &leaf7_gate_lock },
-	{ "thgpum", 0x0530, 2, 0x0374, 2, &leaf7_gate_lock },
-	{ "dmac4", 0x0548, 2, 0x0380, 1, &leaf8_gate_lock },
-	{ "uart6", 0x0548, 3, 0x0380, 2, &leaf8_gate_lock }, /* 76- */
-	{ "usp3", 0x0548, 4, 0x0380, 3, &leaf8_gate_lock },
-	{ "thbtm", 0x0548, 5, 0x0380, 5, &leaf8_gate_lock },
-	{ "a7ca", 0x0548, 1, 0x0380, 0, &leaf8_gate_lock },
-	{ "a7ca_apb", 0x0548, 5, 0x0380, 4, &leaf8_gate_lock },
-};
-
-static int atlas7_reset_module(struct reset_controller_dev *rcdev,
-					unsigned long reset_idx)
-{
-	struct atlas7_reset_desc *reset = &atlas7_reset_unit[reset_idx];
-	unsigned long flags;
-
-	/*
-	 * HW suggested unit reset sequence:
-	 * assert sw reset (0)
-	 * set sw clk_en to 1 if the clock was disabled before reset
-	 * delay 16 clocks
-	 * disable clock (sw clk_en = 0)
-	 * de-assert reset (1)
-	 * after this sequence, restoring the clock or not is decided by SW
-	 */
-
-	spin_lock_irqsave(reset->lock, flags);
-	/* clock enabled or not */
-	if (clkc_readl(reset->clk_ofs + 8) & (1 << reset->clk_bit)) {
-		clkc_writel(1 << reset->rst_bit, reset->rst_ofs + 4);
-		udelay(2);
-		clkc_writel(1 << reset->clk_bit, reset->clk_ofs + 4);
-		clkc_writel(1 << reset->rst_bit, reset->rst_ofs);
-		/* restore clock enable */
-		clkc_writel(1 <<
reset->clk_bit, reset->clk_ofs); - } else { - clkc_writel(1 << reset->rst_bit, reset->rst_ofs + 4); - clkc_writel(1 << reset->clk_bit, reset->clk_ofs); - udelay(2); - clkc_writel(1 << reset->clk_bit, reset->clk_ofs + 4); - clkc_writel(1 << reset->rst_bit, reset->rst_ofs); - } - spin_unlock_irqrestore(reset->lock, flags); - - return 0; -} - -static const struct reset_control_ops atlas7_rst_ops = { - .reset = atlas7_reset_module, -}; - -static struct reset_controller_dev atlas7_rst_ctlr = { - .ops = &atlas7_rst_ops, - .owner = this_module, - .of_reset_n_cells = 1, -}; - -static void __init atlas7_clk_init(struct device_node *np) -{ - struct clk *clk; - struct atlas7_div_init_data *div; - struct atlas7_mux_init_data *mux; - struct atlas7_unit_init_data *unit; - int i; - int ret; - - sirfsoc_clk_vbase = of_iomap(np, 0); - if (!sirfsoc_clk_vbase) - panic("unable to map clkc registers "); - - of_node_put(np); - - clk = clk_register(null, &clk_cpupll.hw); - bug_on(!clk); - clk = clk_register(null, &clk_mempll.hw); - bug_on(!clk); - clk = clk_register(null, &clk_sys0pll.hw); - bug_on(!clk); - clk = clk_register(null, &clk_sys1pll.hw); - bug_on(!clk); - clk = clk_register(null, &clk_sys2pll.hw); - bug_on(!clk); - clk = clk_register(null, &clk_sys3pll.hw); - bug_on(!clk); - - clk = clk_register_divider_table(null, "cpupll_div1", "cpupll_vco", 0, - sirfsoc_clk_vbase + sirfsoc_clkc_cpupll_ab_ctrl1, 0, 3, 0, - pll_div_table, &cpupll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_divider_table(null, "cpupll_div2", "cpupll_vco", 0, - sirfsoc_clk_vbase + sirfsoc_clkc_cpupll_ab_ctrl1, 4, 3, 0, - pll_div_table, &cpupll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_divider_table(null, "cpupll_div3", "cpupll_vco", 0, - sirfsoc_clk_vbase + sirfsoc_clkc_cpupll_ab_ctrl1, 8, 3, 0, - pll_div_table, &cpupll_ctrl1_lock); - bug_on(!clk); - - clk = clk_register_divider_table(null, "mempll_div1", "mempll_vco", 0, - sirfsoc_clk_vbase + sirfsoc_clkc_mempll_ab_ctrl1, 0, 3, 0, - 
pll_div_table, &mempll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_divider_table(null, "mempll_div2", "mempll_vco", 0, - sirfsoc_clk_vbase + sirfsoc_clkc_mempll_ab_ctrl1, 4, 3, 0, - pll_div_table, &mempll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_divider_table(null, "mempll_div3", "mempll_vco", 0, - sirfsoc_clk_vbase + sirfsoc_clkc_mempll_ab_ctrl1, 8, 3, 0, - pll_div_table, &mempll_ctrl1_lock); - bug_on(!clk); - - clk = clk_register_divider_table(null, "sys0pll_div1", "sys0pll_vco", 0, - sirfsoc_clk_vbase + sirfsoc_clkc_sys0pll_ab_ctrl1, 0, 3, 0, - pll_div_table, &sys0pll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_divider_table(null, "sys0pll_div2", "sys0pll_vco", 0, - sirfsoc_clk_vbase + sirfsoc_clkc_sys0pll_ab_ctrl1, 4, 3, 0, - pll_div_table, &sys0pll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_divider_table(null, "sys0pll_div3", "sys0pll_vco", 0, - sirfsoc_clk_vbase + sirfsoc_clkc_sys0pll_ab_ctrl1, 8, 3, 0, - pll_div_table, &sys0pll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_fixed_factor(null, "sys0pll_fixdiv", "sys0pll_vco", - clk_set_rate_parent, 1, 2); - - clk = clk_register_divider_table(null, "sys1pll_div1", "sys1pll_vco", 0, - sirfsoc_clk_vbase + sirfsoc_clkc_sys1pll_ab_ctrl1, 0, 3, 0, - pll_div_table, &sys1pll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_divider_table(null, "sys1pll_div2", "sys1pll_vco", 0, - sirfsoc_clk_vbase + sirfsoc_clkc_sys1pll_ab_ctrl1, 4, 3, 0, - pll_div_table, &sys1pll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_divider_table(null, "sys1pll_div3", "sys1pll_vco", 0, - sirfsoc_clk_vbase + sirfsoc_clkc_sys1pll_ab_ctrl1, 8, 3, 0, - pll_div_table, &sys1pll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_fixed_factor(null, "sys1pll_fixdiv", "sys1pll_vco", - clk_set_rate_parent, 1, 2); - - clk = clk_register_divider_table(null, "sys2pll_div1", "sys2pll_vco", 0, - sirfsoc_clk_vbase + sirfsoc_clkc_sys2pll_ab_ctrl1, 0, 3, 0, - pll_div_table, &sys2pll_ctrl1_lock); - bug_on(!clk); - clk = 
clk_register_divider_table(null, "sys2pll_div2", "sys2pll_vco", 0, - sirfsoc_clk_vbase + sirfsoc_clkc_sys2pll_ab_ctrl1, 4, 3, 0, - pll_div_table, &sys2pll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_divider_table(null, "sys2pll_div3", "sys2pll_vco", 0, - sirfsoc_clk_vbase + sirfsoc_clkc_sys2pll_ab_ctrl1, 8, 3, 0, - pll_div_table, &sys2pll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_fixed_factor(null, "sys2pll_fixdiv", "sys2pll_vco", - clk_set_rate_parent, 1, 2); - - clk = clk_register_divider_table(null, "sys3pll_div1", "sys3pll_vco", 0, - sirfsoc_clk_vbase + sirfsoc_clkc_sys3pll_ab_ctrl1, 0, 3, 0, - pll_div_table, &sys3pll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_divider_table(null, "sys3pll_div2", "sys3pll_vco", 0, - sirfsoc_clk_vbase + sirfsoc_clkc_sys3pll_ab_ctrl1, 4, 3, 0, - pll_div_table, &sys3pll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_divider_table(null, "sys3pll_div3", "sys3pll_vco", 0, - sirfsoc_clk_vbase + sirfsoc_clkc_sys3pll_ab_ctrl1, 8, 3, 0, - pll_div_table, &sys3pll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_fixed_factor(null, "sys3pll_fixdiv", "sys3pll_vco", - clk_set_rate_parent, 1, 2); - - bug_on(!clk); - clk = clk_register_fixed_factor(null, "xinw_fixdiv_btslow", "xinw", - clk_set_rate_parent, 1, 4); - - bug_on(!clk); - clk = clk_register_gate(null, "cpupll_clk1", "cpupll_div1", - clk_set_rate_parent, sirfsoc_clk_vbase + sirfsoc_clkc_cpupll_ab_ctrl1, - 12, 0, &cpupll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_gate(null, "cpupll_clk2", "cpupll_div2", - clk_set_rate_parent, sirfsoc_clk_vbase + sirfsoc_clkc_cpupll_ab_ctrl1, - 13, 0, &cpupll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_gate(null, "cpupll_clk3", "cpupll_div3", - clk_set_rate_parent, sirfsoc_clk_vbase + sirfsoc_clkc_cpupll_ab_ctrl1, - 14, 0, &cpupll_ctrl1_lock); - bug_on(!clk); - - clk = clk_register_gate(null, "mempll_clk1", "mempll_div1", - clk_set_rate_parent | clk_ignore_unused, - sirfsoc_clk_vbase + sirfsoc_clkc_mempll_ab_ctrl1, - 
12, 0, &mempll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_gate(null, "mempll_clk2", "mempll_div2", - clk_set_rate_parent, sirfsoc_clk_vbase + sirfsoc_clkc_mempll_ab_ctrl1, - 13, 0, &mempll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_gate(null, "mempll_clk3", "mempll_div3", - clk_set_rate_parent, sirfsoc_clk_vbase + sirfsoc_clkc_mempll_ab_ctrl1, - 14, 0, &mempll_ctrl1_lock); - bug_on(!clk); - - clk = clk_register_gate(null, "sys0pll_clk1", "sys0pll_div1", - clk_set_rate_parent, sirfsoc_clk_vbase + sirfsoc_clkc_sys0pll_ab_ctrl1, - 12, 0, &sys0pll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_gate(null, "sys0pll_clk2", "sys0pll_div2", - clk_set_rate_parent, sirfsoc_clk_vbase + sirfsoc_clkc_sys0pll_ab_ctrl1, - 13, 0, &sys0pll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_gate(null, "sys0pll_clk3", "sys0pll_div3", - clk_set_rate_parent, sirfsoc_clk_vbase + sirfsoc_clkc_sys0pll_ab_ctrl1, - 14, 0, &sys0pll_ctrl1_lock); - bug_on(!clk); - - clk = clk_register_gate(null, "sys1pll_clk1", "sys1pll_div1", - clk_set_rate_parent, sirfsoc_clk_vbase + sirfsoc_clkc_sys1pll_ab_ctrl1, - 12, 0, &sys1pll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_gate(null, "sys1pll_clk2", "sys1pll_div2", - clk_set_rate_parent, sirfsoc_clk_vbase + sirfsoc_clkc_sys1pll_ab_ctrl1, - 13, 0, &sys1pll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_gate(null, "sys1pll_clk3", "sys1pll_div3", - clk_set_rate_parent, sirfsoc_clk_vbase + sirfsoc_clkc_sys1pll_ab_ctrl1, - 14, 0, &sys1pll_ctrl1_lock); - bug_on(!clk); - - clk = clk_register_gate(null, "sys2pll_clk1", "sys2pll_div1", - clk_set_rate_parent, sirfsoc_clk_vbase + sirfsoc_clkc_sys2pll_ab_ctrl1, - 12, 0, &sys2pll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_gate(null, "sys2pll_clk2", "sys2pll_div2", - clk_set_rate_parent, sirfsoc_clk_vbase + sirfsoc_clkc_sys2pll_ab_ctrl1, - 13, 0, &sys2pll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_gate(null, "sys2pll_clk3", "sys2pll_div3", - clk_set_rate_parent, sirfsoc_clk_vbase + 
sirfsoc_clkc_sys2pll_ab_ctrl1, - 14, 0, &sys2pll_ctrl1_lock); - bug_on(!clk); - - clk = clk_register_gate(null, "sys3pll_clk1", "sys3pll_div1", - clk_set_rate_parent, sirfsoc_clk_vbase + sirfsoc_clkc_sys3pll_ab_ctrl1, - 12, 0, &sys3pll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_gate(null, "sys3pll_clk2", "sys3pll_div2", - clk_set_rate_parent, sirfsoc_clk_vbase + sirfsoc_clkc_sys3pll_ab_ctrl1, - 13, 0, &sys3pll_ctrl1_lock); - bug_on(!clk); - clk = clk_register_gate(null, "sys3pll_clk3", "sys3pll_div3", - clk_set_rate_parent, sirfsoc_clk_vbase + sirfsoc_clkc_sys3pll_ab_ctrl1, - 14, 0, &sys3pll_ctrl1_lock); - bug_on(!clk); - - clk = clk_register(null, &clk_audio_dto.hw); - bug_on(!clk); - - clk = clk_register(null, &clk_disp0_dto.hw); - bug_on(!clk); - - clk = clk_register(null, &clk_disp1_dto.hw); - bug_on(!clk); - - for (i = 0; i < array_size(divider_list); i++) { - div = ÷r_list[i]; - clk = clk_register_divider(null, div->div_name, - div->parent_name, div->divider_flags, sirfsoc_clk_vbase + div->div_offset, - div->shift, div->width, 0, div->lock); - bug_on(!clk); - clk = clk_register_gate(null, div->gate_name, div->div_name, - div->gate_flags, sirfsoc_clk_vbase + div->gate_offset, - div->gate_bit, 0, div->lock); - bug_on(!clk); - } - /* ignore selector status register check */ - for (i = 0; i < array_size(mux_list); i++) { - mux = &mux_list[i]; - clk = clk_register_mux(null, mux->mux_name, mux->parent_names, - mux->parent_num, mux->flags, - sirfsoc_clk_vbase + mux->mux_offset, - mux->shift, mux->width, - mux->mux_flags, null); - atlas7_clks[array_size(unit_list) + i] = clk; - bug_on(!clk); - } - - for (i = 0; i < array_size(unit_list); i++) { - unit = &unit_list[i]; - atlas7_clks[i] = atlas7_unit_clk_register(null, unit->unit_name, unit->parent_name, - unit->flags, unit->regofs, unit->bit, unit->type, unit->idle_bit, unit->lock); - bug_on(!atlas7_clks[i]); - } - - clk_data.clks = atlas7_clks; - clk_data.clk_num = array_size(unit_list) + array_size(mux_list); 
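The registration loops above fill atlas7_clks with all unit clocks at indices [0, ARRAY_SIZE(unit_list)) and append the mux clocks after them, and clk_data.clk_num covers both tables, so a flat onecell consumer index decodes back to exactly one of the two. A small sketch of that index math; the table sizes here are illustrative stand-ins, not the driver's real counts.

```c
#include <assert.h>

/* Hypothetical table sizes; the driver uses ARRAY_SIZE(unit_list) etc. */
#define N_UNITS 142
#define N_MUXES 11

enum table { TBL_UNIT, TBL_MUX };

/* Map a flat onecell index to (table, offset-in-table), per the layout:
 * units first, muxes appended after them. */
static int flat_to_table(int idx, enum table *tbl, int *off)
{
	if (idx < 0 || idx >= N_UNITS + N_MUXES)
		return -1;		/* outside the provider's clk_num range */
	if (idx < N_UNITS) {
		*tbl = TBL_UNIT;
		*off = idx;
	} else {
		*tbl = TBL_MUX;
		*off = idx - N_UNITS;
	}
	return 0;
}
```

This is why the mux registration stores into `atlas7_clks[ARRAY_SIZE(unit_list) + i]`: the offset re-bases the mux table after the last unit clock.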
-
-	ret = of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data);
-	BUG_ON(ret);
-
-	atlas7_rst_ctlr.of_node = np;
-	atlas7_rst_ctlr.nr_resets = ARRAY_SIZE(atlas7_reset_unit);
-	reset_controller_register(&atlas7_rst_ctlr);
-}
-CLK_OF_DECLARE(atlas7_clk, "sirf,atlas7-car", atlas7_clk_init);
diff --git a/drivers/clk/sirf/clk-common.c b/drivers/clk/sirf/clk-common.c
--- a/drivers/clk/sirf/clk-common.c
+++ /dev/null
-// SPDX-License-Identifier: GPL-2.0-or-later
-/*
- * common clks module for all SiRF SoCs
- *
- * Copyright (c) 2011 - 2014 Cambridge Silicon Radio Limited, a CSR plc group
- * company.
- */
-
-#include <linux/clk.h>
-
-#define KHZ	1000
-#define MHZ	(KHZ * KHZ)
-
-static void __iomem *sirfsoc_clk_vbase;
-static void __iomem *sirfsoc_rsc_vbase;
-static struct clk_onecell_data clk_data;
-
-/*
- * SiRFprimaII clock controller
- * - 2 oscillators: osc-26MHz, rtc-32.768KHz
- * - 3 standard configurable plls: pll1, pll2 & pll3
- * - 2 exclusive plls: usb phy pll and sata phy pll
- * - 8 clock domains: cpu/cpudiv, mem/memdiv, sys/io, dsp, graphic, multimedia,
- *     display and sdphy.
- *   Each clock domain can select its own clock source from five clock sources,
- *   X_XIN, X_XINW, PLL1, PLL2 and PLL3. The domain clock is used as the source
- *   clock of the group clock.
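The header comment above describes three standard configurable PLLs; as the pll_clk_* ops later in this file compute, their output follows fout = fin * nf / (nr * od), with the driver fixing od = 1 and nr = fin/MHz so the result rounds to nf * 1 MHz. The sketch below models that rounding with the same field limits (nf <= 2^13, nr <= 2^6); it is a standalone model, not the driver function.

```c
#include <assert.h>
#include <stdint.h>

#define MHZ 1000000UL

/* Round a requested PLL rate the way the driver does: clamp nf and nr
 * to their register-field widths, fix od = 1, then recompute fout. */
static unsigned long pll_round(unsigned long fin, unsigned long rate)
{
	unsigned long nf, nr;

	rate -= rate % MHZ;		/* only whole-MHz targets are honored */
	nf = rate / MHZ;
	if (nf > (1UL << 13))
		nf = 1UL << 13;
	if (nf < 1)
		nf = 1;

	nr = fin / MHZ;
	if (nr > (1UL << 6))
		nr = 1UL << 6;

	/* fout = fin * nf / (nr * od), od == 1; 64-bit to avoid overflow */
	return (unsigned long)(((uint64_t)fin * nf) / nr);
}
```

With a 26 MHz oscillator, nr becomes 26, so fin/nr is exactly 1 MHz and any whole-MHz request within range comes back unchanged.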
- * - dsp domain: gps, mf - * - io domain: dmac, nand, audio, uart, i2c, spi, usp, pwm, pulse - * - sys domain: security - */ - -struct clk_pll { - struct clk_hw hw; - unsigned short regofs; /* register offset */ -}; - -#define to_pllclk(_hw) container_of(_hw, struct clk_pll, hw) - -struct clk_dmn { - struct clk_hw hw; - signed char enable_bit; /* enable bit: 0 ~ 63 */ - unsigned short regofs; /* register offset */ -}; - -#define to_dmnclk(_hw) container_of(_hw, struct clk_dmn, hw) - -struct clk_std { - struct clk_hw hw; - signed char enable_bit; /* enable bit: 0 ~ 63 */ -}; - -#define to_stdclk(_hw) container_of(_hw, struct clk_std, hw) - -static int std_clk_is_enabled(struct clk_hw *hw); -static int std_clk_enable(struct clk_hw *hw); -static void std_clk_disable(struct clk_hw *hw); - -static inline unsigned long clkc_readl(unsigned reg) -{ - return readl(sirfsoc_clk_vbase + reg); -} - -static inline void clkc_writel(u32 val, unsigned reg) -{ - writel(val, sirfsoc_clk_vbase + reg); -} - -/* - * std pll - */ - -static unsigned long pll_clk_recalc_rate(struct clk_hw *hw, - unsigned long parent_rate) -{ - unsigned long fin = parent_rate; - struct clk_pll *clk = to_pllclk(hw); - u32 regcfg2 = clk->regofs + sirfsoc_clkc_pll1_cfg2 - - sirfsoc_clkc_pll1_cfg0; - - if (clkc_readl(regcfg2) & bit(2)) { - /* pll bypass mode */ - return fin; - } else { - /* fout = fin * nf / nr / od */ - u32 cfg0 = clkc_readl(clk->regofs); - u32 nf = (cfg0 & (bit(13) - 1)) + 1; - u32 nr = ((cfg0 >> 13) & (bit(6) - 1)) + 1; - u32 od = ((cfg0 >> 19) & (bit(4) - 1)) + 1; - warn_on(fin % mhz); - return fin / mhz * nf / nr / od * mhz; - } -} - -static long pll_clk_round_rate(struct clk_hw *hw, unsigned long rate, - unsigned long *parent_rate) -{ - unsigned long fin, nf, nr, od; - u64 dividend; - - /* - * fout = fin * nf / (nr * od); - * set od = 1, nr = fin/mhz, so fout = nf * mhz - */ - rate = rate - rate % mhz; - - nf = rate / mhz; - if (nf > bit(13)) - nf = bit(13); - if (nf < 1) - nf = 1; - - 
fin = *parent_rate;
-
-	nr = fin / MHZ;
-	if (nr > BIT(6))
-		nr = BIT(6);
-	od = 1;
-
-	dividend = (u64)fin * nf;
-	do_div(dividend, nr * od);
-
-	return (long)dividend;
-}
-
-static int pll_clk_set_rate(struct clk_hw *hw, unsigned long rate,
-	unsigned long parent_rate)
-{
-	struct clk_pll *clk = to_pllclk(hw);
-	unsigned long fin, nf, nr, od, reg;
-
-	/*
-	 * fout = fin * nf / (nr * od);
-	 * set od = 1, nr = fin/MHz, so fout = nf * MHz
-	 */
-
-	nf = rate / MHZ;
-	if (unlikely((rate % MHZ) || nf > BIT(13) || nf < 1))
-		return -EINVAL;
-
-	fin = parent_rate;
-	BUG_ON(fin < MHZ);
-
-	nr = fin / MHZ;
-	BUG_ON((fin % MHZ) || nr > BIT(6));
-
-	od = 1;
-
-	reg = (nf - 1) | ((nr - 1) << 13) | ((od - 1) << 19);
-	clkc_writel(reg, clk->regofs);
-
-	reg = clk->regofs + SIRFSOC_CLKC_PLL1_CFG1 - SIRFSOC_CLKC_PLL1_CFG0;
-	clkc_writel((nf >> 1) - 1, reg);
-
-	reg = clk->regofs + SIRFSOC_CLKC_PLL1_CFG2 - SIRFSOC_CLKC_PLL1_CFG0;
-	while (!(clkc_readl(reg) & BIT(6)))
-		cpu_relax();
-
-	return 0;
-}
-
-static long cpu_clk_round_rate(struct clk_hw *hw, unsigned long rate,
-	unsigned long *parent_rate)
-{
-	/*
-	 * SiRF SoC has no CPU clock control,
-	 * so bypass to its parent PLL.
-	 */
-	struct clk_hw *parent_clk = clk_hw_get_parent(hw);
-	struct clk_hw *pll_parent_clk = clk_hw_get_parent(parent_clk);
-	unsigned long pll_parent_rate = clk_hw_get_rate(pll_parent_clk);
-	return pll_clk_round_rate(parent_clk, rate, &pll_parent_rate);
-}
-
-static unsigned long cpu_clk_recalc_rate(struct clk_hw *hw,
-	unsigned long parent_rate)
-{
-	/*
-	 * SiRF SoC has no CPU clock control,
-	 * so return the parent PLL rate.
- */ - struct clk_hw *parent_clk = clk_hw_get_parent(hw); - return clk_hw_get_rate(parent_clk); -} - -static const struct clk_ops std_pll_ops = { - .recalc_rate = pll_clk_recalc_rate, - .round_rate = pll_clk_round_rate, - .set_rate = pll_clk_set_rate, -}; - -static const char * const pll_clk_parents[] = { - "osc", -}; - -static const struct clk_init_data clk_pll1_init = { - .name = "pll1", - .ops = &std_pll_ops, - .parent_names = pll_clk_parents, - .num_parents = array_size(pll_clk_parents), -}; - -static const struct clk_init_data clk_pll2_init = { - .name = "pll2", - .ops = &std_pll_ops, - .parent_names = pll_clk_parents, - .num_parents = array_size(pll_clk_parents), -}; - -static const struct clk_init_data clk_pll3_init = { - .name = "pll3", - .ops = &std_pll_ops, - .parent_names = pll_clk_parents, - .num_parents = array_size(pll_clk_parents), -}; - -static struct clk_pll clk_pll1 = { - .regofs = sirfsoc_clkc_pll1_cfg0, - .hw = { - .init = &clk_pll1_init, - }, -}; - -static struct clk_pll clk_pll2 = { - .regofs = sirfsoc_clkc_pll2_cfg0, - .hw = { - .init = &clk_pll2_init, - }, -}; - -static struct clk_pll clk_pll3 = { - .regofs = sirfsoc_clkc_pll3_cfg0, - .hw = { - .init = &clk_pll3_init, - }, -}; - -/* - * usb uses specified pll - */ - -static int usb_pll_clk_enable(struct clk_hw *hw) -{ - u32 reg = readl(sirfsoc_rsc_vbase + sirfsoc_usbphy_pll_ctrl); - reg &= ~(sirfsoc_usbphy_pll_powerdown | sirfsoc_usbphy_pll_bypass); - writel(reg, sirfsoc_rsc_vbase + sirfsoc_usbphy_pll_ctrl); - while (!(readl(sirfsoc_rsc_vbase + sirfsoc_usbphy_pll_ctrl) & - sirfsoc_usbphy_pll_lock)) - cpu_relax(); - - return 0; -} - -static void usb_pll_clk_disable(struct clk_hw *clk) -{ - u32 reg = readl(sirfsoc_rsc_vbase + sirfsoc_usbphy_pll_ctrl); - reg |= (sirfsoc_usbphy_pll_powerdown | sirfsoc_usbphy_pll_bypass); - writel(reg, sirfsoc_rsc_vbase + sirfsoc_usbphy_pll_ctrl); -} - -static unsigned long usb_pll_clk_recalc_rate(struct clk_hw *hw, unsigned long parent_rate) -{ - u32 reg = 
readl(sirfsoc_rsc_vbase + sirfsoc_usbphy_pll_ctrl); - return (reg & sirfsoc_usbphy_pll_bypass) ? parent_rate : 48*mhz; -} - -static const struct clk_ops usb_pll_ops = { - .enable = usb_pll_clk_enable, - .disable = usb_pll_clk_disable, - .recalc_rate = usb_pll_clk_recalc_rate, -}; - -static const struct clk_init_data clk_usb_pll_init = { - .name = "usb_pll", - .ops = &usb_pll_ops, - .parent_names = pll_clk_parents, - .num_parents = array_size(pll_clk_parents), -}; - -static struct clk_hw usb_pll_clk_hw = { - .init = &clk_usb_pll_init, -}; - -/* - * clock domains - cpu, mem, sys/io, dsp, gfx - */ - -static const char * const dmn_clk_parents[] = { - "rtc", - "osc", - "pll1", - "pll2", - "pll3", -}; - -static u8 dmn_clk_get_parent(struct clk_hw *hw) -{ - struct clk_dmn *clk = to_dmnclk(hw); - u32 cfg = clkc_readl(clk->regofs); - const char *name = clk_hw_get_name(hw); - - /* parent of io domain can only be pll3 */ - if (strcmp(name, "io") == 0) - return 4; - - warn_on((cfg & (bit(3) - 1)) > 4); - - return cfg & (bit(3) - 1); -} - -static int dmn_clk_set_parent(struct clk_hw *hw, u8 parent) -{ - struct clk_dmn *clk = to_dmnclk(hw); - u32 cfg = clkc_readl(clk->regofs); - const char *name = clk_hw_get_name(hw); - - /* parent of io domain can only be pll3 */ - if (strcmp(name, "io") == 0) - return -einval; - - cfg &= ~(bit(3) - 1); - clkc_writel(cfg | parent, clk->regofs); - /* bit(3) - switching status: 1 - busy, 0 - done */ - while (clkc_readl(clk->regofs) & bit(3)) - cpu_relax(); - - return 0; -} - -static unsigned long dmn_clk_recalc_rate(struct clk_hw *hw, - unsigned long parent_rate) - -{ - unsigned long fin = parent_rate; - struct clk_dmn *clk = to_dmnclk(hw); - - u32 cfg = clkc_readl(clk->regofs); - - if (cfg & bit(24)) { - /* fcd bypass mode */ - return fin; - } else { - /* - * wait count: bit[19:16], hold count: bit[23:20] - */ - u32 wait = (cfg >> 16) & (bit(4) - 1); - u32 hold = (cfg >> 20) & (bit(4) - 1); - - return fin / (wait + hold + 2); - } -} - -static 
long dmn_clk_round_rate(struct clk_hw *hw, unsigned long rate,
-	unsigned long *parent_rate)
-{
-	unsigned long fin;
-	unsigned ratio, wait, hold;
-	const char *name = clk_hw_get_name(hw);
-	unsigned bits = (strcmp(name, "mem") == 0) ? 3 : 4;
-
-	fin = *parent_rate;
-	ratio = fin / rate;
-
-	if (ratio < 2)
-		ratio = 2;
-	if (ratio > BIT(bits + 1))
-		ratio = BIT(bits + 1);
-
-	wait = (ratio >> 1) - 1;
-	hold = ratio - wait - 2;
-
-	return fin / (wait + hold + 2);
-}
-
-static int dmn_clk_set_rate(struct clk_hw *hw, unsigned long rate,
-	unsigned long parent_rate)
-{
-	struct clk_dmn *clk = to_dmnclk(hw);
-	unsigned long fin;
-	unsigned ratio, wait, hold, reg;
-	const char *name = clk_hw_get_name(hw);
-	unsigned bits = (strcmp(name, "mem") == 0) ? 3 : 4;
-
-	fin = parent_rate;
-	ratio = fin / rate;
-
-	if (unlikely(ratio < 2 || ratio > BIT(bits + 1)))
-		return -EINVAL;
-
-	WARN_ON(fin % rate);
-
-	wait = (ratio >> 1) - 1;
-	hold = ratio - wait - 2;
-
-	reg = clkc_readl(clk->regofs);
-	reg &= ~(((BIT(bits) - 1) << 16) | ((BIT(bits) - 1) << 20));
-	reg |= (wait << 16) | (hold << 20) | BIT(25);
-	clkc_writel(reg, clk->regofs);
-
-	/* wait for the FCD divider to take effect */
-	while (clkc_readl(clk->regofs) & BIT(25))
-		cpu_relax();
-
-	return 0;
-}
-
-static int cpu_clk_set_rate(struct clk_hw *hw, unsigned long rate,
-	unsigned long parent_rate)
-{
-	int ret1, ret2;
-	struct clk *cur_parent;
-
-	if (rate == clk_get_rate(clk_pll1.hw.clk)) {
-		ret1 = clk_set_parent(hw->clk, clk_pll1.hw.clk);
-		return ret1;
-	}
-
-	if (rate == clk_get_rate(clk_pll2.hw.clk)) {
-		ret1 = clk_set_parent(hw->clk, clk_pll2.hw.clk);
-		return ret1;
-	}
-
-	if (rate == clk_get_rate(clk_pll3.hw.clk)) {
-		ret1 = clk_set_parent(hw->clk, clk_pll3.hw.clk);
-		return ret1;
-	}
-
-	cur_parent = clk_get_parent(hw->clk);
-
-	/* switch to a temporary PLL before setting the parent clock's rate */
-	if (cur_parent == clk_pll1.hw.clk) {
-		ret1 = clk_set_parent(hw->clk, clk_pll2.hw.clk);
-		BUG_ON(ret1);
-	}
-
-	ret2 =
clk_set_rate(clk_pll1.hw.clk, rate); - - ret1 = clk_set_parent(hw->clk, clk_pll1.hw.clk); - - return ret2 ? ret2 : ret1; -} - -static const struct clk_ops msi_ops = { - .set_rate = dmn_clk_set_rate, - .round_rate = dmn_clk_round_rate, - .recalc_rate = dmn_clk_recalc_rate, - .set_parent = dmn_clk_set_parent, - .get_parent = dmn_clk_get_parent, -}; - -static const struct clk_init_data clk_mem_init = { - .name = "mem", - .ops = &msi_ops, - .parent_names = dmn_clk_parents, - .num_parents = array_size(dmn_clk_parents), -}; - -static struct clk_dmn clk_mem = { - .regofs = sirfsoc_clkc_mem_cfg, - .hw = { - .init = &clk_mem_init, - }, -}; - -static const struct clk_init_data clk_sys_init = { - .name = "sys", - .ops = &msi_ops, - .parent_names = dmn_clk_parents, - .num_parents = array_size(dmn_clk_parents), - .flags = clk_set_rate_gate, -}; - -static struct clk_dmn clk_sys = { - .regofs = sirfsoc_clkc_sys_cfg, - .hw = { - .init = &clk_sys_init, - }, -}; - -static const struct clk_init_data clk_io_init = { - .name = "io", - .ops = &msi_ops, - .parent_names = dmn_clk_parents, - .num_parents = array_size(dmn_clk_parents), -}; - -static struct clk_dmn clk_io = { - .regofs = sirfsoc_clkc_io_cfg, - .hw = { - .init = &clk_io_init, - }, -}; - -static const struct clk_ops cpu_ops = { - .set_parent = dmn_clk_set_parent, - .get_parent = dmn_clk_get_parent, - .set_rate = cpu_clk_set_rate, - .round_rate = cpu_clk_round_rate, - .recalc_rate = cpu_clk_recalc_rate, -}; - -static const struct clk_init_data clk_cpu_init = { - .name = "cpu", - .ops = &cpu_ops, - .parent_names = dmn_clk_parents, - .num_parents = array_size(dmn_clk_parents), - .flags = clk_set_rate_parent, -}; - -static struct clk_dmn clk_cpu = { - .regofs = sirfsoc_clkc_cpu_cfg, - .hw = { - .init = &clk_cpu_init, - }, -}; - -static const struct clk_ops dmn_ops = { - .is_enabled = std_clk_is_enabled, - .enable = std_clk_enable, - .disable = std_clk_disable, - .set_rate = dmn_clk_set_rate, - .round_rate = dmn_clk_round_rate, - 
.recalc_rate = dmn_clk_recalc_rate, - .set_parent = dmn_clk_set_parent, - .get_parent = dmn_clk_get_parent, -}; - -/* dsp, gfx, mm, lcd and vpp domain */ - -static const struct clk_init_data clk_dsp_init = { - .name = "dsp", - .ops = &dmn_ops, - .parent_names = dmn_clk_parents, - .num_parents = array_size(dmn_clk_parents), -}; - -static struct clk_dmn clk_dsp = { - .regofs = sirfsoc_clkc_dsp_cfg, - .enable_bit = 0, - .hw = { - .init = &clk_dsp_init, - }, -}; - -static const struct clk_init_data clk_gfx_init = { - .name = "gfx", - .ops = &dmn_ops, - .parent_names = dmn_clk_parents, - .num_parents = array_size(dmn_clk_parents), -}; - -static struct clk_dmn clk_gfx = { - .regofs = sirfsoc_clkc_gfx_cfg, - .enable_bit = 8, - .hw = { - .init = &clk_gfx_init, - }, -}; - -static const struct clk_init_data clk_mm_init = { - .name = "mm", - .ops = &dmn_ops, - .parent_names = dmn_clk_parents, - .num_parents = array_size(dmn_clk_parents), -}; - -static struct clk_dmn clk_mm = { - .regofs = sirfsoc_clkc_mm_cfg, - .enable_bit = 9, - .hw = { - .init = &clk_mm_init, - }, -}; - -/* - * for atlas6, gfx2d holds the bit of prima2's clk_mm - */ -#define clk_gfx2d clk_mm - -static const struct clk_init_data clk_lcd_init = { - .name = "lcd", - .ops = &dmn_ops, - .parent_names = dmn_clk_parents, - .num_parents = array_size(dmn_clk_parents), -}; - -static struct clk_dmn clk_lcd = { - .regofs = sirfsoc_clkc_lcd_cfg, - .enable_bit = 10, - .hw = { - .init = &clk_lcd_init, - }, -}; - -static const struct clk_init_data clk_vpp_init = { - .name = "vpp", - .ops = &dmn_ops, - .parent_names = dmn_clk_parents, - .num_parents = array_size(dmn_clk_parents), -}; - -static struct clk_dmn clk_vpp = { - .regofs = sirfsoc_clkc_lcd_cfg, - .enable_bit = 11, - .hw = { - .init = &clk_vpp_init, - }, -}; - -static const struct clk_init_data clk_mmc01_init = { - .name = "mmc01", - .ops = &dmn_ops, - .parent_names = dmn_clk_parents, - .num_parents = array_size(dmn_clk_parents), -}; - -static const struct 
clk_init_data clk_mmc23_init = { - .name = "mmc23", - .ops = &dmn_ops, - .parent_names = dmn_clk_parents, - .num_parents = array_size(dmn_clk_parents), -}; - -static const struct clk_init_data clk_mmc45_init = { - .name = "mmc45", - .ops = &dmn_ops, - .parent_names = dmn_clk_parents, - .num_parents = array_size(dmn_clk_parents), -}; - -/* - * peripheral controllers in io domain - */ - -static int std_clk_is_enabled(struct clk_hw *hw) -{ - u32 reg; - int bit; - struct clk_std *clk = to_stdclk(hw); - - bit = clk->enable_bit % 32; - reg = clk->enable_bit / 32; - reg = sirfsoc_clkc_clk_en0 + reg * sizeof(reg); - - return !!(clkc_readl(reg) & bit(bit)); -} - -static int std_clk_enable(struct clk_hw *hw) -{ - u32 val, reg; - int bit; - struct clk_std *clk = to_stdclk(hw); - - bug_on(clk->enable_bit < 0 || clk->enable_bit > 63); - - bit = clk->enable_bit % 32; - reg = clk->enable_bit / 32; - reg = sirfsoc_clkc_clk_en0 + reg * sizeof(reg); - - val = clkc_readl(reg) | bit(bit); - clkc_writel(val, reg); - return 0; -} - -static void std_clk_disable(struct clk_hw *hw) -{ - u32 val, reg; - int bit; - struct clk_std *clk = to_stdclk(hw); - - bug_on(clk->enable_bit < 0 || clk->enable_bit > 63); - - bit = clk->enable_bit % 32; - reg = clk->enable_bit / 32; - reg = sirfsoc_clkc_clk_en0 + reg * sizeof(reg); - - val = clkc_readl(reg) & ~bit(bit); - clkc_writel(val, reg); -} - -static const char * const std_clk_io_parents[] = { - "io", -}; - -static const struct clk_ops ios_ops = { - .is_enabled = std_clk_is_enabled, - .enable = std_clk_enable, - .disable = std_clk_disable, -}; - -static const struct clk_init_data clk_cphif_init = { - .name = "cphif", - .ops = &ios_ops, - .parent_names = std_clk_io_parents, - .num_parents = array_size(std_clk_io_parents), -}; - -static struct clk_std clk_cphif = { - .enable_bit = 20, - .hw = { - .init = &clk_cphif_init, - }, -}; - -static const struct clk_init_data clk_dmac0_init = { - .name = "dmac0", - .ops = &ios_ops, - .parent_names = 
std_clk_io_parents, - .num_parents = array_size(std_clk_io_parents), -}; - -static struct clk_std clk_dmac0 = { - .enable_bit = 32, - .hw = { - .init = &clk_dmac0_init, - }, -}; - -static const struct clk_init_data clk_dmac1_init = { - .name = "dmac1", - .ops = &ios_ops, - .parent_names = std_clk_io_parents, - .num_parents = array_size(std_clk_io_parents), -}; - -static struct clk_std clk_dmac1 = { - .enable_bit = 33, - .hw = { - .init = &clk_dmac1_init, - }, -}; - -static const struct clk_init_data clk_audio_init = { - .name = "audio", - .ops = &ios_ops, - .parent_names = std_clk_io_parents, - .num_parents = array_size(std_clk_io_parents), -}; - -static struct clk_std clk_audio = { - .enable_bit = 35, - .hw = { - .init = &clk_audio_init, - }, -}; - -static const struct clk_init_data clk_uart0_init = { - .name = "uart0", - .ops = &ios_ops, - .parent_names = std_clk_io_parents, - .num_parents = array_size(std_clk_io_parents), -}; - -static struct clk_std clk_uart0 = { - .enable_bit = 36, - .hw = { - .init = &clk_uart0_init, - }, -}; - -static const struct clk_init_data clk_uart1_init = { - .name = "uart1", - .ops = &ios_ops, - .parent_names = std_clk_io_parents, - .num_parents = array_size(std_clk_io_parents), -}; - -static struct clk_std clk_uart1 = { - .enable_bit = 37, - .hw = { - .init = &clk_uart1_init, - }, -}; - -static const struct clk_init_data clk_uart2_init = { - .name = "uart2", - .ops = &ios_ops, - .parent_names = std_clk_io_parents, - .num_parents = array_size(std_clk_io_parents), -}; - -static struct clk_std clk_uart2 = { - .enable_bit = 38, - .hw = { - .init = &clk_uart2_init, - }, -}; - -static const struct clk_init_data clk_usp0_init = { - .name = "usp0", - .ops = &ios_ops, - .parent_names = std_clk_io_parents, - .num_parents = array_size(std_clk_io_parents), -}; - -static struct clk_std clk_usp0 = { - .enable_bit = 39, - .hw = { - .init = &clk_usp0_init, - }, -}; - -static const struct clk_init_data clk_usp1_init = { - .name = "usp1", - .ops = 
&ios_ops, - .parent_names = std_clk_io_parents, - .num_parents = array_size(std_clk_io_parents), -}; - -static struct clk_std clk_usp1 = { - .enable_bit = 40, - .hw = { - .init = &clk_usp1_init, - }, -}; - -static const struct clk_init_data clk_usp2_init = { - .name = "usp2", - .ops = &ios_ops, - .parent_names = std_clk_io_parents, - .num_parents = array_size(std_clk_io_parents), -}; - -static struct clk_std clk_usp2 = { - .enable_bit = 41, - .hw = { - .init = &clk_usp2_init, - }, -}; - -static const struct clk_init_data clk_vip_init = { - .name = "vip", - .ops = &ios_ops, - .parent_names = std_clk_io_parents, - .num_parents = array_size(std_clk_io_parents), -}; - -static struct clk_std clk_vip = { - .enable_bit = 42, - .hw = { - .init = &clk_vip_init, - }, -}; - -static const struct clk_init_data clk_spi0_init = { - .name = "spi0", - .ops = &ios_ops, - .parent_names = std_clk_io_parents, - .num_parents = array_size(std_clk_io_parents), -}; - -static struct clk_std clk_spi0 = { - .enable_bit = 43, - .hw = { - .init = &clk_spi0_init, - }, -}; - -static const struct clk_init_data clk_spi1_init = { - .name = "spi1", - .ops = &ios_ops, - .parent_names = std_clk_io_parents, - .num_parents = array_size(std_clk_io_parents), -}; - -static struct clk_std clk_spi1 = { - .enable_bit = 44, - .hw = { - .init = &clk_spi1_init, - }, -}; - -static const struct clk_init_data clk_tsc_init = { - .name = "tsc", - .ops = &ios_ops, - .parent_names = std_clk_io_parents, - .num_parents = array_size(std_clk_io_parents), -}; - -static struct clk_std clk_tsc = { - .enable_bit = 45, - .hw = { - .init = &clk_tsc_init, - }, -}; - -static const struct clk_init_data clk_i2c0_init = { - .name = "i2c0", - .ops = &ios_ops, - .parent_names = std_clk_io_parents, - .num_parents = array_size(std_clk_io_parents), -}; - -static struct clk_std clk_i2c0 = { - .enable_bit = 46, - .hw = { - .init = &clk_i2c0_init, - }, -}; - -static const struct clk_init_data clk_i2c1_init = { - .name = "i2c1", - .ops = 
&ios_ops, - .parent_names = std_clk_io_parents, - .num_parents = array_size(std_clk_io_parents), -}; - -static struct clk_std clk_i2c1 = { - .enable_bit = 47, - .hw = { - .init = &clk_i2c1_init, - }, -}; - -static const struct clk_init_data clk_pwmc_init = { - .name = "pwmc", - .ops = &ios_ops, - .parent_names = std_clk_io_parents, - .num_parents = array_size(std_clk_io_parents), -}; - -static struct clk_std clk_pwmc = { - .enable_bit = 48, - .hw = { - .init = &clk_pwmc_init, - }, -}; - -static const struct clk_init_data clk_efuse_init = { - .name = "efuse", - .ops = &ios_ops, - .parent_names = std_clk_io_parents, - .num_parents = array_size(std_clk_io_parents), -}; - -static struct clk_std clk_efuse = { - .enable_bit = 49, - .hw = { - .init = &clk_efuse_init, - }, -}; - -static const struct clk_init_data clk_pulse_init = { - .name = "pulse", - .ops = &ios_ops, - .parent_names = std_clk_io_parents, - .num_parents = array_size(std_clk_io_parents), -}; - -static struct clk_std clk_pulse = { - .enable_bit = 50, - .hw = { - .init = &clk_pulse_init, - }, -}; - -static const char * const std_clk_dsp_parents[] = { - "dsp", -}; - -static const struct clk_init_data clk_gps_init = { - .name = "gps", - .ops = &ios_ops, - .parent_names = std_clk_dsp_parents, - .num_parents = array_size(std_clk_dsp_parents), -}; - -static struct clk_std clk_gps = { - .enable_bit = 1, - .hw = { - .init = &clk_gps_init, - }, -}; - -static const struct clk_init_data clk_mf_init = { - .name = "mf", - .ops = &ios_ops, - .parent_names = std_clk_io_parents, - .num_parents = array_size(std_clk_io_parents), -}; - -static struct clk_std clk_mf = { - .enable_bit = 2, - .hw = { - .init = &clk_mf_init, - }, -}; - -static const char * const std_clk_sys_parents[] = { - "sys", -}; - -static const struct clk_init_data clk_security_init = { - .name = "security", - .ops = &ios_ops, - .parent_names = std_clk_sys_parents, - .num_parents = array_size(std_clk_sys_parents), -}; - -static struct clk_std clk_security = 
{ - .enable_bit = 19, - .hw = { - .init = &clk_security_init, - }, -}; - -static const char * const std_clk_usb_parents[] = { - "usb_pll", -}; - -static const struct clk_init_data clk_usb0_init = { - .name = "usb0", - .ops = &ios_ops, - .parent_names = std_clk_usb_parents, - .num_parents = array_size(std_clk_usb_parents), -}; - -static struct clk_std clk_usb0 = { - .enable_bit = 16, - .hw = { - .init = &clk_usb0_init, - }, -}; - -static const struct clk_init_data clk_usb1_init = { - .name = "usb1", - .ops = &ios_ops, - .parent_names = std_clk_usb_parents, - .num_parents = array_size(std_clk_usb_parents), -}; - -static struct clk_std clk_usb1 = { - .enable_bit = 17, - .hw = { - .init = &clk_usb1_init, - }, -}; diff --git a/drivers/clk/sirf/clk-prima2.c b/drivers/clk/sirf/clk-prima2.c --- a/drivers/clk/sirf/clk-prima2.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-or-later -/* - * clock tree for csr sirfprimaii - * - * copyright (c) 2011 - 2014 cambridge silicon radio limited, a csr plc group - * company. 
- */ - -#include <linux/module.h> -#include <linux/bitops.h> -#include <linux/io.h> -#include <linux/clkdev.h> -#include <linux/clk-provider.h> -#include <linux/of_address.h> -#include <linux/syscore_ops.h> - -#include "prima2.h" -#include "clk-common.c" - -static struct clk_dmn clk_mmc01 = { - .regofs = sirfsoc_clkc_mmc_cfg, - .enable_bit = 59, - .hw = { - .init = &clk_mmc01_init, - }, -}; - -static struct clk_dmn clk_mmc23 = { - .regofs = sirfsoc_clkc_mmc_cfg, - .enable_bit = 60, - .hw = { - .init = &clk_mmc23_init, - }, -}; - -static struct clk_dmn clk_mmc45 = { - .regofs = sirfsoc_clkc_mmc_cfg, - .enable_bit = 61, - .hw = { - .init = &clk_mmc45_init, - }, -}; - -static const struct clk_init_data clk_nand_init = { - .name = "nand", - .ops = &ios_ops, - .parent_names = std_clk_io_parents, - .num_parents = array_size(std_clk_io_parents), -}; - -static struct clk_std clk_nand = { - .enable_bit = 34, - .hw = { - .init = &clk_nand_init, - }, -}; - -enum prima2_clk_index { - /* 0 1 2 3 4 5 6 7 8 9 */ - rtc, osc, pll1, pll2, pll3, mem, sys, security, dsp, gps, - mf, io, cpu, uart0, uart1, uart2, tsc, i2c0, i2c1, spi0, - spi1, pwmc, efuse, pulse, dmac0, dmac1, nand, audio, usp0, usp1, - usp2, vip, gfx, mm, lcd, vpp, mmc01, mmc23, mmc45, usbpll, - usb0, usb1, cphif, maxclk, -}; - -static __initdata struct clk_hw *prima2_clk_hw_array[maxclk] = { - null, /* dummy */ - null, - &clk_pll1.hw, - &clk_pll2.hw, - &clk_pll3.hw, - &clk_mem.hw, - &clk_sys.hw, - &clk_security.hw, - &clk_dsp.hw, - &clk_gps.hw, - &clk_mf.hw, - &clk_io.hw, - &clk_cpu.hw, - &clk_uart0.hw, - &clk_uart1.hw, - &clk_uart2.hw, - &clk_tsc.hw, - &clk_i2c0.hw, - &clk_i2c1.hw, - &clk_spi0.hw, - &clk_spi1.hw, - &clk_pwmc.hw, - &clk_efuse.hw, - &clk_pulse.hw, - &clk_dmac0.hw, - &clk_dmac1.hw, - &clk_nand.hw, - &clk_audio.hw, - &clk_usp0.hw, - &clk_usp1.hw, - &clk_usp2.hw, - &clk_vip.hw, - &clk_gfx.hw, - &clk_mm.hw, - &clk_lcd.hw, - &clk_vpp.hw, - &clk_mmc01.hw, - &clk_mmc23.hw, - &clk_mmc45.hw, - &usb_pll_clk_hw, 
- &clk_usb0.hw, - &clk_usb1.hw, - &clk_cphif.hw, -}; - -static struct clk *prima2_clks[maxclk]; - -static void __init prima2_clk_init(struct device_node *np) -{ - struct device_node *rscnp; - int i; - - rscnp = of_find_compatible_node(null, null, "sirf,prima2-rsc"); - sirfsoc_rsc_vbase = of_iomap(rscnp, 0); - if (!sirfsoc_rsc_vbase) - panic("unable to map rsc registers "); - of_node_put(rscnp); - - sirfsoc_clk_vbase = of_iomap(np, 0); - if (!sirfsoc_clk_vbase) - panic("unable to map clkc registers "); - - /* these are always available (rtc and 26mhz osc)*/ - prima2_clks[rtc] = clk_register_fixed_rate(null, "rtc", null, 0, 32768); - prima2_clks[osc] = clk_register_fixed_rate(null, "osc", null, 0, - 26000000); - - for (i = pll1; i < maxclk; i++) { - prima2_clks[i] = clk_register(null, prima2_clk_hw_array[i]); - bug_on(is_err(prima2_clks[i])); - } - clk_register_clkdev(prima2_clks[cpu], null, "cpu"); - clk_register_clkdev(prima2_clks[io], null, "io"); - clk_register_clkdev(prima2_clks[mem], null, "mem"); - clk_register_clkdev(prima2_clks[mem], null, "osc"); - - clk_data.clks = prima2_clks; - clk_data.clk_num = maxclk; - - of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data); -} -clk_of_declare(prima2_clk, "sirf,prima2-clkc", prima2_clk_init); diff --git a/drivers/clk/sirf/prima2.h b/drivers/clk/sirf/prima2.h --- a/drivers/clk/sirf/prima2.h +++ /dev/null -/* spdx-license-identifier: gpl-2.0 */ -#define sirfsoc_clkc_clk_en0 0x0000 -#define sirfsoc_clkc_clk_en1 0x0004 -#define sirfsoc_clkc_ref_cfg 0x0014 -#define sirfsoc_clkc_cpu_cfg 0x0018 -#define sirfsoc_clkc_mem_cfg 0x001c -#define sirfsoc_clkc_sys_cfg 0x0020 -#define sirfsoc_clkc_io_cfg 0x0024 -#define sirfsoc_clkc_dsp_cfg 0x0028 -#define sirfsoc_clkc_gfx_cfg 0x002c -#define sirfsoc_clkc_mm_cfg 0x0030 -#define sirfsoc_clkc_lcd_cfg 0x0034 -#define sirfsoc_clkc_mmc_cfg 0x0038 -#define sirfsoc_clkc_pll1_cfg0 0x0040 -#define sirfsoc_clkc_pll2_cfg0 0x0044 -#define sirfsoc_clkc_pll3_cfg0 0x0048 -#define 
sirfsoc_clkc_pll1_cfg1 0x004c -#define sirfsoc_clkc_pll2_cfg1 0x0050 -#define sirfsoc_clkc_pll3_cfg1 0x0054 -#define sirfsoc_clkc_pll1_cfg2 0x0058 -#define sirfsoc_clkc_pll2_cfg2 0x005c -#define sirfsoc_clkc_pll3_cfg2 0x0060 -#define sirfsoc_usbphy_pll_ctrl 0x0008 -#define sirfsoc_usbphy_pll_powerdown bit(1) -#define sirfsoc_usbphy_pll_bypass bit(2) -#define sirfsoc_usbphy_pll_lock bit(3)
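The wait/hold divider arithmetic in dmn_clk_round_rate()/dmn_clk_set_rate() above can be sketched standalone. This is a hypothetical re-derivation, not kernel code: `bits` is 3 for the "mem" domain and 4 otherwise, the fin/rate ratio is clamped to [2, 1 << (bits + 1)], and wait + hold + 2 always equals the clamped ratio, so the returned rate is fin divided by that ratio.

```c
#include <assert.h>

/* Hypothetical standalone sketch of the wait/hold divider math used by
 * the dmn clock domain ops in the diff above. */
static unsigned long dmn_round(unsigned long fin, unsigned long rate,
			       unsigned bits)
{
	unsigned ratio = fin / rate;
	unsigned max = 1u << (bits + 1);	/* BIT(bits + 1) */
	unsigned wait, hold;

	if (ratio < 2)
		ratio = 2;
	if (ratio > max)
		ratio = max;

	wait = (ratio >> 1) - 1;
	hold = ratio - wait - 2;

	/* wait + hold + 2 == clamped ratio */
	return fin / (wait + hold + 2);
}
```

For example, with a 26 MHz fin and bits = 3 ("mem"), requesting 1 MHz clamps the ratio to 16 and yields 1.625 MHz.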
|
Clock
|
ed0f3e23d10699df7b8f6189f7c52d0d4a3619db
|
arnd bergmann barry song baohua kernel org
|
drivers
|
clk
|
bindings, clock, sirf
|
clk: remove tango4 driver
|
the tango platform is getting removed, so the driver is no longer needed.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
remove tango4 driver
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['txt', 'c', 'makefile']
| 3
| 0
| 109
|
--- diff --git a/documentation/devicetree/bindings/clock/tango4-clock.txt b/documentation/devicetree/bindings/clock/tango4-clock.txt --- a/documentation/devicetree/bindings/clock/tango4-clock.txt +++ /dev/null -* sigma designs tango4 clock generator - -the tango4 clock generator outputs cpu_clk and sys_clk (the latter is used -for ram and various peripheral devices). the clock binding described here -is applicable to all tango4 socs. - -required properties: - -- compatible: should be "sigma,tango4-clkgen". -- reg: physical base address of the device and length of memory mapped region. -- clocks: phandle of the input clock (crystal oscillator). -- clock-output-names: should be "cpuclk" and "sysclk". -- #clock-cells: should be set to 1. - -example: - - clkgen: clkgen@10000 { - compatible = "sigma,tango4-clkgen"; - reg = <0x10000 0x40>; - clocks = <&xtal>; - clock-output-names = "cpuclk", "sysclk"; - #clock-cells = <1>; - }; diff --git a/drivers/clk/makefile b/drivers/clk/makefile --- a/drivers/clk/makefile +++ b/drivers/clk/makefile -obj-$(config_arch_tango) += clk-tango4.o diff --git a/drivers/clk/clk-tango4.c b/drivers/clk/clk-tango4.c --- a/drivers/clk/clk-tango4.c +++ /dev/null -// spdx-license-identifier: gpl-2.0 -#include <linux/kernel.h> -#include <linux/clk-provider.h> -#include <linux/of_address.h> -#include <linux/init.h> -#include <linux/io.h> - -#define clk_count 4 /* cpu_clk, sys_clk, usb_clk, sdio_clk */ -static struct clk *clks[clk_count]; -static struct clk_onecell_data clk_data = { clks, clk_count }; - -#define sysclk_div 0x20 -#define cpuclk_div 0x24 -#define div_bypass bit(23) - -/*** clkgen_pll ***/ -#define extract_pll_n(val) ((val >> 0) & ((1u << 7) - 1)) -#define extract_pll_k(val) ((val >> 13) & ((1u << 3) - 1)) -#define extract_pll_m(val) ((val >> 16) & ((1u << 3) - 1)) -#define extract_pll_isel(val) ((val >> 24) & ((1u << 3) - 1)) - -static void __init make_pll(int idx, const char *parent, void __iomem *base) -{ - char name[8]; - u32 val, 
mul, div; - - sprintf(name, "pll%d", idx); - val = readl(base + idx * 8); - mul = extract_pll_n(val) + 1; - div = (extract_pll_m(val) + 1) << extract_pll_k(val); - clk_register_fixed_factor(null, name, parent, 0, mul, div); - if (extract_pll_isel(val) != 1) - panic("%s: input not set to xtal_in ", name); -} - -static void __init make_cd(int idx, void __iomem *base) -{ - char name[8]; - u32 val, mul, div; - - sprintf(name, "cd%d", idx); - val = readl(base + idx * 8); - mul = 1 << 27; - div = (2 << 27) + val; - clk_register_fixed_factor(null, name, "pll2", 0, mul, div); - if (val > 0xf0000000) - panic("%s: unsupported divider %x ", name, val); -} - -static void __init tango4_clkgen_setup(struct device_node *np) -{ - struct clk **pp = clk_data.clks; - void __iomem *base = of_iomap(np, 0); - const char *parent = of_clk_get_parent_name(np, 0); - - if (!base) - panic("%pofn: invalid address ", np); - - if (readl(base + cpuclk_div) & div_bypass) - panic("%pofn: unsupported cpuclk setup ", np); - - if (readl(base + sysclk_div) & div_bypass) - panic("%pofn: unsupported sysclk setup ", np); - - writel(0x100, base + cpuclk_div); /* disable frequency ramping */ - - make_pll(0, parent, base); - make_pll(1, parent, base); - make_pll(2, parent, base); - make_cd(2, base + 0x80); - make_cd(6, base + 0x80); - - pp[0] = clk_register_divider(null, "cpu_clk", "pll0", 0, - base + cpuclk_div, 8, 8, clk_divider_one_based, null); - pp[1] = clk_register_fixed_factor(null, "sys_clk", "pll1", 0, 1, 4); - pp[2] = clk_register_fixed_factor(null, "usb_clk", "cd2", 0, 1, 2); - pp[3] = clk_register_fixed_factor(null, "sdio_clk", "cd6", 0, 1, 2); - - if (is_err(pp[0]) || is_err(pp[1]) || is_err(pp[2]) || is_err(pp[3])) - panic("%pofn: clk registration failed ", np); - - if (of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data)) - panic("%pofn: clk provider registration failed ", np); -} -clk_of_declare(tango4_clkgen, "sigma,tango4-clkgen", tango4_clkgen_setup);
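The PLL factor extraction in make_pll() from the removed clk-tango4.c above reduces to a fixed-factor rate of parent * (N + 1) / ((M + 1) << K), with the N/K/M fields laid out per the extract_pll_* macros. A hypothetical standalone sketch (not kernel code, and omitting the isel sanity check):

```c
#include <assert.h>

/* Hypothetical re-derivation of the tango4 PLL rate from a raw register
 * value, following the field layout of the extract_pll_* macros above. */
static unsigned long tango4_pll_rate(unsigned long parent, unsigned val)
{
	unsigned n = (val >> 0) & 0x7f;		/* extract_pll_n */
	unsigned k = (val >> 13) & 0x7;		/* extract_pll_k */
	unsigned m = (val >> 16) & 0x7;		/* extract_pll_m */
	unsigned mul = n + 1;
	unsigned div = (m + 1) << k;

	return parent / div * mul;	/* divide first to limit overflow */
}
```

For instance, N = 19, K = 1, M = 0 gives mul = 20, div = 2, so a 24 MHz crystal produces 240 MHz.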
|
Clock
|
7765f32a8e9b03cf0e25698b5a841e00c1a5090e
|
arnd bergmann mans rullgard mans mansr com
|
drivers
|
clk
|
bindings, clock
|
clk: remove u300 driver
|
the st-ericsson u300 platform is getting removed, so this driver is no longer needed.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
remove u300 driver
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['txt', 'h', 'c', 'makefile']
| 4
| 0
| 1,281
|
--- diff --git a/documentation/devicetree/bindings/clock/ste-u300-syscon-clock.txt b/documentation/devicetree/bindings/clock/ste-u300-syscon-clock.txt --- a/documentation/devicetree/bindings/clock/ste-u300-syscon-clock.txt +++ /dev/null -clock bindings for st-ericsson u300 system controller clocks - -bindings for the gated system controller clocks: - -required properties: -- compatible: must be "stericsson,u300-syscon-clk" -- #clock-cells: must be <0> -- clock-type: specifies the type of clock: - 0 = slow clock - 1 = fast clock - 2 = rest/remaining clock -- clock-id: specifies the clock in the type range - -optional properties: -- clocks: parent clock(s) - -the available clocks per type are as follows: - -type: id: clock: -------------------- -0 0 slow peripheral bridge clock -0 1 uart0 clock -0 4 gpio clock -0 6 rtc clock -0 7 application timer clock -0 8 access timer clock - -1 0 fast peripheral bridge clock -1 1 i2c bus 0 clock -1 2 i2c bus 1 clock -1 5 mmc interface peripheral (silicon) clock -1 6 spi clock - -2 3 cpu clock -2 4 dma controller clock -2 5 external memory interface (emif) clock -2 6 nand flask interface clock -2 8 xgam graphics engine clock -2 9 shared external memory interface (semi) clock -2 10 ahb subsystem bridge clock -2 12 interrupt controller clock - -example: - -gpio_clk: gpio_clk@13m { - #clock-cells = <0>; - compatible = "stericsson,u300-syscon-clk"; - clock-type = <0>; /* slow */ - clock-id = <4>; - clocks = <&slow_clk>; -}; - -gpio: gpio@c0016000 { - compatible = "stericsson,gpio-coh901"; - (...) 
- clocks = <&gpio_clk>; -}; - - -bindings for the mmc/sd card clock: - -required properties: -- compatible: must be "stericsson,u300-syscon-mclk" -- #clock-cells: must be <0> - -optional properties: -- clocks: parent clock(s) - -mmc_mclk: mmc_mclk { - #clock-cells = <0>; - compatible = "stericsson,u300-syscon-mclk"; - clocks = <&mmc_pclk>; -}; - -mmcsd: mmcsd@c0001000 { - compatible = "arm,pl18x", "arm,primecell"; - clocks = <&mmc_pclk>, <&mmc_mclk>; - clock-names = "apb_pclk", "mclk"; - (...) -}; diff --git a/drivers/clk/makefile b/drivers/clk/makefile --- a/drivers/clk/makefile +++ b/drivers/clk/makefile -obj-$(config_arch_u300) += clk-u300.o diff --git a/drivers/clk/clk-u300.c b/drivers/clk/clk-u300.c --- a/drivers/clk/clk-u300.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-only -/* - * u300 clock implementation - * copyright (c) 2007-2012 st-ericsson ab - * author: linus walleij <linus.walleij@stericsson.com> - * author: jonas aaberg <jonas.aberg@stericsson.com> - */ -#include <linux/clkdev.h> -#include <linux/slab.h> -#include <linux/err.h> -#include <linux/io.h> -#include <linux/clk-provider.h> -#include <linux/spinlock.h> -#include <linux/of.h> -#include <linux/platform_data/clk-u300.h> - -/* app side syscon registers */ -/* clk control register 16bit (r/w) */ -#define u300_syscon_ccr (0x0000) -#define u300_syscon_ccr_i2s1_use_vcxo (0x0040) -#define u300_syscon_ccr_i2s0_use_vcxo (0x0020) -#define u300_syscon_ccr_turn_vcxo_on (0x0008) -#define u300_syscon_ccr_clking_performance_mask (0x0007) -#define u300_syscon_ccr_clking_performance_low_power (0x04) -#define u300_syscon_ccr_clking_performance_low (0x03) -#define u300_syscon_ccr_clking_performance_intermediate (0x02) -#define u300_syscon_ccr_clking_performance_high (0x01) -#define u300_syscon_ccr_clking_performance_best (0x00) -/* clk status register 16bit (r/w) */ -#define u300_syscon_csr (0x0004) -#define u300_syscon_csr_pll208_lock_ind (0x0002) -#define u300_syscon_csr_pll13_lock_ind (0x0001) -/* 
reset lines for slow devices 16bit (r/w) */ -#define u300_syscon_rsr (0x0014) -#define u300_syscon_rsr_ppm_reset_en (0x0200) -#define u300_syscon_rsr_acc_tmr_reset_en (0x0100) -#define u300_syscon_rsr_app_tmr_reset_en (0x0080) -#define u300_syscon_rsr_rtc_reset_en (0x0040) -#define u300_syscon_rsr_keypad_reset_en (0x0020) -#define u300_syscon_rsr_gpio_reset_en (0x0010) -#define u300_syscon_rsr_eh_reset_en (0x0008) -#define u300_syscon_rsr_btr_reset_en (0x0004) -#define u300_syscon_rsr_uart_reset_en (0x0002) -#define u300_syscon_rsr_slow_bridge_reset_en (0x0001) -/* reset lines for fast devices 16bit (r/w) */ -#define u300_syscon_rfr (0x0018) -#define u300_syscon_rfr_uart1_reset_enable (0x0080) -#define u300_syscon_rfr_spi_reset_enable (0x0040) -#define u300_syscon_rfr_mmc_reset_enable (0x0020) -#define u300_syscon_rfr_pcm_i2s1_reset_enable (0x0010) -#define u300_syscon_rfr_pcm_i2s0_reset_enable (0x0008) -#define u300_syscon_rfr_i2c1_reset_enable (0x0004) -#define u300_syscon_rfr_i2c0_reset_enable (0x0002) -#define u300_syscon_rfr_fast_bridge_reset_enable (0x0001) -/* reset lines for the rest of the peripherals 16bit (r/w) */ -#define u300_syscon_rrr (0x001c) -#define u300_syscon_rrr_cds_reset_en (0x4000) -#define u300_syscon_rrr_isp_reset_en (0x2000) -#define u300_syscon_rrr_intcon_reset_en (0x1000) -#define u300_syscon_rrr_mspro_reset_en (0x0800) -#define u300_syscon_rrr_xgam_reset_en (0x0100) -#define u300_syscon_rrr_xgam_vc_sync_reset_en (0x0080) -#define u300_syscon_rrr_nandif_reset_en (0x0040) -#define u300_syscon_rrr_emif_reset_en (0x0020) -#define u300_syscon_rrr_dmac_reset_en (0x0010) -#define u300_syscon_rrr_cpu_reset_en (0x0008) -#define u300_syscon_rrr_apex_reset_en (0x0004) -#define u300_syscon_rrr_ahb_reset_en (0x0002) -#define u300_syscon_rrr_aaif_reset_en (0x0001) -/* clock enable for slow peripherals 16bit (r/w) */ -#define u300_syscon_cesr (0x0020) -#define u300_syscon_cesr_ppm_clk_en (0x0200) -#define u300_syscon_cesr_acc_tmr_clk_en (0x0100) 
-#define u300_syscon_cesr_app_tmr_clk_en (0x0080) -#define u300_syscon_cesr_keypad_clk_en (0x0040) -#define u300_syscon_cesr_gpio_clk_en (0x0010) -#define u300_syscon_cesr_eh_clk_en (0x0008) -#define u300_syscon_cesr_btr_clk_en (0x0004) -#define u300_syscon_cesr_uart_clk_en (0x0002) -#define u300_syscon_cesr_slow_bridge_clk_en (0x0001) -/* clock enable for fast peripherals 16bit (r/w) */ -#define u300_syscon_cefr (0x0024) -#define u300_syscon_cefr_uart1_clk_en (0x0200) -#define u300_syscon_cefr_i2s1_core_clk_en (0x0100) -#define u300_syscon_cefr_i2s0_core_clk_en (0x0080) -#define u300_syscon_cefr_spi_clk_en (0x0040) -#define u300_syscon_cefr_mmc_clk_en (0x0020) -#define u300_syscon_cefr_i2s1_clk_en (0x0010) -#define u300_syscon_cefr_i2s0_clk_en (0x0008) -#define u300_syscon_cefr_i2c1_clk_en (0x0004) -#define u300_syscon_cefr_i2c0_clk_en (0x0002) -#define u300_syscon_cefr_fast_bridge_clk_en (0x0001) -/* clock enable for the rest of the peripherals 16bit (r/w) */ -#define u300_syscon_cerr (0x0028) -#define u300_syscon_cerr_cds_clk_en (0x2000) -#define u300_syscon_cerr_isp_clk_en (0x1000) -#define u300_syscon_cerr_mspro_clk_en (0x0800) -#define u300_syscon_cerr_ahb_subsys_bridge_clk_en (0x0400) -#define u300_syscon_cerr_semi_clk_en (0x0200) -#define u300_syscon_cerr_xgam_clk_en (0x0100) -#define u300_syscon_cerr_video_enc_clk_en (0x0080) -#define u300_syscon_cerr_nandif_clk_en (0x0040) -#define u300_syscon_cerr_emif_clk_en (0x0020) -#define u300_syscon_cerr_dmac_clk_en (0x0010) -#define u300_syscon_cerr_cpu_clk_en (0x0008) -#define u300_syscon_cerr_apex_clk_en (0x0004) -#define u300_syscon_cerr_ahb_clk_en (0x0002) -#define u300_syscon_cerr_aaif_clk_en (0x0001) -/* single block clock enable 16bit (-/w) */ -#define u300_syscon_sbcer (0x002c) -#define u300_syscon_sbcer_ppm_clk_en (0x0009) -#define u300_syscon_sbcer_acc_tmr_clk_en (0x0008) -#define u300_syscon_sbcer_app_tmr_clk_en (0x0007) -#define u300_syscon_sbcer_keypad_clk_en (0x0006) -#define 
u300_syscon_sbcer_gpio_clk_en (0x0004) -#define u300_syscon_sbcer_eh_clk_en (0x0003) -#define u300_syscon_sbcer_btr_clk_en (0x0002) -#define u300_syscon_sbcer_uart_clk_en (0x0001) -#define u300_syscon_sbcer_slow_bridge_clk_en (0x0000) -#define u300_syscon_sbcer_uart1_clk_en (0x0019) -#define u300_syscon_sbcer_i2s1_core_clk_en (0x0018) -#define u300_syscon_sbcer_i2s0_core_clk_en (0x0017) -#define u300_syscon_sbcer_spi_clk_en (0x0016) -#define u300_syscon_sbcer_mmc_clk_en (0x0015) -#define u300_syscon_sbcer_i2s1_clk_en (0x0014) -#define u300_syscon_sbcer_i2s0_clk_en (0x0013) -#define u300_syscon_sbcer_i2c1_clk_en (0x0012) -#define u300_syscon_sbcer_i2c0_clk_en (0x0011) -#define u300_syscon_sbcer_fast_bridge_clk_en (0x0010) -#define u300_syscon_sbcer_cds_clk_en (0x002d) -#define u300_syscon_sbcer_isp_clk_en (0x002c) -#define u300_syscon_sbcer_mspro_clk_en (0x002b) -#define u300_syscon_sbcer_ahb_subsys_bridge_clk_en (0x002a) -#define u300_syscon_sbcer_semi_clk_en (0x0029) -#define u300_syscon_sbcer_xgam_clk_en (0x0028) -#define u300_syscon_sbcer_video_enc_clk_en (0x0027) -#define u300_syscon_sbcer_nandif_clk_en (0x0026) -#define u300_syscon_sbcer_emif_clk_en (0x0025) -#define u300_syscon_sbcer_dmac_clk_en (0x0024) -#define u300_syscon_sbcer_cpu_clk_en (0x0023) -#define u300_syscon_sbcer_apex_clk_en (0x0022) -#define u300_syscon_sbcer_ahb_clk_en (0x0021) -#define u300_syscon_sbcer_aaif_clk_en (0x0020) -/* single block clock disable 16bit (-/w) */ -#define u300_syscon_sbcdr (0x0030) -/* same values as above for sbcer */ -/* clock force slow peripherals 16bit (r/w) */ -#define u300_syscon_cfsr (0x003c) -#define u300_syscon_cfsr_ppm_clk_force_en (0x0200) -#define u300_syscon_cfsr_acc_tmr_clk_force_en (0x0100) -#define u300_syscon_cfsr_app_tmr_clk_force_en (0x0080) -#define u300_syscon_cfsr_keypad_clk_force_en (0x0020) -#define u300_syscon_cfsr_gpio_clk_force_en (0x0010) -#define u300_syscon_cfsr_eh_clk_force_en (0x0008) -#define u300_syscon_cfsr_btr_clk_force_en (0x0004) 
-#define u300_syscon_cfsr_uart_clk_force_en (0x0002) -#define u300_syscon_cfsr_slow_bridge_clk_force_en (0x0001) -/* clock force fast peripherals 16bit (r/w) */ -#define u300_syscon_cffr (0x40) -/* values not defined. define if you want to use them. */ -/* clock force the rest of the peripherals 16bit (r/w) */ -#define u300_syscon_cfrr (0x44) -#define u300_syscon_cfrr_cds_clk_force_en (0x2000) -#define u300_syscon_cfrr_isp_clk_force_en (0x1000) -#define u300_syscon_cfrr_mspro_clk_force_en (0x0800) -#define u300_syscon_cfrr_ahb_subsys_bridge_clk_force_en (0x0400) -#define u300_syscon_cfrr_semi_clk_force_en (0x0200) -#define u300_syscon_cfrr_xgam_clk_force_en (0x0100) -#define u300_syscon_cfrr_video_enc_clk_force_en (0x0080) -#define u300_syscon_cfrr_nandif_clk_force_en (0x0040) -#define u300_syscon_cfrr_emif_clk_force_en (0x0020) -#define u300_syscon_cfrr_dmac_clk_force_en (0x0010) -#define u300_syscon_cfrr_cpu_clk_force_en (0x0008) -#define u300_syscon_cfrr_apex_clk_force_en (0x0004) -#define u300_syscon_cfrr_ahb_clk_force_en (0x0002) -#define u300_syscon_cfrr_aaif_clk_force_en (0x0001) -/* pll208 frequency control 16bit (r/w) */ -#define u300_syscon_pfcr (0x48) -#define u300_syscon_pfcr_dpll_mult_num (0x000f) -/* power management control 16bit (r/w) */ -#define u300_syscon_pmcr (0x50) -#define u300_syscon_pmcr_dcon_enable (0x0002) -#define u300_syscon_pmcr_pwr_mgnt_enable (0x0001) -/* reset out 16bit (r/w) */ -#define u300_syscon_rcr (0x6c) -#define u300_syscon_rcr_resout0_rst_n_disable (0x0001) -/* emif slew rate control 16bit (r/w) */ -#define u300_syscon_srclr (0x70) -#define u300_syscon_srclr_mask (0x03ff) -#define u300_syscon_srclr_value (0x03ff) -#define u300_syscon_srclr_emif_1_slrc_5_b (0x0200) -#define u300_syscon_srclr_emif_1_slrc_5_a (0x0100) -#define u300_syscon_srclr_emif_1_slrc_4_b (0x0080) -#define u300_syscon_srclr_emif_1_slrc_4_a (0x0040) -#define u300_syscon_srclr_emif_1_slrc_3_b (0x0020) -#define u300_syscon_srclr_emif_1_slrc_3_a (0x0010) 
-#define u300_syscon_srclr_emif_1_slrc_2_b (0x0008) -#define u300_syscon_srclr_emif_1_slrc_2_a (0x0004) -#define u300_syscon_srclr_emif_1_slrc_1_b (0x0002) -#define u300_syscon_srclr_emif_1_slrc_1_a (0x0001) -/* emif clock control register 16bit (r/w) */ -#define u300_syscon_eccr (0x0078) -#define u300_syscon_eccr_mask (0x000f) -#define u300_syscon_eccr_emif_1_static_clk_en_n_disable (0x0008) -#define u300_syscon_eccr_emif_1_ret_out_clk_en_n_disable (0x0004) -#define u300_syscon_eccr_emif_memclk_ret_en_n_disable (0x0002) -#define u300_syscon_eccr_emif_sdrclk_ret_en_n_disable (0x0001) -/* mmc/mspro frequency divider register 0 16bit (r/w) */ -#define u300_syscon_mmf0r (0x90) -#define u300_syscon_mmf0r_mask (0x00ff) -#define u300_syscon_mmf0r_freq_0_high_mask (0x00f0) -#define u300_syscon_mmf0r_freq_0_low_mask (0x000f) -/* mmc/mspro frequency divider register 1 16bit (r/w) */ -#define u300_syscon_mmf1r (0x94) -#define u300_syscon_mmf1r_mask (0x00ff) -#define u300_syscon_mmf1r_freq_1_high_mask (0x00f0) -#define u300_syscon_mmf1r_freq_1_low_mask (0x000f) -/* clock control for the mmc and mspro blocks 16bit (r/w) */ -#define u300_syscon_mmcr (0x9c) -#define u300_syscon_mmcr_mask (0x0003) -#define u300_syscon_mmcr_mmc_fb_clk_sel_enable (0x0002) -#define u300_syscon_mmcr_mspro_freqsel_enable (0x0001) -/* sys_0_clk_control first clock control 16bit (r/w) */ -#define u300_syscon_s0ccr (0x120) -#define u300_syscon_s0ccr_field_mask (0x43ff) -#define u300_syscon_s0ccr_clock_req (0x4000) -#define u300_syscon_s0ccr_clock_req_monitor (0x2000) -#define u300_syscon_s0ccr_clock_inv (0x0200) -#define u300_syscon_s0ccr_clock_freq_mask (0x01e0) -#define u300_syscon_s0ccr_clock_select_mask (0x001e) -#define u300_syscon_s0ccr_clock_enable (0x0001) -#define u300_syscon_s0ccr_sel_mclk (0x8 << 1) -#define u300_syscon_s0ccr_sel_acc_fsm_clk (0xa << 1) -#define u300_syscon_s0ccr_sel_pll60_48_clk (0xc << 1) -#define u300_syscon_s0ccr_sel_pll60_60_clk (0xd << 1) -#define 
u300_syscon_s0ccr_sel_acc_pll208_clk (0xe << 1) -#define u300_syscon_s0ccr_sel_app_pll13_clk (0x0 << 1) -#define u300_syscon_s0ccr_sel_app_fsm_clk (0x2 << 1) -#define u300_syscon_s0ccr_sel_rtc_clk (0x4 << 1) -#define u300_syscon_s0ccr_sel_app_pll208_clk (0x6 << 1) -/* sys_1_clk_control second clock control 16 bit (r/w) */ -#define u300_syscon_s1ccr (0x124) -#define u300_syscon_s1ccr_field_mask (0x43ff) -#define u300_syscon_s1ccr_clock_req (0x4000) -#define u300_syscon_s1ccr_clock_req_monitor (0x2000) -#define u300_syscon_s1ccr_clock_inv (0x0200) -#define u300_syscon_s1ccr_clock_freq_mask (0x01e0) -#define u300_syscon_s1ccr_clock_select_mask (0x001e) -#define u300_syscon_s1ccr_clock_enable (0x0001) -#define u300_syscon_s1ccr_sel_mclk (0x8 << 1) -#define u300_syscon_s1ccr_sel_acc_fsm_clk (0xa << 1) -#define u300_syscon_s1ccr_sel_pll60_48_clk (0xc << 1) -#define u300_syscon_s1ccr_sel_pll60_60_clk (0xd << 1) -#define u300_syscon_s1ccr_sel_acc_pll208_clk (0xe << 1) -#define u300_syscon_s1ccr_sel_acc_pll13_clk (0x0 << 1) -#define u300_syscon_s1ccr_sel_app_fsm_clk (0x2 << 1) -#define u300_syscon_s1ccr_sel_rtc_clk (0x4 << 1) -#define u300_syscon_s1ccr_sel_app_pll208_clk (0x6 << 1) -/* sys_2_clk_control third clock control 16 bit (r/w) */ -#define u300_syscon_s2ccr (0x128) -#define u300_syscon_s2ccr_field_mask (0xc3ff) -#define u300_syscon_s2ccr_clk_steal (0x8000) -#define u300_syscon_s2ccr_clock_req (0x4000) -#define u300_syscon_s2ccr_clock_req_monitor (0x2000) -#define u300_syscon_s2ccr_clock_inv (0x0200) -#define u300_syscon_s2ccr_clock_freq_mask (0x01e0) -#define u300_syscon_s2ccr_clock_select_mask (0x001e) -#define u300_syscon_s2ccr_clock_enable (0x0001) -#define u300_syscon_s2ccr_sel_mclk (0x8 << 1) -#define u300_syscon_s2ccr_sel_acc_fsm_clk (0xa << 1) -#define u300_syscon_s2ccr_sel_pll60_48_clk (0xc << 1) -#define u300_syscon_s2ccr_sel_pll60_60_clk (0xd << 1) -#define u300_syscon_s2ccr_sel_acc_pll208_clk (0xe << 1) -#define u300_syscon_s2ccr_sel_acc_pll13_clk (0x0 << 
1) -#define u300_syscon_s2ccr_sel_app_fsm_clk (0x2 << 1) -#define u300_syscon_s2ccr_sel_rtc_clk (0x4 << 1) -#define u300_syscon_s2ccr_sel_app_pll208_clk (0x6 << 1) -/* sc_pll_irq_control 16bit (r/w) */ -#define u300_syscon_picr (0x0130) -#define u300_syscon_picr_mask (0x00ff) -#define u300_syscon_picr_force_pll208_lock_low_enable (0x0080) -#define u300_syscon_picr_force_pll208_lock_high_enable (0x0040) -#define u300_syscon_picr_force_pll13_lock_low_enable (0x0020) -#define u300_syscon_picr_force_pll13_lock_high_enable (0x0010) -#define u300_syscon_picr_irqmask_pll13_unlock_enable (0x0008) -#define u300_syscon_picr_irqmask_pll13_lock_enable (0x0004) -#define u300_syscon_picr_irqmask_pll208_unlock_enable (0x0002) -#define u300_syscon_picr_irqmask_pll208_lock_enable (0x0001) -/* sc_pll_irq_status 16 bit (r/-) */ -#define u300_syscon_pisr (0x0134) -#define u300_syscon_pisr_mask (0x000f) -#define u300_syscon_pisr_pll13_unlock_ind (0x0008) -#define u300_syscon_pisr_pll13_lock_ind (0x0004) -#define u300_syscon_pisr_pll208_unlock_ind (0x0002) -#define u300_syscon_pisr_pll208_lock_ind (0x0001) -/* sc_pll_irq_clear 16 bit (-/w) */ -#define u300_syscon_piclr (0x0138) -#define u300_syscon_piclr_mask (0x000f) -#define u300_syscon_piclr_rwmask (0x0000) -#define u300_syscon_piclr_pll13_unlock_sc (0x0008) -#define u300_syscon_piclr_pll13_lock_sc (0x0004) -#define u300_syscon_piclr_pll208_unlock_sc (0x0002) -#define u300_syscon_piclr_pll208_lock_sc (0x0001) -/* clock activity observability register 0 */ -#define u300_syscon_c0oar (0x140) -#define u300_syscon_c0oar_mask (0xffff) -#define u300_syscon_c0oar_value (0xffff) -#define u300_syscon_c0oar_bt_h_clk (0x8000) -#define u300_syscon_c0oar_aspb_p_clk (0x4000) -#define u300_syscon_c0oar_app_semi_h_clk (0x2000) -#define u300_syscon_c0oar_app_semi_clk (0x1000) -#define u300_syscon_c0oar_app_mmc_mspro_clk (0x0800) -#define u300_syscon_c0oar_app_i2s1_clk (0x0400) -#define u300_syscon_c0oar_app_i2s0_clk (0x0200) -#define 
u300_syscon_c0oar_app_cpu_clk (0x0100) -#define u300_syscon_c0oar_app_52_clk (0x0080) -#define u300_syscon_c0oar_app_208_clk (0x0040) -#define u300_syscon_c0oar_app_104_clk (0x0020) -#define u300_syscon_c0oar_apex_clk (0x0010) -#define u300_syscon_c0oar_ahpb_m_h_clk (0x0008) -#define u300_syscon_c0oar_ahb_clk (0x0004) -#define u300_syscon_c0oar_afpb_p_clk (0x0002) -#define u300_syscon_c0oar_aaif_clk (0x0001) -/* clock activity observability register 1 */ -#define u300_syscon_c1oar (0x144) -#define u300_syscon_c1oar_mask (0x3ffe) -#define u300_syscon_c1oar_value (0x3ffe) -#define u300_syscon_c1oar_nfif_f_clk (0x2000) -#define u300_syscon_c1oar_mspro_clk (0x1000) -#define u300_syscon_c1oar_mmc_p_clk (0x0800) -#define u300_syscon_c1oar_mmc_clk (0x0400) -#define u300_syscon_c1oar_kp_p_clk (0x0200) -#define u300_syscon_c1oar_i2c1_p_clk (0x0100) -#define u300_syscon_c1oar_i2c0_p_clk (0x0080) -#define u300_syscon_c1oar_gpio_clk (0x0040) -#define u300_syscon_c1oar_emif_mpmc_clk (0x0020) -#define u300_syscon_c1oar_emif_h_clk (0x0010) -#define u300_syscon_c1oar_evhist_clk (0x0008) -#define u300_syscon_c1oar_ppm_clk (0x0004) -#define u300_syscon_c1oar_dma_clk (0x0002) -/* clock activity observability register 2 */ -#define u300_syscon_c2oar (0x148) -#define u300_syscon_c2oar_mask (0x0fff) -#define u300_syscon_c2oar_value (0x0fff) -#define u300_syscon_c2oar_xgam_cdi_clk (0x0800) -#define u300_syscon_c2oar_xgam_clk (0x0400) -#define u300_syscon_c2oar_vc_h_clk (0x0200) -#define u300_syscon_c2oar_vc_clk (0x0100) -#define u300_syscon_c2oar_ua_p_clk (0x0080) -#define u300_syscon_c2oar_tmr1_clk (0x0040) -#define u300_syscon_c2oar_tmr0_clk (0x0020) -#define u300_syscon_c2oar_spi_p_clk (0x0010) -#define u300_syscon_c2oar_pcm_i2s1_core_clk (0x0008) -#define u300_syscon_c2oar_pcm_i2s1_clk (0x0004) -#define u300_syscon_c2oar_pcm_i2s0_core_clk (0x0002) -#define u300_syscon_c2oar_pcm_i2s0_clk (0x0001) - - -/* - * the clocking hierarchy currently looks like this. 
- * note: the idea is not to show how the clocks are routed on the chip! - * the ideas is to show dependencies, so a clock higher up in the - * hierarchy has to be on in order for another clock to be on. now, - * both cpu and dma can actually be on top of the hierarchy, and that - * is not modeled currently. instead we have the backbone amba bus on - * top. this bus cannot be programmed in any way but conceptually it - * needs to be active for the bridges and devices to transport data. - * - * please be aware that a few clocks are hw controlled, which mean that - * the hw itself can turn on/off or change the rate of the clock when - * needed! - * - * amba bus - * | - * +- cpu - * +- fsmc nandif nand flash interface - * +- semi shared memory interface - * +- isp image signal processor (u335 only) - * +- cds (u335 only) - * +- dma direct memory access controller - * +- aaif app/acc interface (mobile scalable link, msl) - * +- apex - * +- video_enc ave2/3 video encoder - * +- xgam graphics accelerator controller - * +- ahb - * | - * +- ahb:0 ahb bridge - * | | - * | +- ahb:1 intcon interrupt controller - * | +- ahb:3 mspro memory stick pro controller - * | +- ahb:4 emif external memory interface - * | - * +- fast:0 fast bridge - * | | - * | +- fast:1 mmcsd mmc/sd card reader controller - * | +- fast:2 i2s0 pcm i2s channel 0 controller - * | +- fast:3 i2s1 pcm i2s channel 1 controller - * | +- fast:4 i2c0 i2c channel 0 controller - * | +- fast:5 i2c1 i2c channel 1 controller - * | +- fast:6 spi spi controller - * | +- fast:7 uart1 secondary uart (u335 only) - * | - * +- slow:0 slow bridge - * | - * +- slow:1 syscon (not possible to control) - * +- slow:2 wdog watchdog - * +- slow:3 uart0 primary uart - * +- slow:4 timer_app application timer - used in linux - * +- slow:5 keypad controller - * +- slow:6 gpio controller - * +- slow:7 rtc controller - * +- slow:8 bt bus tracer (not used currently) - * +- slow:9 eh event handler (not used currently) - * +- slow:a timer_acc 
access style timer (not used currently) - * +- slow:b ppm (u335 only, what is that?) - */ - -/* global syscon virtual base */ -static void __iomem *syscon_vbase; - -/** - * struct clk_syscon - u300 syscon clock - * @hw: corresponding clock hardware entry - * @hw_ctrld: whether this clock is hardware controlled (for refcount etc) - * and does not need any magic pokes to be enabled/disabled - * @reset: state holder, whether this block's reset line is asserted or not - * @res_reg: reset line enable/disable flag register - * @res_bit: bit for resetting or taking this consumer out of reset - * @en_reg: clock line enable/disable flag register - * @en_bit: bit for enabling/disabling this consumer clock line - * @clk_val: magic value to poke in the register to enable/disable - * this one clock - */ -struct clk_syscon { - struct clk_hw hw; - bool hw_ctrld; - bool reset; - void __iomem *res_reg; - u8 res_bit; - void __iomem *en_reg; - u8 en_bit; - u16 clk_val; -}; - -#define to_syscon(_hw) container_of(_hw, struct clk_syscon, hw) - -static define_spinlock(syscon_resetreg_lock); - -/* - * reset control functions. we remember if a block has been - * taken out of reset and don't remove the reset assertion again - * and vice versa. currently we only remove resets so the - * enablement function is defined out. 
- */ -static void syscon_block_reset_enable(struct clk_syscon *sclk) -{ - unsigned long iflags; - u16 val; - - /* not all blocks support resetting */ - if (!sclk->res_reg) - return; - spin_lock_irqsave(&syscon_resetreg_lock, iflags); - val = readw(sclk->res_reg); - val |= bit(sclk->res_bit); - writew(val, sclk->res_reg); - spin_unlock_irqrestore(&syscon_resetreg_lock, iflags); - sclk->reset = true; -} - -static void syscon_block_reset_disable(struct clk_syscon *sclk) -{ - unsigned long iflags; - u16 val; - - /* not all blocks support resetting */ - if (!sclk->res_reg) - return; - spin_lock_irqsave(&syscon_resetreg_lock, iflags); - val = readw(sclk->res_reg); - val &= ~bit(sclk->res_bit); - writew(val, sclk->res_reg); - spin_unlock_irqrestore(&syscon_resetreg_lock, iflags); - sclk->reset = false; -} - -static int syscon_clk_prepare(struct clk_hw *hw) -{ - struct clk_syscon *sclk = to_syscon(hw); - - /* if the block is in reset, bring it out */ - if (sclk->reset) - syscon_block_reset_disable(sclk); - return 0; -} - -static void syscon_clk_unprepare(struct clk_hw *hw) -{ - struct clk_syscon *sclk = to_syscon(hw); - - /* please don't force the console into reset */ - if (sclk->clk_val == u300_syscon_sbcer_uart_clk_en) - return; - /* when unpreparing, force block into reset */ - if (!sclk->reset) - syscon_block_reset_enable(sclk); -} - -static int syscon_clk_enable(struct clk_hw *hw) -{ - struct clk_syscon *sclk = to_syscon(hw); - - /* don't touch the hardware controlled clocks */ - if (sclk->hw_ctrld) - return 0; - /* these cannot be controlled */ - if (sclk->clk_val == 0xffffu) - return 0; - - writew(sclk->clk_val, syscon_vbase + u300_syscon_sbcer); - return 0; -} - -static void syscon_clk_disable(struct clk_hw *hw) -{ - struct clk_syscon *sclk = to_syscon(hw); - - /* don't touch the hardware controlled clocks */ - if (sclk->hw_ctrld) - return; - if (sclk->clk_val == 0xffffu) - return; - /* please don't disable the console port */ - if (sclk->clk_val == 
u300_syscon_sbcer_uart_clk_en) - return; - - writew(sclk->clk_val, syscon_vbase + u300_syscon_sbcdr); -} - -static int syscon_clk_is_enabled(struct clk_hw *hw) -{ - struct clk_syscon *sclk = to_syscon(hw); - u16 val; - - /* if no enable register defined, it's always-on */ - if (!sclk->en_reg) - return 1; - - val = readw(sclk->en_reg); - val &= bit(sclk->en_bit); - - return val ? 1 : 0; -} - -static u16 syscon_get_perf(void) -{ - u16 val; - - val = readw(syscon_vbase + u300_syscon_ccr); - val &= u300_syscon_ccr_clking_performance_mask; - return val; -} - -static unsigned long -syscon_clk_recalc_rate(struct clk_hw *hw, - unsigned long parent_rate) -{ - struct clk_syscon *sclk = to_syscon(hw); - u16 perf = syscon_get_perf(); - - switch (sclk->clk_val) { - case u300_syscon_sbcer_fast_bridge_clk_en: - case u300_syscon_sbcer_i2c0_clk_en: - case u300_syscon_sbcer_i2c1_clk_en: - case u300_syscon_sbcer_mmc_clk_en: - case u300_syscon_sbcer_spi_clk_en: - /* the fast clocks have one progression */ - switch (perf) { - case u300_syscon_ccr_clking_performance_low_power: - case u300_syscon_ccr_clking_performance_low: - return 13000000; - default: - return parent_rate; /* 26 mhz */ - } - case u300_syscon_sbcer_dmac_clk_en: - case u300_syscon_sbcer_nandif_clk_en: - case u300_syscon_sbcer_xgam_clk_en: - /* amba interconnect peripherals */ - switch (perf) { - case u300_syscon_ccr_clking_performance_low_power: - case u300_syscon_ccr_clking_performance_low: - return 6500000; - case u300_syscon_ccr_clking_performance_intermediate: - return 26000000; - default: - return parent_rate; /* 52 mhz */ - } - case u300_syscon_sbcer_semi_clk_en: - case u300_syscon_sbcer_emif_clk_en: - /* emif speeds */ - switch (perf) { - case u300_syscon_ccr_clking_performance_low_power: - case u300_syscon_ccr_clking_performance_low: - return 13000000; - case u300_syscon_ccr_clking_performance_intermediate: - return 52000000; - default: - return 104000000; - } - case u300_syscon_sbcer_cpu_clk_en: - /* and the 
fast cpu clock */ - switch (perf) { - case u300_syscon_ccr_clking_performance_low_power: - case u300_syscon_ccr_clking_performance_low: - return 13000000; - case u300_syscon_ccr_clking_performance_intermediate: - return 52000000; - case u300_syscon_ccr_clking_performance_high: - return 104000000; - default: - return parent_rate; /* 208 mhz */ - } - default: - /* - * the slow clocks and default just inherit the rate of - * their parent (typically pll13 13 mhz). - */ - return parent_rate; - } -} - -static long -syscon_clk_round_rate(struct clk_hw *hw, unsigned long rate, - unsigned long *prate) -{ - struct clk_syscon *sclk = to_syscon(hw); - - if (sclk->clk_val != u300_syscon_sbcer_cpu_clk_en) - return *prate; - /* we really only support setting the rate of the cpu clock */ - if (rate <= 13000000) - return 13000000; - if (rate <= 52000000) - return 52000000; - if (rate <= 104000000) - return 104000000; - return 208000000; -} - -static int syscon_clk_set_rate(struct clk_hw *hw, unsigned long rate, - unsigned long parent_rate) -{ - struct clk_syscon *sclk = to_syscon(hw); - u16 val; - - /* we only support setting the rate of the cpu clock */ - if (sclk->clk_val != u300_syscon_sbcer_cpu_clk_en) - return -einval; - switch (rate) { - case 13000000: - val = u300_syscon_ccr_clking_performance_low_power; - break; - case 52000000: - val = u300_syscon_ccr_clking_performance_intermediate; - break; - case 104000000: - val = u300_syscon_ccr_clking_performance_high; - break; - case 208000000: - val = u300_syscon_ccr_clking_performance_best; - break; - default: - return -einval; - } - val |= readw(syscon_vbase + u300_syscon_ccr) & - ~u300_syscon_ccr_clking_performance_mask ; - writew(val, syscon_vbase + u300_syscon_ccr); - return 0; -} - -static const struct clk_ops syscon_clk_ops = { - .prepare = syscon_clk_prepare, - .unprepare = syscon_clk_unprepare, - .enable = syscon_clk_enable, - .disable = syscon_clk_disable, - .is_enabled = syscon_clk_is_enabled, - .recalc_rate = 
syscon_clk_recalc_rate, - .round_rate = syscon_clk_round_rate, - .set_rate = syscon_clk_set_rate, -}; - -static struct clk_hw * __init -syscon_clk_register(struct device *dev, const char *name, - const char *parent_name, unsigned long flags, - bool hw_ctrld, - void __iomem *res_reg, u8 res_bit, - void __iomem *en_reg, u8 en_bit, - u16 clk_val) -{ - struct clk_hw *hw; - struct clk_syscon *sclk; - struct clk_init_data init; - int ret; - - sclk = kzalloc(sizeof(*sclk), gfp_kernel); - if (!sclk) - return err_ptr(-enomem); - - init.name = name; - init.ops = &syscon_clk_ops; - init.flags = flags; - init.parent_names = (parent_name ? &parent_name : null); - init.num_parents = (parent_name ? 1 : 0); - sclk->hw.init = &init; - sclk->hw_ctrld = hw_ctrld; - /* assume the block is in reset at registration */ - sclk->reset = true; - sclk->res_reg = res_reg; - sclk->res_bit = res_bit; - sclk->en_reg = en_reg; - sclk->en_bit = en_bit; - sclk->clk_val = clk_val; - - hw = &sclk->hw; - ret = clk_hw_register(dev, hw); - if (ret) { - kfree(sclk); - hw = err_ptr(ret); - } - - return hw; -} - -#define u300_clk_type_slow 0 -#define u300_clk_type_fast 1 -#define u300_clk_type_rest 2 - -/** - * struct u300_clock - defines the bits and pieces for a certain clock - * @type: the clock type, slow fast or rest - * @id: the bit in the slow/fast/rest register for this clock - * @hw_ctrld: whether the clock is hardware controlled - * @clk_val: a value to poke in the one-write enable/disable registers - */ -struct u300_clock { - u8 type; - u8 id; - bool hw_ctrld; - u16 clk_val; -}; - -static struct u300_clock const u300_clk_lookup[] __initconst = { - { - .type = u300_clk_type_rest, - .id = 3, - .hw_ctrld = true, - .clk_val = u300_syscon_sbcer_cpu_clk_en, - }, - { - .type = u300_clk_type_rest, - .id = 4, - .hw_ctrld = true, - .clk_val = u300_syscon_sbcer_dmac_clk_en, - }, - { - .type = u300_clk_type_rest, - .id = 5, - .hw_ctrld = false, - .clk_val = u300_syscon_sbcer_emif_clk_en, - }, - { - .type = 
u300_clk_type_rest, - .id = 6, - .hw_ctrld = false, - .clk_val = u300_syscon_sbcer_nandif_clk_en, - }, - { - .type = u300_clk_type_rest, - .id = 8, - .hw_ctrld = true, - .clk_val = u300_syscon_sbcer_xgam_clk_en, - }, - { - .type = u300_clk_type_rest, - .id = 9, - .hw_ctrld = false, - .clk_val = u300_syscon_sbcer_semi_clk_en, - }, - { - .type = u300_clk_type_rest, - .id = 10, - .hw_ctrld = true, - .clk_val = u300_syscon_sbcer_ahb_subsys_bridge_clk_en, - }, - { - .type = u300_clk_type_rest, - .id = 12, - .hw_ctrld = false, - /* intcon: cannot be enabled, just taken out of reset */ - .clk_val = 0xffffu, - }, - { - .type = u300_clk_type_fast, - .id = 0, - .hw_ctrld = true, - .clk_val = u300_syscon_sbcer_fast_bridge_clk_en, - }, - { - .type = u300_clk_type_fast, - .id = 1, - .hw_ctrld = false, - .clk_val = u300_syscon_sbcer_i2c0_clk_en, - }, - { - .type = u300_clk_type_fast, - .id = 2, - .hw_ctrld = false, - .clk_val = u300_syscon_sbcer_i2c1_clk_en, - }, - { - .type = u300_clk_type_fast, - .id = 5, - .hw_ctrld = false, - .clk_val = u300_syscon_sbcer_mmc_clk_en, - }, - { - .type = u300_clk_type_fast, - .id = 6, - .hw_ctrld = false, - .clk_val = u300_syscon_sbcer_spi_clk_en, - }, - { - .type = u300_clk_type_slow, - .id = 0, - .hw_ctrld = true, - .clk_val = u300_syscon_sbcer_slow_bridge_clk_en, - }, - { - .type = u300_clk_type_slow, - .id = 1, - .hw_ctrld = false, - .clk_val = u300_syscon_sbcer_uart_clk_en, - }, - { - .type = u300_clk_type_slow, - .id = 4, - .hw_ctrld = false, - .clk_val = u300_syscon_sbcer_gpio_clk_en, - }, - { - .type = u300_clk_type_slow, - .id = 6, - .hw_ctrld = true, - /* no clock enable register bit */ - .clk_val = 0xffffu, - }, - { - .type = u300_clk_type_slow, - .id = 7, - .hw_ctrld = false, - .clk_val = u300_syscon_sbcer_app_tmr_clk_en, - }, - { - .type = u300_clk_type_slow, - .id = 8, - .hw_ctrld = false, - .clk_val = u300_syscon_sbcer_acc_tmr_clk_en, - }, -}; - -static void __init of_u300_syscon_clk_init(struct device_node *np) -{ - struct 
clk_hw *hw = err_ptr(-einval); - const char *clk_name = np->name; - const char *parent_name; - void __iomem *res_reg; - void __iomem *en_reg; - u32 clk_type; - u32 clk_id; - int i; - - if (of_property_read_u32(np, "clock-type", &clk_type)) { - pr_err("%s: syscon clock "%s" missing clock-type property ", - __func__, clk_name); - return; - } - if (of_property_read_u32(np, "clock-id", &clk_id)) { - pr_err("%s: syscon clock "%s" missing clock-id property ", - __func__, clk_name); - return; - } - parent_name = of_clk_get_parent_name(np, 0); - - switch (clk_type) { - case u300_clk_type_slow: - res_reg = syscon_vbase + u300_syscon_rsr; - en_reg = syscon_vbase + u300_syscon_cesr; - break; - case u300_clk_type_fast: - res_reg = syscon_vbase + u300_syscon_rfr; - en_reg = syscon_vbase + u300_syscon_cefr; - break; - case u300_clk_type_rest: - res_reg = syscon_vbase + u300_syscon_rrr; - en_reg = syscon_vbase + u300_syscon_cerr; - break; - default: - pr_err("unknown clock type %x specified ", clk_type); - return; - } - - for (i = 0; i < array_size(u300_clk_lookup); i++) { - const struct u300_clock *u3clk = &u300_clk_lookup[i]; - - if (u3clk->type == clk_type && u3clk->id == clk_id) - hw = syscon_clk_register(null, clk_name, parent_name, - 0, u3clk->hw_ctrld, - res_reg, u3clk->id, - en_reg, u3clk->id, - u3clk->clk_val); - } - - if (!is_err(hw)) { - of_clk_add_hw_provider(np, of_clk_hw_simple_get, hw); - - /* - * some few system clocks - device tree does not - * represent clocks without a corresponding device node. - * for now we add these three clocks here. 
- */ - if (clk_type == u300_clk_type_rest && clk_id == 5) - clk_hw_register_clkdev(hw, null, "pl172"); - if (clk_type == u300_clk_type_rest && clk_id == 9) - clk_hw_register_clkdev(hw, null, "semi"); - if (clk_type == u300_clk_type_rest && clk_id == 12) - clk_hw_register_clkdev(hw, null, "intcon"); - } -} - -/** - * struct clk_mclk - u300 mclk clock (mmc/sd clock) - * @hw: corresponding clock hardware entry - * @is_mspro: if this is the memory stick clock rather than mmc/sd - */ -struct clk_mclk { - struct clk_hw hw; - bool is_mspro; -}; - -#define to_mclk(_hw) container_of(_hw, struct clk_mclk, hw) - -static int mclk_clk_prepare(struct clk_hw *hw) -{ - struct clk_mclk *mclk = to_mclk(hw); - u16 val; - - /* the mmc and mspro clocks need some special set-up */ - if (!mclk->is_mspro) { - /* set default mmc clock divisor to 18.9 mhz */ - writew(0x0054u, syscon_vbase + u300_syscon_mmf0r); - val = readw(syscon_vbase + u300_syscon_mmcr); - /* disable the mmc feedback clock */ - val &= ~u300_syscon_mmcr_mmc_fb_clk_sel_enable; - /* disable mspro frequency */ - val &= ~u300_syscon_mmcr_mspro_freqsel_enable; - writew(val, syscon_vbase + u300_syscon_mmcr); - } else { - val = readw(syscon_vbase + u300_syscon_mmcr); - /* disable the mmc feedback clock */ - val &= ~u300_syscon_mmcr_mmc_fb_clk_sel_enable; - /* enable mspro frequency */ - val |= u300_syscon_mmcr_mspro_freqsel_enable; - writew(val, syscon_vbase + u300_syscon_mmcr); - } - - return 0; -} - -static unsigned long -mclk_clk_recalc_rate(struct clk_hw *hw, - unsigned long parent_rate) -{ - u16 perf = syscon_get_perf(); - - switch (perf) { - case u300_syscon_ccr_clking_performance_low_power: - /* - * here, the 208 mhz pll gets shut down and the always - * on 13 mhz pll used for rtc etc kicks into use - * instead. 
- */ - return 13000000; - case u300_syscon_ccr_clking_performance_low: - case u300_syscon_ccr_clking_performance_intermediate: - case u300_syscon_ccr_clking_performance_high: - case u300_syscon_ccr_clking_performance_best: - { - /* - * this clock is under program control. the register is - * divided in two nybbles, bit 7-4 gives cycles-1 to count - * high, bit 3-0 gives cycles-1 to count low. distribute - * these with no more than 1 cycle difference between - * low and high and add low and high to get the actual - * divisor. the base pll is 208 mhz. writing 0x00 will - * divide by 1 and 1 so the highest frequency possible - * is 104 mhz. - * - * e.g. 0x54 => - * f = 208 / ((5+1) + (4+1)) = 208 / 11 = 18.9 mhz - */ - u16 val = readw(syscon_vbase + u300_syscon_mmf0r) & - u300_syscon_mmf0r_mask; - switch (val) { - case 0x0054: - return 18900000; - case 0x0044: - return 20800000; - case 0x0043: - return 23100000; - case 0x0033: - return 26000000; - case 0x0032: - return 29700000; - case 0x0022: - return 34700000; - case 0x0021: - return 41600000; - case 0x0011: - return 52000000; - case 0x0000: - return 104000000; - default: - break; - } - } - default: - break; - } - return parent_rate; -} - -static long -mclk_clk_round_rate(struct clk_hw *hw, unsigned long rate, - unsigned long *prate) -{ - if (rate <= 18900000) - return 18900000; - if (rate <= 20800000) - return 20800000; - if (rate <= 23100000) - return 23100000; - if (rate <= 26000000) - return 26000000; - if (rate <= 29700000) - return 29700000; - if (rate <= 34700000) - return 34700000; - if (rate <= 41600000) - return 41600000; - /* highest rate */ - return 52000000; -} - -static int mclk_clk_set_rate(struct clk_hw *hw, unsigned long rate, - unsigned long parent_rate) -{ - u16 val; - u16 reg; - - switch (rate) { - case 18900000: - val = 0x0054; - break; - case 20800000: - val = 0x0044; - break; - case 23100000: - val = 0x0043; - break; - case 26000000: - val = 0x0033; - break; - case 29700000: - val = 0x0032; - 
break; - case 34700000: - val = 0x0022; - break; - case 41600000: - val = 0x0021; - break; - case 52000000: - val = 0x0011; - break; - case 104000000: - val = 0x0000; - break; - default: - return -einval; - } - - reg = readw(syscon_vbase + u300_syscon_mmf0r) & - ~u300_syscon_mmf0r_mask; - writew(reg | val, syscon_vbase + u300_syscon_mmf0r); - return 0; -} - -static const struct clk_ops mclk_ops = { - .prepare = mclk_clk_prepare, - .recalc_rate = mclk_clk_recalc_rate, - .round_rate = mclk_clk_round_rate, - .set_rate = mclk_clk_set_rate, -}; - -static struct clk_hw * __init -mclk_clk_register(struct device *dev, const char *name, - const char *parent_name, bool is_mspro) -{ - struct clk_hw *hw; - struct clk_mclk *mclk; - struct clk_init_data init; - int ret; - - mclk = kzalloc(sizeof(*mclk), gfp_kernel); - if (!mclk) - return err_ptr(-enomem); - - init.name = "mclk"; - init.ops = &mclk_ops; - init.flags = 0; - init.parent_names = (parent_name ? &parent_name : null); - init.num_parents = (parent_name ? 
1 : 0); - mclk->hw.init = &init; - mclk->is_mspro = is_mspro; - - hw = &mclk->hw; - ret = clk_hw_register(dev, hw); - if (ret) { - kfree(mclk); - hw = err_ptr(ret); - } - - return hw; -} - -static void __init of_u300_syscon_mclk_init(struct device_node *np) -{ - struct clk_hw *hw; - const char *clk_name = np->name; - const char *parent_name; - - parent_name = of_clk_get_parent_name(np, 0); - hw = mclk_clk_register(null, clk_name, parent_name, false); - if (!is_err(hw)) - of_clk_add_hw_provider(np, of_clk_hw_simple_get, hw); -} - -static const struct of_device_id u300_clk_match[] __initconst = { - { - .compatible = "fixed-clock", - .data = of_fixed_clk_setup, - }, - { - .compatible = "fixed-factor-clock", - .data = of_fixed_factor_clk_setup, - }, - { - .compatible = "stericsson,u300-syscon-clk", - .data = of_u300_syscon_clk_init, - }, - { - .compatible = "stericsson,u300-syscon-mclk", - .data = of_u300_syscon_mclk_init, - }, - {} -}; - - -void __init u300_clk_init(void __iomem *base) -{ - u16 val; - - syscon_vbase = base; - - /* set system to run at pll208, max performance, a known state. */ - val = readw(syscon_vbase + u300_syscon_ccr); - val &= ~u300_syscon_ccr_clking_performance_mask; - writew(val, syscon_vbase + u300_syscon_ccr); - /* wait for the pll208 to lock if not locked in yet */ - while (!(readw(syscon_vbase + u300_syscon_csr) & - u300_syscon_csr_pll208_lock_ind)); - - /* power management enable */ - val = readw(syscon_vbase + u300_syscon_pmcr); - val |= u300_syscon_pmcr_pwr_mgnt_enable; - writew(val, syscon_vbase + u300_syscon_pmcr); - - of_clk_init(u300_clk_match); -} diff --git a/include/linux/platform_data/clk-u300.h b/include/linux/platform_data/clk-u300.h --- a/include/linux/platform_data/clk-u300.h +++ /dev/null -void __init u300_clk_init(void __iomem *base);
|
Clock
|
ee7294ba49bf8559b560b21629ed8153082c25cf
|
arnd bergmann; linus walleij <linus.walleij@linaro.org>
|
drivers
|
clk
|
bindings, clock, platform_data
|
clk: remove zte zx driver
|
The ZTE ZX platform is getting removed, so this driver is no longer needed.
|
This release allows mapping a UID to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the ACRN hypervisor designed for embedded systems; initial Btrfs support for zoned devices, subpage block sizes and performance improvements; support for eager NFS writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; NAPI polling can be moved to a kernel thread; and support for non-blocking path lookups. As always, there are many other features, new drivers, improvements and fixes.
|
remove zte zx driver
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['txt', 'h', 'c', 'makefile']
| 9
| 0
| 2,691
|
--- diff --git a/documentation/devicetree/bindings/clock/zx296702-clk.txt b/documentation/devicetree/bindings/clock/zx296702-clk.txt --- a/documentation/devicetree/bindings/clock/zx296702-clk.txt +++ /dev/null -device tree clock bindings for zte zx296702 - -this binding uses the common clock binding[1]. - -[1] documentation/devicetree/bindings/clock/clock-bindings.txt - -required properties: -- compatible : shall be one of the following: - "zte,zx296702-topcrm-clk": - zx296702 top clock selection, divider and gating - - "zte,zx296702-lsp0crpm-clk" and - "zte,zx296702-lsp1crpm-clk": - zx296702 device level clock selection and gating - -- reg: address and length of the register set - -the clock consumer should specify the desired clock by having the clock -id in its "clocks" phandle cell. see include/dt-bindings/clock/zx296702-clock.h -for the full list of zx296702 clock ids. - - -topclk: topcrm@09800000 { - compatible = "zte,zx296702-topcrm-clk"; - reg = <0x09800000 0x1000>; - #clock-cells = <1>; -}; - -uart0: serial@09405000 { - compatible = "zte,zx296702-uart"; - reg = <0x09405000 0x1000>; - interrupts = <gic_spi 37 irq_type_level_high>; - clocks = <&lsp1clk zx296702_uart0_pclk>; -}; diff --git a/documentation/devicetree/bindings/clock/zx296718-clk.txt b/documentation/devicetree/bindings/clock/zx296718-clk.txt --- a/documentation/devicetree/bindings/clock/zx296718-clk.txt +++ /dev/null -device tree clock bindings for zte zx296718 - -this binding uses the common clock binding[1]. 
- -[1] documentation/devicetree/bindings/clock/clock-bindings.txt - -required properties: -- compatible : shall be one of the following: - "zte,zx296718-topcrm": - zx296718 top clock selection, divider and gating - - "zte,zx296718-lsp0crm" and - "zte,zx296718-lsp1crm": - zx296718 device level clock selection and gating - - "zte,zx296718-audiocrm": - zx296718 audio clock selection, divider and gating - -- reg: address and length of the register set - -the clock consumer should specify the desired clock by having the clock -id in its "clocks" phandle cell. see include/dt-bindings/clock/zx296718-clock.h -for the full list of zx296718 clock ids. - - -topclk: topcrm@1461000 { - compatible = "zte,zx296718-topcrm-clk"; - reg = <0x01461000 0x1000>; - #clock-cells = <1>; -}; - -usbphy0:usb-phy0 { - compatible = "zte,zx296718-usb-phy"; - #phy-cells = <0>; - clocks = <&topclk usb20_phy_clk>; - clock-names = "phyclk"; -}; diff --git a/drivers/clk/makefile b/drivers/clk/makefile --- a/drivers/clk/makefile +++ b/drivers/clk/makefile -obj-$(config_arch_zx) += zte/ diff --git a/drivers/clk/zte/makefile b/drivers/clk/zte/makefile --- a/drivers/clk/zte/makefile +++ /dev/null -# spdx-license-identifier: gpl-2.0-only -obj-y := clk.o -obj-$(config_soc_zx296702) += clk-zx296702.o -obj-$(config_arch_zx) += clk-zx296718.o diff --git a/drivers/clk/zte/clk-zx296702.c b/drivers/clk/zte/clk-zx296702.c --- a/drivers/clk/zte/clk-zx296702.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-only -/* - * copyright 2014 linaro ltd. - * copyright (c) 2014 zte corporation. 
- */ - -#include <linux/clk-provider.h> -#include <linux/of_address.h> -#include <dt-bindings/clock/zx296702-clock.h> -#include "clk.h" - -static define_spinlock(reg_lock); - -static void __iomem *topcrm_base; -static void __iomem *lsp0crpm_base; -static void __iomem *lsp1crpm_base; - -static struct clk *topclk[zx296702_topclk_end]; -static struct clk *lsp0clk[zx296702_lsp0clk_end]; -static struct clk *lsp1clk[zx296702_lsp1clk_end]; - -static struct clk_onecell_data topclk_data; -static struct clk_onecell_data lsp0clk_data; -static struct clk_onecell_data lsp1clk_data; - -#define clk_mux (topcrm_base + 0x04) -#define clk_div (topcrm_base + 0x08) -#define clk_en0 (topcrm_base + 0x0c) -#define clk_en1 (topcrm_base + 0x10) -#define vou_local_clken (topcrm_base + 0x68) -#define vou_local_clksel (topcrm_base + 0x70) -#define vou_local_div2_set (topcrm_base + 0x74) -#define clk_mux1 (topcrm_base + 0x8c) - -#define clk_sdmmc1 (lsp0crpm_base + 0x0c) -#define clk_gpio (lsp0crpm_base + 0x2c) -#define clk_spdif0 (lsp0crpm_base + 0x10) -#define spdif0_div (lsp0crpm_base + 0x14) -#define clk_i2s0 (lsp0crpm_base + 0x18) -#define i2s0_div (lsp0crpm_base + 0x1c) -#define clk_i2s1 (lsp0crpm_base + 0x20) -#define i2s1_div (lsp0crpm_base + 0x24) -#define clk_i2s2 (lsp0crpm_base + 0x34) -#define i2s2_div (lsp0crpm_base + 0x38) - -#define clk_uart0 (lsp1crpm_base + 0x20) -#define clk_uart1 (lsp1crpm_base + 0x24) -#define clk_sdmmc0 (lsp1crpm_base + 0x2c) -#define clk_spdif1 (lsp1crpm_base + 0x30) -#define spdif1_div (lsp1crpm_base + 0x34) - -static const struct zx_pll_config pll_a9_config[] = { - { .rate = 700000000, .cfg0 = 0x800405d1, .cfg1 = 0x04555555 }, - { .rate = 800000000, .cfg0 = 0x80040691, .cfg1 = 0x04aaaaaa }, - { .rate = 900000000, .cfg0 = 0x80040791, .cfg1 = 0x04000000 }, - { .rate = 1000000000, .cfg0 = 0x80040851, .cfg1 = 0x04555555 }, - { .rate = 1100000000, .cfg0 = 0x80040911, .cfg1 = 0x04aaaaaa }, - { .rate = 1200000000, .cfg0 = 0x80040a11, .cfg1 = 0x04000000 }, -}; - 
-static const struct clk_div_table main_hlk_div[] = { - { .val = 1, .div = 2, }, - { .val = 3, .div = 4, }, - { /* sentinel */ } -}; - -static const struct clk_div_table a9_as1_aclk_divider[] = { - { .val = 0, .div = 1, }, - { .val = 1, .div = 2, }, - { .val = 3, .div = 4, }, - { /* sentinel */ } -}; - -static const struct clk_div_table sec_wclk_divider[] = { - { .val = 0, .div = 1, }, - { .val = 1, .div = 2, }, - { .val = 3, .div = 4, }, - { .val = 5, .div = 6, }, - { .val = 7, .div = 8, }, - { /* sentinel */ } -}; - -static const char * const matrix_aclk_sel[] = { - "pll_mm0_198m", - "osc", - "clk_148m5", - "pll_lsp_104m", -}; - -static const char * const a9_wclk_sel[] = { - "pll_a9", - "osc", - "clk_500", - "clk_250", -}; - -static const char * const a9_as1_aclk_sel[] = { - "clk_250", - "osc", - "pll_mm0_396m", - "pll_mac_333m", -}; - -static const char * const a9_trace_clkin_sel[] = { - "clk_74m25", - "pll_mm1_108m", - "clk_125", - "clk_148m5", -}; - -static const char * const decppu_aclk_sel[] = { - "clk_250", - "pll_mm0_198m", - "pll_lsp_104m", - "pll_audio_294m912", -}; - -static const char * const vou_main_wclk_sel[] = { - "clk_148m5", - "clk_74m25", - "clk_27", - "pll_mm1_54m", -}; - -static const char * const vou_scaler_wclk_sel[] = { - "clk_250", - "pll_mac_333m", - "pll_audio_294m912", - "pll_mm0_198m", -}; - -static const char * const r2d_wclk_sel[] = { - "pll_audio_294m912", - "pll_mac_333m", - "pll_a9_350m", - "pll_mm0_396m", -}; - -static const char * const ddr_wclk_sel[] = { - "pll_mac_333m", - "pll_ddr_266m", - "pll_audio_294m912", - "pll_mm0_198m", -}; - -static const char * const nand_wclk_sel[] = { - "pll_lsp_104m", - "osc", -}; - -static const char * const lsp_26_wclk_sel[] = { - "pll_lsp_26m", - "osc", -}; - -static const char * const vl0_sel[] = { - "vou_main_channel_div", - "vou_aux_channel_div", -}; - -static const char * const hdmi_sel[] = { - "vou_main_channel_wclk", - "vou_aux_channel_wclk", -}; - -static const char * const 
sdmmc0_wclk_sel[] = { - "lsp1_104m_wclk", - "lsp1_26m_wclk", -}; - -static const char * const sdmmc1_wclk_sel[] = { - "lsp0_104m_wclk", - "lsp0_26m_wclk", -}; - -static const char * const uart_wclk_sel[] = { - "lsp1_104m_wclk", - "lsp1_26m_wclk", -}; - -static const char * const spdif0_wclk_sel[] = { - "lsp0_104m_wclk", - "lsp0_26m_wclk", -}; - -static const char * const spdif1_wclk_sel[] = { - "lsp1_104m_wclk", - "lsp1_26m_wclk", -}; - -static const char * const i2s_wclk_sel[] = { - "lsp0_104m_wclk", - "lsp0_26m_wclk", -}; - -static inline struct clk *zx_divtbl(const char *name, const char *parent, - void __iomem *reg, u8 shift, u8 width, - const struct clk_div_table *table) -{ - return clk_register_divider_table(null, name, parent, 0, reg, shift, - width, 0, table, ®_lock); -} - -static inline struct clk *zx_div(const char *name, const char *parent, - void __iomem *reg, u8 shift, u8 width) -{ - return clk_register_divider(null, name, parent, 0, - reg, shift, width, 0, ®_lock); -} - -static inline struct clk *zx_mux(const char *name, const char * const *parents, - int num_parents, void __iomem *reg, u8 shift, u8 width) -{ - return clk_register_mux(null, name, parents, num_parents, - 0, reg, shift, width, 0, ®_lock); -} - -static inline struct clk *zx_gate(const char *name, const char *parent, - void __iomem *reg, u8 shift) -{ - return clk_register_gate(null, name, parent, clk_ignore_unused, - reg, shift, clk_set_rate_parent, ®_lock); -} - -static void __init zx296702_top_clocks_init(struct device_node *np) -{ - struct clk **clk = topclk; - int i; - - topcrm_base = of_iomap(np, 0); - warn_on(!topcrm_base); - - clk[zx296702_osc] = - clk_register_fixed_rate(null, "osc", null, 0, 30000000); - clk[zx296702_pll_a9] = - clk_register_zx_pll("pll_a9", "osc", 0, topcrm_base - + 0x01c, pll_a9_config, - array_size(pll_a9_config), ®_lock); - - /* todo: pll_a9_350m look like changeble follow a9 pll */ - clk[zx296702_pll_a9_350m] = - clk_register_fixed_rate(null, "pll_a9_350m", 
"osc", 0, - 350000000); - clk[zx296702_pll_mac_1000m] = - clk_register_fixed_rate(null, "pll_mac_1000m", "osc", 0, - 1000000000); - clk[zx296702_pll_mac_333m] = - clk_register_fixed_rate(null, "pll_mac_333m", "osc", 0, - 333000000); - clk[zx296702_pll_mm0_1188m] = - clk_register_fixed_rate(null, "pll_mm0_1188m", "osc", 0, - 1188000000); - clk[zx296702_pll_mm0_396m] = - clk_register_fixed_rate(null, "pll_mm0_396m", "osc", 0, - 396000000); - clk[zx296702_pll_mm0_198m] = - clk_register_fixed_rate(null, "pll_mm0_198m", "osc", 0, - 198000000); - clk[zx296702_pll_mm1_108m] = - clk_register_fixed_rate(null, "pll_mm1_108m", "osc", 0, - 108000000); - clk[zx296702_pll_mm1_72m] = - clk_register_fixed_rate(null, "pll_mm1_72m", "osc", 0, - 72000000); - clk[zx296702_pll_mm1_54m] = - clk_register_fixed_rate(null, "pll_mm1_54m", "osc", 0, - 54000000); - clk[zx296702_pll_lsp_104m] = - clk_register_fixed_rate(null, "pll_lsp_104m", "osc", 0, - 104000000); - clk[zx296702_pll_lsp_26m] = - clk_register_fixed_rate(null, "pll_lsp_26m", "osc", 0, - 26000000); - clk[zx296702_pll_ddr_266m] = - clk_register_fixed_rate(null, "pll_ddr_266m", "osc", 0, - 266000000); - clk[zx296702_pll_audio_294m912] = - clk_register_fixed_rate(null, "pll_audio_294m912", "osc", 0, - 294912000); - - /* bus clock */ - clk[zx296702_matrix_aclk] = - zx_mux("matrix_aclk", matrix_aclk_sel, - array_size(matrix_aclk_sel), clk_mux, 2, 2); - clk[zx296702_main_hclk] = - zx_divtbl("main_hclk", "matrix_aclk", clk_div, 0, 2, - main_hlk_div); - clk[zx296702_main_pclk] = - zx_divtbl("main_pclk", "matrix_aclk", clk_div, 2, 2, - main_hlk_div); - - /* cpu clock */ - clk[zx296702_clk_500] = - clk_register_fixed_factor(null, "clk_500", "pll_mac_1000m", 0, - 1, 2); - clk[zx296702_clk_250] = - clk_register_fixed_factor(null, "clk_250", "pll_mac_1000m", 0, - 1, 4); - clk[zx296702_clk_125] = - clk_register_fixed_factor(null, "clk_125", "clk_250", 0, 1, 2); - clk[zx296702_clk_148m5] = - clk_register_fixed_factor(null, "clk_148m5", 
"pll_mm0_1188m", 0, - 1, 8); - clk[zx296702_clk_74m25] = - clk_register_fixed_factor(null, "clk_74m25", "pll_mm0_1188m", 0, - 1, 16); - clk[zx296702_a9_wclk] = - zx_mux("a9_wclk", a9_wclk_sel, array_size(a9_wclk_sel), clk_mux, - 0, 2); - clk[zx296702_a9_as1_aclk_mux] = - zx_mux("a9_as1_aclk_mux", a9_as1_aclk_sel, - array_size(a9_as1_aclk_sel), clk_mux, 4, 2); - clk[zx296702_a9_trace_clkin_mux] = - zx_mux("a9_trace_clkin_mux", a9_trace_clkin_sel, - array_size(a9_trace_clkin_sel), clk_mux1, 0, 2); - clk[zx296702_a9_as1_aclk_div] = - zx_divtbl("a9_as1_aclk_div", "a9_as1_aclk_mux", clk_div, 4, 2, - a9_as1_aclk_divider); - - /* multi-media clock */ - clk[zx296702_clk_2] = - clk_register_fixed_factor(null, "clk_2", "pll_mm1_72m", 0, - 1, 36); - clk[zx296702_clk_27] = - clk_register_fixed_factor(null, "clk_27", "pll_mm1_54m", 0, - 1, 2); - clk[zx296702_decppu_aclk_mux] = - zx_mux("decppu_aclk_mux", decppu_aclk_sel, - array_size(decppu_aclk_sel), clk_mux, 6, 2); - clk[zx296702_ppu_aclk_mux] = - zx_mux("ppu_aclk_mux", decppu_aclk_sel, - array_size(decppu_aclk_sel), clk_mux, 8, 2); - clk[zx296702_mali400_aclk_mux] = - zx_mux("mali400_aclk_mux", decppu_aclk_sel, - array_size(decppu_aclk_sel), clk_mux, 12, 2); - clk[zx296702_vou_aclk_mux] = - zx_mux("vou_aclk_mux", decppu_aclk_sel, - array_size(decppu_aclk_sel), clk_mux, 10, 2); - clk[zx296702_vou_main_wclk_mux] = - zx_mux("vou_main_wclk_mux", vou_main_wclk_sel, - array_size(vou_main_wclk_sel), clk_mux, 14, 2); - clk[zx296702_vou_aux_wclk_mux] = - zx_mux("vou_aux_wclk_mux", vou_main_wclk_sel, - array_size(vou_main_wclk_sel), clk_mux, 16, 2); - clk[zx296702_vou_scaler_wclk_mux] = - zx_mux("vou_scaler_wclk_mux", vou_scaler_wclk_sel, - array_size(vou_scaler_wclk_sel), clk_mux, - 18, 2); - clk[zx296702_r2d_aclk_mux] = - zx_mux("r2d_aclk_mux", decppu_aclk_sel, - array_size(decppu_aclk_sel), clk_mux, 20, 2); - clk[zx296702_r2d_wclk_mux] = - zx_mux("r2d_wclk_mux", r2d_wclk_sel, - array_size(r2d_wclk_sel), clk_mux, 22, 2); - - /* 
other clock */ - clk[zx296702_clk_50] = - clk_register_fixed_factor(null, "clk_50", "pll_mac_1000m", - 0, 1, 20); - clk[zx296702_clk_25] = - clk_register_fixed_factor(null, "clk_25", "pll_mac_1000m", - 0, 1, 40); - clk[zx296702_clk_12] = - clk_register_fixed_factor(null, "clk_12", "pll_mm1_72m", - 0, 1, 6); - clk[zx296702_clk_16m384] = - clk_register_fixed_factor(null, "clk_16m384", - "pll_audio_294m912", 0, 1, 18); - clk[zx296702_clk_32k768] = - clk_register_fixed_factor(null, "clk_32k768", "clk_16m384", - 0, 1, 500); - clk[zx296702_sec_wclk_div] = - zx_divtbl("sec_wclk_div", "pll_lsp_104m", clk_div, 6, 3, - sec_wclk_divider); - clk[zx296702_ddr_wclk_mux] = - zx_mux("ddr_wclk_mux", ddr_wclk_sel, - array_size(ddr_wclk_sel), clk_mux, 24, 2); - clk[zx296702_nand_wclk_mux] = - zx_mux("nand_wclk_mux", nand_wclk_sel, - array_size(nand_wclk_sel), clk_mux, 24, 2); - clk[zx296702_lsp_26_wclk_mux] = - zx_mux("lsp_26_wclk_mux", lsp_26_wclk_sel, - array_size(lsp_26_wclk_sel), clk_mux, 27, 1); - - /* gates */ - clk[zx296702_a9_as0_aclk] = - zx_gate("a9_as0_aclk", "matrix_aclk", clk_en0, 0); - clk[zx296702_a9_as1_aclk] = - zx_gate("a9_as1_aclk", "a9_as1_aclk_div", clk_en0, 1); - clk[zx296702_a9_trace_clkin] = - zx_gate("a9_trace_clkin", "a9_trace_clkin_mux", clk_en0, 2); - clk[zx296702_decppu_axi_m_aclk] = - zx_gate("decppu_axi_m_aclk", "decppu_aclk_mux", clk_en0, 3); - clk[zx296702_decppu_ahb_s_hclk] = - zx_gate("decppu_ahb_s_hclk", "main_hclk", clk_en0, 4); - clk[zx296702_ppu_axi_m_aclk] = - zx_gate("ppu_axi_m_aclk", "ppu_aclk_mux", clk_en0, 5); - clk[zx296702_ppu_ahb_s_hclk] = - zx_gate("ppu_ahb_s_hclk", "main_hclk", clk_en0, 6); - clk[zx296702_vou_axi_m_aclk] = - zx_gate("vou_axi_m_aclk", "vou_aclk_mux", clk_en0, 7); - clk[zx296702_vou_apb_pclk] = - zx_gate("vou_apb_pclk", "main_pclk", clk_en0, 8); - clk[zx296702_vou_main_channel_wclk] = - zx_gate("vou_main_channel_wclk", "vou_main_wclk_mux", - clk_en0, 9); - clk[zx296702_vou_aux_channel_wclk] = - 
zx_gate("vou_aux_channel_wclk", "vou_aux_wclk_mux", - clk_en0, 10); - clk[zx296702_vou_hdmi_osclk_cec] = - zx_gate("vou_hdmi_osclk_cec", "clk_2", clk_en0, 11); - clk[zx296702_vou_scaler_wclk] = - zx_gate("vou_scaler_wclk", "vou_scaler_wclk_mux", clk_en0, 12); - clk[zx296702_mali400_axi_m_aclk] = - zx_gate("mali400_axi_m_aclk", "mali400_aclk_mux", clk_en0, 13); - clk[zx296702_mali400_apb_pclk] = - zx_gate("mali400_apb_pclk", "main_pclk", clk_en0, 14); - clk[zx296702_r2d_wclk] = - zx_gate("r2d_wclk", "r2d_wclk_mux", clk_en0, 15); - clk[zx296702_r2d_axi_m_aclk] = - zx_gate("r2d_axi_m_aclk", "r2d_aclk_mux", clk_en0, 16); - clk[zx296702_r2d_ahb_hclk] = - zx_gate("r2d_ahb_hclk", "main_hclk", clk_en0, 17); - clk[zx296702_ddr3_axi_s0_aclk] = - zx_gate("ddr3_axi_s0_aclk", "matrix_aclk", clk_en0, 18); - clk[zx296702_ddr3_apb_pclk] = - zx_gate("ddr3_apb_pclk", "main_pclk", clk_en0, 19); - clk[zx296702_ddr3_wclk] = - zx_gate("ddr3_wclk", "ddr_wclk_mux", clk_en0, 20); - clk[zx296702_usb20_0_ahb_hclk] = - zx_gate("usb20_0_ahb_hclk", "main_hclk", clk_en0, 21); - clk[zx296702_usb20_0_extrefclk] = - zx_gate("usb20_0_extrefclk", "clk_12", clk_en0, 22); - clk[zx296702_usb20_1_ahb_hclk] = - zx_gate("usb20_1_ahb_hclk", "main_hclk", clk_en0, 23); - clk[zx296702_usb20_1_extrefclk] = - zx_gate("usb20_1_extrefclk", "clk_12", clk_en0, 24); - clk[zx296702_usb20_2_ahb_hclk] = - zx_gate("usb20_2_ahb_hclk", "main_hclk", clk_en0, 25); - clk[zx296702_usb20_2_extrefclk] = - zx_gate("usb20_2_extrefclk", "clk_12", clk_en0, 26); - clk[zx296702_gmac_axi_m_aclk] = - zx_gate("gmac_axi_m_aclk", "matrix_aclk", clk_en0, 27); - clk[zx296702_gmac_apb_pclk] = - zx_gate("gmac_apb_pclk", "main_pclk", clk_en0, 28); - clk[zx296702_gmac_125_clkin] = - zx_gate("gmac_125_clkin", "clk_125", clk_en0, 29); - clk[zx296702_gmac_rmii_clkin] = - zx_gate("gmac_rmii_clkin", "clk_50", clk_en0, 30); - clk[zx296702_gmac_25m_clk] = - zx_gate("gmac_25m_clk", "clk_25", clk_en0, 31); - clk[zx296702_nandflash_ahb_hclk] = - 
zx_gate("nandflash_ahb_hclk", "main_hclk", clk_en1, 0); - clk[zx296702_nandflash_wclk] = - zx_gate("nandflash_wclk", "nand_wclk_mux", clk_en1, 1); - clk[zx296702_lsp0_apb_pclk] = - zx_gate("lsp0_apb_pclk", "main_pclk", clk_en1, 2); - clk[zx296702_lsp0_ahb_hclk] = - zx_gate("lsp0_ahb_hclk", "main_hclk", clk_en1, 3); - clk[zx296702_lsp0_26m_wclk] = - zx_gate("lsp0_26m_wclk", "lsp_26_wclk_mux", clk_en1, 4); - clk[zx296702_lsp0_104m_wclk] = - zx_gate("lsp0_104m_wclk", "pll_lsp_104m", clk_en1, 5); - clk[zx296702_lsp0_16m384_wclk] = - zx_gate("lsp0_16m384_wclk", "clk_16m384", clk_en1, 6); - clk[zx296702_lsp1_apb_pclk] = - zx_gate("lsp1_apb_pclk", "main_pclk", clk_en1, 7); - /* fixme: wclk enable bit is bit8. we hack it as reserved 31 for - * uart does not work after parent clk is disabled/enabled */ - clk[zx296702_lsp1_26m_wclk] = - zx_gate("lsp1_26m_wclk", "lsp_26_wclk_mux", clk_en1, 31); - clk[zx296702_lsp1_104m_wclk] = - zx_gate("lsp1_104m_wclk", "pll_lsp_104m", clk_en1, 9); - clk[zx296702_lsp1_32k_clk] = - zx_gate("lsp1_32k_clk", "clk_32k768", clk_en1, 10); - clk[zx296702_aon_hclk] = - zx_gate("aon_hclk", "main_hclk", clk_en1, 11); - clk[zx296702_sys_ctrl_pclk] = - zx_gate("sys_ctrl_pclk", "main_pclk", clk_en1, 12); - clk[zx296702_dma_pclk] = - zx_gate("dma_pclk", "main_pclk", clk_en1, 13); - clk[zx296702_dma_aclk] = - zx_gate("dma_aclk", "matrix_aclk", clk_en1, 14); - clk[zx296702_sec_hclk] = - zx_gate("sec_hclk", "main_hclk", clk_en1, 15); - clk[zx296702_aes_wclk] = - zx_gate("aes_wclk", "sec_wclk_div", clk_en1, 16); - clk[zx296702_des_wclk] = - zx_gate("des_wclk", "sec_wclk_div", clk_en1, 17); - clk[zx296702_iram_aclk] = - zx_gate("iram_aclk", "matrix_aclk", clk_en1, 18); - clk[zx296702_irom_aclk] = - zx_gate("irom_aclk", "matrix_aclk", clk_en1, 19); - clk[zx296702_boot_ctrl_hclk] = - zx_gate("boot_ctrl_hclk", "main_hclk", clk_en1, 20); - clk[zx296702_efuse_clk_30] = - zx_gate("efuse_clk_30", "osc", clk_en1, 21); - - /* todo: add vou local clocks */ - 
clk[zx296702_vou_main_channel_div] = - zx_div("vou_main_channel_div", "vou_main_channel_wclk", - vou_local_div2_set, 1, 1); - clk[zx296702_vou_aux_channel_div] = - zx_div("vou_aux_channel_div", "vou_aux_channel_wclk", - vou_local_div2_set, 0, 1); - clk[zx296702_vou_tv_enc_hd_div] = - zx_div("vou_tv_enc_hd_div", "vou_tv_enc_hd_mux", - vou_local_div2_set, 3, 1); - clk[zx296702_vou_tv_enc_sd_div] = - zx_div("vou_tv_enc_sd_div", "vou_tv_enc_sd_mux", - vou_local_div2_set, 2, 1); - clk[zx296702_vl0_mux] = - zx_mux("vl0_mux", vl0_sel, array_size(vl0_sel), - vou_local_clksel, 8, 1); - clk[zx296702_vl1_mux] = - zx_mux("vl1_mux", vl0_sel, array_size(vl0_sel), - vou_local_clksel, 9, 1); - clk[zx296702_vl2_mux] = - zx_mux("vl2_mux", vl0_sel, array_size(vl0_sel), - vou_local_clksel, 10, 1); - clk[zx296702_gl0_mux] = - zx_mux("gl0_mux", vl0_sel, array_size(vl0_sel), - vou_local_clksel, 5, 1); - clk[zx296702_gl1_mux] = - zx_mux("gl1_mux", vl0_sel, array_size(vl0_sel), - vou_local_clksel, 6, 1); - clk[zx296702_gl2_mux] = - zx_mux("gl2_mux", vl0_sel, array_size(vl0_sel), - vou_local_clksel, 7, 1); - clk[zx296702_wb_mux] = - zx_mux("wb_mux", vl0_sel, array_size(vl0_sel), - vou_local_clksel, 11, 1); - clk[zx296702_hdmi_mux] = - zx_mux("hdmi_mux", hdmi_sel, array_size(hdmi_sel), - vou_local_clksel, 4, 1); - clk[zx296702_vou_tv_enc_hd_mux] = - zx_mux("vou_tv_enc_hd_mux", hdmi_sel, array_size(hdmi_sel), - vou_local_clksel, 3, 1); - clk[zx296702_vou_tv_enc_sd_mux] = - zx_mux("vou_tv_enc_sd_mux", hdmi_sel, array_size(hdmi_sel), - vou_local_clksel, 2, 1); - clk[zx296702_vl0_clk] = - zx_gate("vl0_clk", "vl0_mux", vou_local_clken, 8); - clk[zx296702_vl1_clk] = - zx_gate("vl1_clk", "vl1_mux", vou_local_clken, 9); - clk[zx296702_vl2_clk] = - zx_gate("vl2_clk", "vl2_mux", vou_local_clken, 10); - clk[zx296702_gl0_clk] = - zx_gate("gl0_clk", "gl0_mux", vou_local_clken, 5); - clk[zx296702_gl1_clk] = - zx_gate("gl1_clk", "gl1_mux", vou_local_clken, 6); - clk[zx296702_gl2_clk] = - zx_gate("gl2_clk", 
"gl2_mux", vou_local_clken, 7); - clk[zx296702_wb_clk] = - zx_gate("wb_clk", "wb_mux", vou_local_clken, 11); - clk[zx296702_cl_clk] = - zx_gate("cl_clk", "vou_main_channel_div", vou_local_clken, 12); - clk[zx296702_main_mix_clk] = - zx_gate("main_mix_clk", "vou_main_channel_div", - vou_local_clken, 4); - clk[zx296702_aux_mix_clk] = - zx_gate("aux_mix_clk", "vou_aux_channel_div", - vou_local_clken, 3); - clk[zx296702_hdmi_clk] = - zx_gate("hdmi_clk", "hdmi_mux", vou_local_clken, 2); - clk[zx296702_vou_tv_enc_hd_dac_clk] = - zx_gate("vou_tv_enc_hd_dac_clk", "vou_tv_enc_hd_div", - vou_local_clken, 1); - clk[zx296702_vou_tv_enc_sd_dac_clk] = - zx_gate("vou_tv_enc_sd_dac_clk", "vou_tv_enc_sd_div", - vou_local_clken, 0); - - /* ca9 periphclk = a9_wclk / 2 */ - clk[zx296702_a9_periphclk] = - clk_register_fixed_factor(null, "a9_periphclk", "a9_wclk", - 0, 1, 2); - - for (i = 0; i < array_size(topclk); i++) { - if (is_err(clk[i])) { - pr_err("zx296702 clk %d: register failed with %ld ", - i, ptr_err(clk[i])); - return; - } - } - - topclk_data.clks = topclk; - topclk_data.clk_num = array_size(topclk); - of_clk_add_provider(np, of_clk_src_onecell_get, &topclk_data); -} -clk_of_declare(zx296702_top_clk, "zte,zx296702-topcrm-clk", - zx296702_top_clocks_init); - -static void __init zx296702_lsp0_clocks_init(struct device_node *np) -{ - struct clk **clk = lsp0clk; - int i; - - lsp0crpm_base = of_iomap(np, 0); - warn_on(!lsp0crpm_base); - - /* sdmmc1 */ - clk[zx296702_sdmmc1_wclk_mux] = - zx_mux("sdmmc1_wclk_mux", sdmmc1_wclk_sel, - array_size(sdmmc1_wclk_sel), clk_sdmmc1, 4, 1); - clk[zx296702_sdmmc1_wclk_div] = - zx_div("sdmmc1_wclk_div", "sdmmc1_wclk_mux", clk_sdmmc1, 12, 4); - clk[zx296702_sdmmc1_wclk] = - zx_gate("sdmmc1_wclk", "sdmmc1_wclk_div", clk_sdmmc1, 1); - clk[zx296702_sdmmc1_pclk] = - zx_gate("sdmmc1_pclk", "lsp0_apb_pclk", clk_sdmmc1, 0); - - clk[zx296702_gpio_clk] = - zx_gate("gpio_clk", "lsp0_apb_pclk", clk_gpio, 0); - - /* spdif */ - clk[zx296702_spdif0_wclk_mux] 
= - zx_mux("spdif0_wclk_mux", spdif0_wclk_sel, - array_size(spdif0_wclk_sel), clk_spdif0, 4, 1); - clk[zx296702_spdif0_wclk] = - zx_gate("spdif0_wclk", "spdif0_wclk_mux", clk_spdif0, 1); - clk[zx296702_spdif0_pclk] = - zx_gate("spdif0_pclk", "lsp0_apb_pclk", clk_spdif0, 0); - - clk[zx296702_spdif0_div] = - clk_register_zx_audio("spdif0_div", "spdif0_wclk", 0, - spdif0_div); - - /* i2s */ - clk[zx296702_i2s0_wclk_mux] = - zx_mux("i2s0_wclk_mux", i2s_wclk_sel, - array_size(i2s_wclk_sel), clk_i2s0, 4, 1); - clk[zx296702_i2s0_wclk] = - zx_gate("i2s0_wclk", "i2s0_wclk_mux", clk_i2s0, 1); - clk[zx296702_i2s0_pclk] = - zx_gate("i2s0_pclk", "lsp0_apb_pclk", clk_i2s0, 0); - - clk[zx296702_i2s0_div] = - clk_register_zx_audio("i2s0_div", "i2s0_wclk", 0, i2s0_div); - - clk[zx296702_i2s1_wclk_mux] = - zx_mux("i2s1_wclk_mux", i2s_wclk_sel, - array_size(i2s_wclk_sel), clk_i2s1, 4, 1); - clk[zx296702_i2s1_wclk] = - zx_gate("i2s1_wclk", "i2s1_wclk_mux", clk_i2s1, 1); - clk[zx296702_i2s1_pclk] = - zx_gate("i2s1_pclk", "lsp0_apb_pclk", clk_i2s1, 0); - - clk[zx296702_i2s1_div] = - clk_register_zx_audio("i2s1_div", "i2s1_wclk", 0, i2s1_div); - - clk[zx296702_i2s2_wclk_mux] = - zx_mux("i2s2_wclk_mux", i2s_wclk_sel, - array_size(i2s_wclk_sel), clk_i2s2, 4, 1); - clk[zx296702_i2s2_wclk] = - zx_gate("i2s2_wclk", "i2s2_wclk_mux", clk_i2s2, 1); - clk[zx296702_i2s2_pclk] = - zx_gate("i2s2_pclk", "lsp0_apb_pclk", clk_i2s2, 0); - - clk[zx296702_i2s2_div] = - clk_register_zx_audio("i2s2_div", "i2s2_wclk", 0, i2s2_div); - - for (i = 0; i < array_size(lsp0clk); i++) { - if (is_err(clk[i])) { - pr_err("zx296702 clk %d: register failed with %ld ", - i, ptr_err(clk[i])); - return; - } - } - - lsp0clk_data.clks = lsp0clk; - lsp0clk_data.clk_num = array_size(lsp0clk); - of_clk_add_provider(np, of_clk_src_onecell_get, &lsp0clk_data); -} -clk_of_declare(zx296702_lsp0_clk, "zte,zx296702-lsp0crpm-clk", - zx296702_lsp0_clocks_init); - -static void __init zx296702_lsp1_clocks_init(struct device_node *np) -{ 
- struct clk **clk = lsp1clk; - int i; - - lsp1crpm_base = of_iomap(np, 0); - warn_on(!lsp1crpm_base); - - /* uart0 */ - clk[zx296702_uart0_wclk_mux] = - zx_mux("uart0_wclk_mux", uart_wclk_sel, - array_size(uart_wclk_sel), clk_uart0, 4, 1); - /* fixme: uart wclk enable bit is bit1 in. we hack it as reserved 31 for - * uart does not work after parent clk is disabled/enabled */ - clk[zx296702_uart0_wclk] = - zx_gate("uart0_wclk", "uart0_wclk_mux", clk_uart0, 31); - clk[zx296702_uart0_pclk] = - zx_gate("uart0_pclk", "lsp1_apb_pclk", clk_uart0, 0); - - /* uart1 */ - clk[zx296702_uart1_wclk_mux] = - zx_mux("uart1_wclk_mux", uart_wclk_sel, - array_size(uart_wclk_sel), clk_uart1, 4, 1); - clk[zx296702_uart1_wclk] = - zx_gate("uart1_wclk", "uart1_wclk_mux", clk_uart1, 1); - clk[zx296702_uart1_pclk] = - zx_gate("uart1_pclk", "lsp1_apb_pclk", clk_uart1, 0); - - /* sdmmc0 */ - clk[zx296702_sdmmc0_wclk_mux] = - zx_mux("sdmmc0_wclk_mux", sdmmc0_wclk_sel, - array_size(sdmmc0_wclk_sel), clk_sdmmc0, 4, 1); - clk[zx296702_sdmmc0_wclk_div] = - zx_div("sdmmc0_wclk_div", "sdmmc0_wclk_mux", clk_sdmmc0, 12, 4); - clk[zx296702_sdmmc0_wclk] = - zx_gate("sdmmc0_wclk", "sdmmc0_wclk_div", clk_sdmmc0, 1); - clk[zx296702_sdmmc0_pclk] = - zx_gate("sdmmc0_pclk", "lsp1_apb_pclk", clk_sdmmc0, 0); - - clk[zx296702_spdif1_wclk_mux] = - zx_mux("spdif1_wclk_mux", spdif1_wclk_sel, - array_size(spdif1_wclk_sel), clk_spdif1, 4, 1); - clk[zx296702_spdif1_wclk] = - zx_gate("spdif1_wclk", "spdif1_wclk_mux", clk_spdif1, 1); - clk[zx296702_spdif1_pclk] = - zx_gate("spdif1_pclk", "lsp1_apb_pclk", clk_spdif1, 0); - - clk[zx296702_spdif1_div] = - clk_register_zx_audio("spdif1_div", "spdif1_wclk", 0, - spdif1_div); - - for (i = 0; i < array_size(lsp1clk); i++) { - if (is_err(clk[i])) { - pr_err("zx296702 clk %d: register failed with %ld ", - i, ptr_err(clk[i])); - return; - } - } - - lsp1clk_data.clks = lsp1clk; - lsp1clk_data.clk_num = array_size(lsp1clk); - of_clk_add_provider(np, of_clk_src_onecell_get, 
&lsp1clk_data); -} -clk_of_declare(zx296702_lsp1_clk, "zte,zx296702-lsp1crpm-clk", - zx296702_lsp1_clocks_init); diff --git a/drivers/clk/zte/clk-zx296718.c b/drivers/clk/zte/clk-zx296718.c --- a/drivers/clk/zte/clk-zx296718.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-only -/* - * copyright (c) 2015 - 2016 zte corporation. - * copyright (c) 2016 linaro ltd. - */ -#include <linux/clk-provider.h> -#include <linux/device.h> -#include <linux/kernel.h> -#include <linux/of_address.h> -#include <linux/of_device.h> -#include <linux/platform_device.h> - -#include <dt-bindings/clock/zx296718-clock.h> -#include "clk.h" - -/* top crm */ -#define top_clk_mux0 0x04 -#define top_clk_mux1 0x08 -#define top_clk_mux2 0x0c -#define top_clk_mux3 0x10 -#define top_clk_mux4 0x14 -#define top_clk_mux5 0x18 -#define top_clk_mux6 0x1c -#define top_clk_mux7 0x20 -#define top_clk_mux9 0x28 - - -#define top_clk_gate0 0x34 -#define top_clk_gate1 0x38 -#define top_clk_gate2 0x3c -#define top_clk_gate3 0x40 -#define top_clk_gate4 0x44 -#define top_clk_gate5 0x48 -#define top_clk_gate6 0x4c - -#define top_clk_div0 0x58 - -#define pll_cpu_reg 0x80 -#define pll_vga_reg 0xb0 -#define pll_ddr_reg 0xa0 - -/* lsp0 crm */ -#define lsp0_timer3_clk 0x4 -#define lsp0_timer4_clk 0x8 -#define lsp0_timer5_clk 0xc -#define lsp0_uart3_clk 0x10 -#define lsp0_uart1_clk 0x14 -#define lsp0_uart2_clk 0x18 -#define lsp0_spifc0_clk 0x1c -#define lsp0_i2c4_clk 0x20 -#define lsp0_i2c5_clk 0x24 -#define lsp0_ssp0_clk 0x28 -#define lsp0_ssp1_clk 0x2c -#define lsp0_usim0_clk 0x30 -#define lsp0_gpio_clk 0x34 -#define lsp0_i2c3_clk 0x38 - -/* lsp1 crm */ -#define lsp1_uart4_clk 0x08 -#define lsp1_uart5_clk 0x0c -#define lsp1_pwm_clk 0x10 -#define lsp1_i2c2_clk 0x14 -#define lsp1_ssp2_clk 0x1c -#define lsp1_ssp3_clk 0x20 -#define lsp1_ssp4_clk 0x24 -#define lsp1_usim1_clk 0x28 - -/* audio lsp */ -#define audio_i2s0_div_cfg1 0x10 -#define audio_i2s0_div_cfg2 0x14 -#define audio_i2s0_clk 0x18 -#define 
audio_i2s1_div_cfg1 0x20 -#define audio_i2s1_div_cfg2 0x24 -#define audio_i2s1_clk 0x28 -#define audio_i2s2_div_cfg1 0x30 -#define audio_i2s2_div_cfg2 0x34 -#define audio_i2s2_clk 0x38 -#define audio_i2s3_div_cfg1 0x40 -#define audio_i2s3_div_cfg2 0x44 -#define audio_i2s3_clk 0x48 -#define audio_i2c0_clk 0x50 -#define audio_spdif0_div_cfg1 0x60 -#define audio_spdif0_div_cfg2 0x64 -#define audio_spdif0_clk 0x68 -#define audio_spdif1_div_cfg1 0x70 -#define audio_spdif1_div_cfg2 0x74 -#define audio_spdif1_clk 0x78 -#define audio_timer_clk 0x80 -#define audio_tdm_clk 0x90 -#define audio_ts_clk 0xa0 - -static define_spinlock(clk_lock); - -static const struct zx_pll_config pll_cpu_table[] = { - pll_rate(1312000000, 0x00103621, 0x04aaaaaa), - pll_rate(1407000000, 0x00103a21, 0x04aaaaaa), - pll_rate(1503000000, 0x00103e21, 0x04aaaaaa), - pll_rate(1600000000, 0x00104221, 0x04aaaaaa), -}; - -static const struct zx_pll_config pll_vga_table[] = { - pll_rate(36000000, 0x00102464, 0x04000000), /* 800x600@56 */ - pll_rate(40000000, 0x00102864, 0x04000000), /* 800x600@60 */ - pll_rate(49500000, 0x00103164, 0x04800000), /* 800x600@75 */ - pll_rate(50000000, 0x00103264, 0x04000000), /* 800x600@72 */ - pll_rate(56250000, 0x00103864, 0x04400000), /* 800x600@85 */ - pll_rate(65000000, 0x00104164, 0x04000000), /* 1024x768@60 */ - pll_rate(74375000, 0x00104a64, 0x04600000), /* 1280x720@60 */ - pll_rate(75000000, 0x00104b64, 0x04800000), /* 1024x768@70 */ - pll_rate(78750000, 0x00104e64, 0x04c00000), /* 1024x768@75 */ - pll_rate(85500000, 0x00105564, 0x04800000), /* 1360x768@60 */ - pll_rate(106500000, 0x00106a64, 0x04800000), /* 1440x900@60 */ - pll_rate(108000000, 0x00106c64, 0x04000000), /* 1280x1024@60 */ - pll_rate(110000000, 0x00106e64, 0x04000000), /* 1024x768@85 */ - pll_rate(135000000, 0x00105a44, 0x04000000), /* 1280x1024@75 */ - pll_rate(136750000, 0x00104462, 0x04600000), /* 1440x900@75 */ - pll_rate(148500000, 0x00104a62, 0x04400000), /* 1920x1080@60 */ - pll_rate(157000000, 
0x00104e62, 0x04800000), /* 1440x900@85 */ - pll_rate(157500000, 0x00104e62, 0x04c00000), /* 1280x1024@85 */ - pll_rate(162000000, 0x00105162, 0x04000000), /* 1600x1200@60 */ - pll_rate(193250000, 0x00106062, 0x04a00000), /* 1920x1200@60 */ -}; - -pname(osc) = { - "osc24m", - "osc32k", -}; - -pname(dbg_wclk_p) = { - "clk334m", - "clk466m", - "clk396m", - "clk250m", -}; - -pname(a72_coreclk_p) = { - "osc24m", - "pll_mm0_1188m", - "pll_mm1_1296m", - "clk1000m", - "clk648m", - "clk1600m", - "pll_audio_1800m", - "pll_vga_1800m", -}; - -pname(cpu_periclk_p) = { - "osc24m", - "clk500m", - "clk594m", - "clk466m", - "clk294m", - "clk334m", - "clk250m", - "clk125m", -}; - -pname(a53_coreclk_p) = { - "osc24m", - "clk1000m", - "pll_mm0_1188m", - "clk648m", - "clk500m", - "clk800m", - "clk1600m", - "pll_audio_1800m", -}; - -pname(sec_wclk_p) = { - "osc24m", - "clk396m", - "clk334m", - "clk297m", - "clk250m", - "clk198m", - "clk148m5", - "clk99m", -}; - -pname(sd_nand_wclk_p) = { - "osc24m", - "clk49m5", - "clk99m", - "clk198m", - "clk167m", - "clk148m5", - "clk125m", - "clk216m", -}; - -pname(emmc_wclk_p) = { - "osc24m", - "clk198m", - "clk99m", - "clk396m", - "clk334m", - "clk297m", - "clk250m", - "clk148m5", -}; - -pname(clk32_p) = { - "osc32k", - "clk32k768", -}; - -pname(usb_ref24m_p) = { - "osc32k", - "clk32k768", -}; - -pname(sys_noc_alck_p) = { - "osc24m", - "clk250m", - "clk198m", - "clk148m5", - "clk108m", - "clk54m", - "clk216m", - "clk240m", -}; - -pname(vde_aclk_p) = { - "clk334m", - "clk594m", - "clk500m", - "clk432m", - "clk480m", - "clk297m", - "clk_vga", /*600mhz*/ - "clk294m", -}; - -pname(vce_aclk_p) = { - "clk334m", - "clk594m", - "clk500m", - "clk432m", - "clk396m", - "clk297m", - "clk_vga", /*600mhz*/ - "clk294m", -}; - -pname(hde_aclk_p) = { - "clk334m", - "clk594m", - "clk500m", - "clk432m", - "clk396m", - "clk297m", - "clk_vga", /*600mhz*/ - "clk294m", -}; - -pname(gpu_aclk_p) = { - "clk334m", - "clk648m", - "clk594m", - "clk500m", - "clk396m", - 
"clk297m", - "clk_vga", /*600mhz*/ - "clk294m", -}; - -pname(sappu_aclk_p) = { - "clk396m", - "clk500m", - "clk250m", - "clk148m5", -}; - -pname(sappu_wclk_p) = { - "clk198m", - "clk396m", - "clk334m", - "clk297m", - "clk250m", - "clk148m5", - "clk125m", - "clk99m", -}; - -pname(vou_aclk_p) = { - "clk334m", - "clk594m", - "clk500m", - "clk432m", - "clk396m", - "clk297m", - "clk_vga", /*600mhz*/ - "clk294m", -}; - -pname(vou_main_wclk_p) = { - "clk108m", - "clk594m", - "clk297m", - "clk148m5", - "clk74m25", - "clk54m", - "clk27m", - "clk_vga", -}; - -pname(vou_aux_wclk_p) = { - "clk108m", - "clk148m5", - "clk74m25", - "clk54m", - "clk27m", - "clk_vga", - "clk54m_mm0", - "clk" -}; - -pname(vou_ppu_wclk_p) = { - "clk334m", - "clk432m", - "clk396m", - "clk297m", - "clk250m", - "clk125m", - "clk198m", - "clk99m", -}; - -pname(vga_i2c_wclk_p) = { - "osc24m", - "clk99m", -}; - -pname(viu_m0_aclk_p) = { - "clk334m", - "clk432m", - "clk396m", - "clk297m", - "clk250m", - "clk125m", - "clk198m", - "osc24m", -}; - -pname(viu_m1_aclk_p) = { - "clk198m", - "clk250m", - "clk297m", - "clk125m", - "clk396m", - "clk334m", - "clk148m5", - "osc24m", -}; - -pname(viu_clk_p) = { - "clk198m", - "clk334m", - "clk297m", - "clk250m", - "clk396m", - "clk125m", - "clk99m", - "clk148m5", -}; - -pname(viu_jpeg_clk_p) = { - "clk334m", - "clk480m", - "clk432m", - "clk396m", - "clk297m", - "clk250m", - "clk125m", - "clk198m", -}; - -pname(ts_sys_clk_p) = { - "clk192m", - "clk167m", - "clk125m", - "clk99m", -}; - -pname(wdt_ares_p) = { - "osc24m", - "clk32k" -}; - -static struct clk_zx_pll zx296718_pll_clk[] = { - zx296718_pll("pll_cpu", "osc24m", pll_cpu_reg, pll_cpu_table), - zx296718_pll("pll_vga", "osc24m", pll_vga_reg, pll_vga_table), -}; - -static struct zx_clk_fixed_factor top_ffactor_clk[] = { - ffactor(0, "clk4m", "osc24m", 1, 6, 0), - ffactor(0, "clk2m", "osc24m", 1, 12, 0), - /* pll cpu */ - ffactor(0, "clk1600m", "pll_cpu", 1, 1, clk_set_rate_parent), - ffactor(0, "clk800m", "pll_cpu", 
1, 2, clk_set_rate_parent), - /* pll mac */ - ffactor(0, "clk25m", "pll_mac", 1, 40, 0), - ffactor(0, "clk125m", "pll_mac", 1, 8, 0), - ffactor(0, "clk250m", "pll_mac", 1, 4, 0), - ffactor(0, "clk50m", "pll_mac", 1, 20, 0), - ffactor(0, "clk500m", "pll_mac", 1, 2, 0), - ffactor(0, "clk1000m", "pll_mac", 1, 1, 0), - ffactor(0, "clk334m", "pll_mac", 1, 3, 0), - ffactor(0, "clk167m", "pll_mac", 1, 6, 0), - /* pll mm */ - ffactor(0, "clk54m_mm0", "pll_mm0", 1, 22, 0), - ffactor(0, "clk74m25", "pll_mm0", 1, 16, 0), - ffactor(0, "clk148m5", "pll_mm0", 1, 8, 0), - ffactor(0, "clk297m", "pll_mm0", 1, 4, 0), - ffactor(0, "clk594m", "pll_mm0", 1, 2, 0), - ffactor(0, "pll_mm0_1188m", "pll_mm0", 1, 1, 0), - ffactor(0, "clk396m", "pll_mm0", 1, 3, 0), - ffactor(0, "clk198m", "pll_mm0", 1, 6, 0), - ffactor(0, "clk99m", "pll_mm0", 1, 12, 0), - ffactor(0, "clk49m5", "pll_mm0", 1, 24, 0), - /* pll mm */ - ffactor(0, "clk324m", "pll_mm1", 1, 4, 0), - ffactor(0, "clk648m", "pll_mm1", 1, 2, 0), - ffactor(0, "pll_mm1_1296m", "pll_mm1", 1, 1, 0), - ffactor(0, "clk216m", "pll_mm1", 1, 6, 0), - ffactor(0, "clk432m", "pll_mm1", 1, 3, 0), - ffactor(0, "clk108m", "pll_mm1", 1, 12, 0), - ffactor(0, "clk72m", "pll_mm1", 1, 18, 0), - ffactor(0, "clk27m", "pll_mm1", 1, 48, 0), - ffactor(0, "clk54m", "pll_mm1", 1, 24, 0), - /* vga */ - ffactor(0, "pll_vga_1800m", "pll_vga", 1, 1, 0), - ffactor(0, "clk_vga", "pll_vga", 1, 1, clk_set_rate_parent), - /* pll ddr */ - ffactor(0, "clk466m", "pll_ddr", 1, 2, 0), - - /* pll audio */ - ffactor(0, "pll_audio_1800m", "pll_audio", 1, 1, 0), - ffactor(0, "clk32k768", "pll_audio", 1, 27000, 0), - ffactor(0, "clk16m384", "pll_audio", 1, 54, 0), - ffactor(0, "clk294m", "pll_audio", 1, 3, 0), - - /* pll hsic*/ - ffactor(0, "clk240m", "pll_hsic", 1, 4, 0), - ffactor(0, "clk480m", "pll_hsic", 1, 2, 0), - ffactor(0, "clk192m", "pll_hsic", 1, 5, 0), - ffactor(0, "clk_pll_24m", "pll_hsic", 1, 40, 0), - ffactor(0, "emmc_mux_div2", "emmc_mux", 1, 2, clk_set_rate_parent), 
-}; - -static const struct clk_div_table noc_div_table[] = { - { .val = 1, .div = 2, }, - { .val = 3, .div = 4, }, -}; -static struct zx_clk_div top_div_clk[] = { - div_t(0, "sys_noc_hclk", "sys_noc_aclk", top_clk_div0, 0, 2, 0, noc_div_table), - div_t(0, "sys_noc_pclk", "sys_noc_aclk", top_clk_div0, 4, 2, 0, noc_div_table), -}; - -static struct zx_clk_mux top_mux_clk[] = { - mux(0, "dbg_mux", dbg_wclk_p, top_clk_mux0, 12, 2), - mux(0, "a72_mux", a72_coreclk_p, top_clk_mux0, 8, 3), - mux(0, "cpu_peri_mux", cpu_periclk_p, top_clk_mux0, 4, 3), - mux_f(0, "a53_mux", a53_coreclk_p, top_clk_mux0, 0, 3, clk_set_rate_parent, 0), - mux(0, "sys_noc_aclk", sys_noc_alck_p, top_clk_mux1, 0, 3), - mux(0, "sec_mux", sec_wclk_p, top_clk_mux2, 16, 3), - mux(0, "sd1_mux", sd_nand_wclk_p, top_clk_mux2, 12, 3), - mux(0, "sd0_mux", sd_nand_wclk_p, top_clk_mux2, 8, 3), - mux(0, "emmc_mux", emmc_wclk_p, top_clk_mux2, 4, 3), - mux(0, "nand_mux", sd_nand_wclk_p, top_clk_mux2, 0, 3), - mux(0, "usb_ref24m_mux", usb_ref24m_p, top_clk_mux9, 16, 1), - mux(0, "clk32k", clk32_p, top_clk_mux9, 12, 1), - mux_f(0, "wdt_mux", wdt_ares_p, top_clk_mux9, 8, 1, clk_set_rate_parent, 0), - mux(0, "timer_mux", osc, top_clk_mux9, 4, 1), - mux(0, "vde_mux", vde_aclk_p, top_clk_mux4, 0, 3), - mux(0, "vce_mux", vce_aclk_p, top_clk_mux4, 4, 3), - mux(0, "hde_mux", hde_aclk_p, top_clk_mux4, 8, 3), - mux(0, "gpu_mux", gpu_aclk_p, top_clk_mux5, 0, 3), - mux(0, "sappu_a_mux", sappu_aclk_p, top_clk_mux5, 4, 2), - mux(0, "sappu_w_mux", sappu_wclk_p, top_clk_mux5, 8, 3), - mux(0, "vou_a_mux", vou_aclk_p, top_clk_mux7, 0, 3), - mux_f(0, "vou_main_w_mux", vou_main_wclk_p, top_clk_mux7, 4, 3, clk_set_rate_parent, 0), - mux_f(0, "vou_aux_w_mux", vou_aux_wclk_p, top_clk_mux7, 8, 3, clk_set_rate_parent, 0), - mux(0, "vou_ppu_w_mux", vou_ppu_wclk_p, top_clk_mux7, 12, 3), - mux(0, "vga_i2c_mux", vga_i2c_wclk_p, top_clk_mux7, 16, 1), - mux(0, "viu_m0_a_mux", viu_m0_aclk_p, top_clk_mux6, 0, 3), - mux(0, "viu_m1_a_mux", 
viu_m1_aclk_p, top_clk_mux6, 4, 3), - mux(0, "viu_w_mux", viu_clk_p, top_clk_mux6, 8, 3), - mux(0, "viu_jpeg_w_mux", viu_jpeg_clk_p, top_clk_mux6, 12, 3), - mux(0, "ts_sys_mux", ts_sys_clk_p, top_clk_mux6, 16, 2), -}; - -static struct zx_clk_gate top_gate_clk[] = { - gate(cpu_dbg_gate, "dbg_wclk", "dbg_mux", top_clk_gate0, 4, clk_set_rate_parent, 0), - gate(a72_gate, "a72_coreclk", "a72_mux", top_clk_gate0, 3, clk_set_rate_parent, 0), - gate(cpu_peri_gate, "cpu_peri", "cpu_peri_mux", top_clk_gate0, 1, clk_set_rate_parent, 0), - gate(a53_gate, "a53_coreclk", "a53_mux", top_clk_gate0, 0, clk_set_rate_parent, 0), - gate(sd1_wclk, "sd1_wclk", "sd1_mux", top_clk_gate1, 13, clk_set_rate_parent, 0), - gate(sd0_wclk, "sd0_wclk", "sd0_mux", top_clk_gate1, 9, clk_set_rate_parent, 0), - gate(emmc_wclk, "emmc_wclk", "emmc_mux_div2", top_clk_gate0, 5, clk_set_rate_parent, 0), - gate(emmc_nand_axi, "emmc_nand_aclk", "sys_noc_aclk", top_clk_gate1, 4, clk_set_rate_parent, 0), - gate(nand_wclk, "nand_wclk", "nand_mux", top_clk_gate0, 1, clk_set_rate_parent, 0), - gate(emmc_nand_ahb, "emmc_nand_hclk", "sys_noc_hclk", top_clk_gate1, 0, clk_set_rate_parent, 0), - gate(0, "lsp1_pclk", "sys_noc_pclk", top_clk_gate2, 31, 0, 0), - gate(lsp1_148m5, "lsp1_148m5", "clk148m5", top_clk_gate2, 30, 0, 0), - gate(lsp1_99m, "lsp1_99m", "clk99m", top_clk_gate2, 29, 0, 0), - gate(lsp1_24m, "lsp1_24m", "osc24m", top_clk_gate2, 28, 0, 0), - gate(lsp0_74m25, "lsp0_74m25", "clk74m25", top_clk_gate2, 25, 0, 0), - gate(0, "lsp0_pclk", "sys_noc_pclk", top_clk_gate2, 24, 0, 0), - gate(lsp0_32k, "lsp0_32k", "osc32k", top_clk_gate2, 23, 0, 0), - gate(lsp0_148m5, "lsp0_148m5", "clk148m5", top_clk_gate2, 22, 0, 0), - gate(lsp0_99m, "lsp0_99m", "clk99m", top_clk_gate2, 21, 0, 0), - gate(lsp0_24m, "lsp0_24m", "osc24m", top_clk_gate2, 20, 0, 0), - gate(audio_99m, "audio_99m", "clk99m", top_clk_gate5, 27, 0, 0), - gate(audio_24m, "audio_24m", "osc24m", top_clk_gate5, 28, 0, 0), - gate(audio_16m384, "audio_16m384", 
"clk16m384", top_clk_gate5, 29, 0, 0), - gate(audio_32k, "audio_32k", "clk32k", top_clk_gate5, 30, 0, 0), - gate(wdt_wclk, "wdt_wclk", "wdt_mux", top_clk_gate6, 9, clk_set_rate_parent, 0), - gate(timer_wclk, "timer_wclk", "timer_mux", top_clk_gate6, 5, clk_set_rate_parent, 0), - gate(vde_aclk, "vde_aclk", "vde_mux", top_clk_gate3, 0, clk_set_rate_parent, 0), - gate(vce_aclk, "vce_aclk", "vce_mux", top_clk_gate3, 4, clk_set_rate_parent, 0), - gate(hde_aclk, "hde_aclk", "hde_mux", top_clk_gate3, 8, clk_set_rate_parent, 0), - gate(gpu_aclk, "gpu_aclk", "gpu_mux", top_clk_gate3, 16, clk_set_rate_parent, 0), - gate(sappu_aclk, "sappu_aclk", "sappu_a_mux", top_clk_gate3, 20, clk_set_rate_parent, 0), - gate(sappu_wclk, "sappu_wclk", "sappu_w_mux", top_clk_gate3, 22, clk_set_rate_parent, 0), - gate(vou_aclk, "vou_aclk", "vou_a_mux", top_clk_gate4, 16, clk_set_rate_parent, 0), - gate(vou_main_wclk, "vou_main_wclk", "vou_main_w_mux", top_clk_gate4, 18, clk_set_rate_parent, 0), - gate(vou_aux_wclk, "vou_aux_wclk", "vou_aux_w_mux", top_clk_gate4, 19, clk_set_rate_parent, 0), - gate(vou_ppu_wclk, "vou_ppu_wclk", "vou_ppu_w_mux", top_clk_gate4, 20, clk_set_rate_parent, 0), - gate(mipi_cfg_clk, "mipi_cfg_clk", "osc24m", top_clk_gate4, 21, 0, 0), - gate(vga_i2c_wclk, "vga_i2c_wclk", "vga_i2c_mux", top_clk_gate4, 23, clk_set_rate_parent, 0), - gate(mipi_ref_clk, "mipi_ref_clk", "clk27m", top_clk_gate4, 24, 0, 0), - gate(hdmi_osc_cec, "hdmi_osc_cec", "clk2m", top_clk_gate4, 22, 0, 0), - gate(hdmi_osc_clk, "hdmi_osc_clk", "clk240m", top_clk_gate4, 25, 0, 0), - gate(hdmi_xclk, "hdmi_xclk", "osc24m", top_clk_gate4, 26, 0, 0), - gate(viu_m0_aclk, "viu_m0_aclk", "viu_m0_a_mux", top_clk_gate4, 0, clk_set_rate_parent, 0), - gate(viu_m1_aclk, "viu_m1_aclk", "viu_m1_a_mux", top_clk_gate4, 1, clk_set_rate_parent, 0), - gate(viu_wclk, "viu_wclk", "viu_w_mux", top_clk_gate4, 2, clk_set_rate_parent, 0), - gate(viu_jpeg_wclk, "viu_jpeg_wclk", "viu_jpeg_w_mux", top_clk_gate4, 3, 
clk_set_rate_parent, 0), - gate(viu_cfg_clk, "viu_cfg_clk", "osc24m", top_clk_gate4, 6, 0, 0), - gate(ts_sys_wclk, "ts_sys_wclk", "ts_sys_mux", top_clk_gate5, 2, clk_set_rate_parent, 0), - gate(ts_sys_108m, "ts_sys_108m", "clk108m", top_clk_gate5, 3, 0, 0), - gate(usb20_hclk, "usb20_hclk", "sys_noc_hclk", top_clk_gate2, 12, 0, 0), - gate(usb20_phy_clk, "usb20_phy_clk", "usb_ref24m_mux", top_clk_gate2, 13, 0, 0), - gate(usb21_hclk, "usb21_hclk", "sys_noc_hclk", top_clk_gate2, 14, 0, 0), - gate(usb21_phy_clk, "usb21_phy_clk", "usb_ref24m_mux", top_clk_gate2, 15, 0, 0), - gate(gmac_rmiiclk, "gmac_rmii_clk", "clk50m", top_clk_gate2, 3, 0, 0), - gate(gmac_pclk, "gmac_pclk", "clk198m", top_clk_gate2, 1, 0, 0), - gate(gmac_aclk, "gmac_aclk", "clk49m5", top_clk_gate2, 0, 0, 0), - gate(gmac_rfclk, "gmac_refclk", "clk25m", top_clk_gate2, 4, 0, 0), - gate(sd1_ahb, "sd1_hclk", "sys_noc_hclk", top_clk_gate1, 12, 0, 0), - gate(sd0_ahb, "sd0_hclk", "sys_noc_hclk", top_clk_gate1, 8, 0, 0), - gate(tempsensor_gate, "tempsensor_gate", "clk4m", top_clk_gate5, 31, 0, 0), -}; - -static struct clk_hw_onecell_data top_hw_onecell_data = { - .num = top_nr_clks, - .hws = { - [top_nr_clks - 1] = null, - }, -}; - -static int __init top_clocks_init(struct device_node *np) -{ - void __iomem *reg_base; - int i, ret; - const char *name; - - reg_base = of_iomap(np, 0); - if (!reg_base) { - pr_err("%s: unable to map clk base ", __func__); - return -enxio; - } - - for (i = 0; i < array_size(zx296718_pll_clk); i++) { - zx296718_pll_clk[i].reg_base += (uintptr_t)reg_base; - name = zx296718_pll_clk[i].hw.init->name; - ret = clk_hw_register(null, &zx296718_pll_clk[i].hw); - if (ret) - pr_warn("top clk %s init error! 
", name); - } - - for (i = 0; i < array_size(top_ffactor_clk); i++) { - if (top_ffactor_clk[i].id) - top_hw_onecell_data.hws[top_ffactor_clk[i].id] = - &top_ffactor_clk[i].factor.hw; - - name = top_ffactor_clk[i].factor.hw.init->name; - ret = clk_hw_register(null, &top_ffactor_clk[i].factor.hw); - if (ret) - pr_warn("top clk %s init error! ", name); - } - - for (i = 0; i < array_size(top_mux_clk); i++) { - if (top_mux_clk[i].id) - top_hw_onecell_data.hws[top_mux_clk[i].id] = - &top_mux_clk[i].mux.hw; - - top_mux_clk[i].mux.reg += (uintptr_t)reg_base; - name = top_mux_clk[i].mux.hw.init->name; - ret = clk_hw_register(null, &top_mux_clk[i].mux.hw); - if (ret) - pr_warn("top clk %s init error! ", name); - } - - for (i = 0; i < array_size(top_gate_clk); i++) { - if (top_gate_clk[i].id) - top_hw_onecell_data.hws[top_gate_clk[i].id] = - &top_gate_clk[i].gate.hw; - - top_gate_clk[i].gate.reg += (uintptr_t)reg_base; - name = top_gate_clk[i].gate.hw.init->name; - ret = clk_hw_register(null, &top_gate_clk[i].gate.hw); - if (ret) - pr_warn("top clk %s init error! ", name); - } - - for (i = 0; i < array_size(top_div_clk); i++) { - if (top_div_clk[i].id) - top_hw_onecell_data.hws[top_div_clk[i].id] = - &top_div_clk[i].div.hw; - - top_div_clk[i].div.reg += (uintptr_t)reg_base; - name = top_div_clk[i].div.hw.init->name; - ret = clk_hw_register(null, &top_div_clk[i].div.hw); - if (ret) - pr_warn("top clk %s init error! 
", name); - } - - ret = of_clk_add_hw_provider(np, of_clk_hw_onecell_get, - &top_hw_onecell_data); - if (ret) { - pr_err("failed to register top clk provider: %d ", ret); - return ret; - } - - return 0; -} - -static const struct clk_div_table common_even_div_table[] = { - { .val = 0, .div = 1, }, - { .val = 1, .div = 2, }, - { .val = 3, .div = 4, }, - { .val = 5, .div = 6, }, - { .val = 7, .div = 8, }, - { .val = 9, .div = 10, }, - { .val = 11, .div = 12, }, - { .val = 13, .div = 14, }, - { .val = 15, .div = 16, }, -}; - -static const struct clk_div_table common_div_table[] = { - { .val = 0, .div = 1, }, - { .val = 1, .div = 2, }, - { .val = 2, .div = 3, }, - { .val = 3, .div = 4, }, - { .val = 4, .div = 5, }, - { .val = 5, .div = 6, }, - { .val = 6, .div = 7, }, - { .val = 7, .div = 8, }, - { .val = 8, .div = 9, }, - { .val = 9, .div = 10, }, - { .val = 10, .div = 11, }, - { .val = 11, .div = 12, }, - { .val = 12, .div = 13, }, - { .val = 13, .div = 14, }, - { .val = 14, .div = 15, }, - { .val = 15, .div = 16, }, -}; - -pname(lsp0_wclk_common_p) = { - "lsp0_24m", - "lsp0_99m", -}; - -pname(lsp0_wclk_timer3_p) = { - "timer3_div", - "lsp0_32k" -}; - -pname(lsp0_wclk_timer4_p) = { - "timer4_div", - "lsp0_32k" -}; - -pname(lsp0_wclk_timer5_p) = { - "timer5_div", - "lsp0_32k" -}; - -pname(lsp0_wclk_spifc0_p) = { - "lsp0_148m5", - "lsp0_24m", - "lsp0_99m", - "lsp0_74m25" -}; - -pname(lsp0_wclk_ssp_p) = { - "lsp0_148m5", - "lsp0_99m", - "lsp0_24m", -}; - -static struct zx_clk_mux lsp0_mux_clk[] = { - mux(0, "timer3_wclk_mux", lsp0_wclk_timer3_p, lsp0_timer3_clk, 4, 1), - mux(0, "timer4_wclk_mux", lsp0_wclk_timer4_p, lsp0_timer4_clk, 4, 1), - mux(0, "timer5_wclk_mux", lsp0_wclk_timer5_p, lsp0_timer5_clk, 4, 1), - mux(0, "uart3_wclk_mux", lsp0_wclk_common_p, lsp0_uart3_clk, 4, 1), - mux(0, "uart1_wclk_mux", lsp0_wclk_common_p, lsp0_uart1_clk, 4, 1), - mux(0, "uart2_wclk_mux", lsp0_wclk_common_p, lsp0_uart2_clk, 4, 1), - mux(0, "spifc0_wclk_mux", lsp0_wclk_spifc0_p, 
lsp0_spifc0_clk, 4, 2), - mux(0, "i2c4_wclk_mux", lsp0_wclk_common_p, lsp0_i2c4_clk, 4, 1), - mux(0, "i2c5_wclk_mux", lsp0_wclk_common_p, lsp0_i2c5_clk, 4, 1), - mux(0, "ssp0_wclk_mux", lsp0_wclk_ssp_p, lsp0_ssp0_clk, 4, 1), - mux(0, "ssp1_wclk_mux", lsp0_wclk_ssp_p, lsp0_ssp1_clk, 4, 1), - mux(0, "i2c3_wclk_mux", lsp0_wclk_common_p, lsp0_i2c3_clk, 4, 1), -}; - -static struct zx_clk_gate lsp0_gate_clk[] = { - gate(lsp0_timer3_wclk, "timer3_wclk", "timer3_wclk_mux", lsp0_timer3_clk, 1, clk_set_rate_parent, 0), - gate(lsp0_timer4_wclk, "timer4_wclk", "timer4_wclk_mux", lsp0_timer4_clk, 1, clk_set_rate_parent, 0), - gate(lsp0_timer5_wclk, "timer5_wclk", "timer5_wclk_mux", lsp0_timer5_clk, 1, clk_set_rate_parent, 0), - gate(lsp0_uart3_wclk, "uart3_wclk", "uart3_wclk_mux", lsp0_uart3_clk, 1, clk_set_rate_parent, 0), - gate(lsp0_uart1_wclk, "uart1_wclk", "uart1_wclk_mux", lsp0_uart1_clk, 1, clk_set_rate_parent, 0), - gate(lsp0_uart2_wclk, "uart2_wclk", "uart2_wclk_mux", lsp0_uart2_clk, 1, clk_set_rate_parent, 0), - gate(lsp0_spifc0_wclk, "spifc0_wclk", "spifc0_wclk_mux", lsp0_spifc0_clk, 1, clk_set_rate_parent, 0), - gate(lsp0_i2c4_wclk, "i2c4_wclk", "i2c4_wclk_mux", lsp0_i2c4_clk, 1, clk_set_rate_parent, 0), - gate(lsp0_i2c5_wclk, "i2c5_wclk", "i2c5_wclk_mux", lsp0_i2c5_clk, 1, clk_set_rate_parent, 0), - gate(lsp0_ssp0_wclk, "ssp0_wclk", "ssp0_div", lsp0_ssp0_clk, 1, clk_set_rate_parent, 0), - gate(lsp0_ssp1_wclk, "ssp1_wclk", "ssp1_div", lsp0_ssp1_clk, 1, clk_set_rate_parent, 0), - gate(lsp0_i2c3_wclk, "i2c3_wclk", "i2c3_wclk_mux", lsp0_i2c3_clk, 1, clk_set_rate_parent, 0), -}; - -static struct zx_clk_div lsp0_div_clk[] = { - div_t(0, "timer3_div", "lsp0_24m", lsp0_timer3_clk, 12, 4, 0, common_even_div_table), - div_t(0, "timer4_div", "lsp0_24m", lsp0_timer4_clk, 12, 4, 0, common_even_div_table), - div_t(0, "timer5_div", "lsp0_24m", lsp0_timer5_clk, 12, 4, 0, common_even_div_table), - div_t(0, "ssp0_div", "ssp0_wclk_mux", lsp0_ssp0_clk, 12, 4, 0, 
common_even_div_table), - div_t(0, "ssp1_div", "ssp1_wclk_mux", lsp0_ssp1_clk, 12, 4, 0, common_even_div_table), -}; - -static struct clk_hw_onecell_data lsp0_hw_onecell_data = { - .num = lsp0_nr_clks, - .hws = { - [lsp0_nr_clks - 1] = null, - }, -}; - -static int __init lsp0_clocks_init(struct device_node *np) -{ - void __iomem *reg_base; - int i, ret; - const char *name; - - reg_base = of_iomap(np, 0); - if (!reg_base) { - pr_err("%s: unable to map clk base ", __func__); - return -enxio; - } - - for (i = 0; i < array_size(lsp0_mux_clk); i++) { - if (lsp0_mux_clk[i].id) - lsp0_hw_onecell_data.hws[lsp0_mux_clk[i].id] = - &lsp0_mux_clk[i].mux.hw; - - lsp0_mux_clk[i].mux.reg += (uintptr_t)reg_base; - name = lsp0_mux_clk[i].mux.hw.init->name; - ret = clk_hw_register(null, &lsp0_mux_clk[i].mux.hw); - if (ret) - pr_warn("lsp0 clk %s init error! ", name); - } - - for (i = 0; i < array_size(lsp0_gate_clk); i++) { - if (lsp0_gate_clk[i].id) - lsp0_hw_onecell_data.hws[lsp0_gate_clk[i].id] = - &lsp0_gate_clk[i].gate.hw; - - lsp0_gate_clk[i].gate.reg += (uintptr_t)reg_base; - name = lsp0_gate_clk[i].gate.hw.init->name; - ret = clk_hw_register(null, &lsp0_gate_clk[i].gate.hw); - if (ret) - pr_warn("lsp0 clk %s init error! ", name); - } - - for (i = 0; i < array_size(lsp0_div_clk); i++) { - if (lsp0_div_clk[i].id) - lsp0_hw_onecell_data.hws[lsp0_div_clk[i].id] = - &lsp0_div_clk[i].div.hw; - - lsp0_div_clk[i].div.reg += (uintptr_t)reg_base; - name = lsp0_div_clk[i].div.hw.init->name; - ret = clk_hw_register(null, &lsp0_div_clk[i].div.hw); - if (ret) - pr_warn("lsp0 clk %s init error! 
", name); - } - - ret = of_clk_add_hw_provider(np, of_clk_hw_onecell_get, - &lsp0_hw_onecell_data); - if (ret) { - pr_err("failed to register lsp0 clk provider: %d ", ret); - return ret; - } - - return 0; -} - -pname(lsp1_wclk_common_p) = { - "lsp1_24m", - "lsp1_99m", -}; - -pname(lsp1_wclk_ssp_p) = { - "lsp1_148m5", - "lsp1_99m", - "lsp1_24m", -}; - -static struct zx_clk_mux lsp1_mux_clk[] = { - mux(0, "uart4_wclk_mux", lsp1_wclk_common_p, lsp1_uart4_clk, 4, 1), - mux(0, "uart5_wclk_mux", lsp1_wclk_common_p, lsp1_uart5_clk, 4, 1), - mux(0, "pwm_wclk_mux", lsp1_wclk_common_p, lsp1_pwm_clk, 4, 1), - mux(0, "i2c2_wclk_mux", lsp1_wclk_common_p, lsp1_i2c2_clk, 4, 1), - mux(0, "ssp2_wclk_mux", lsp1_wclk_ssp_p, lsp1_ssp2_clk, 4, 2), - mux(0, "ssp3_wclk_mux", lsp1_wclk_ssp_p, lsp1_ssp3_clk, 4, 2), - mux(0, "ssp4_wclk_mux", lsp1_wclk_ssp_p, lsp1_ssp4_clk, 4, 2), - mux(0, "usim1_wclk_mux", lsp1_wclk_common_p, lsp1_usim1_clk, 4, 1), -}; - -static struct zx_clk_div lsp1_div_clk[] = { - div_t(0, "pwm_div", "pwm_wclk_mux", lsp1_pwm_clk, 12, 4, clk_set_rate_parent, common_div_table), - div_t(0, "ssp2_div", "ssp2_wclk_mux", lsp1_ssp2_clk, 12, 4, clk_set_rate_parent, common_even_div_table), - div_t(0, "ssp3_div", "ssp3_wclk_mux", lsp1_ssp3_clk, 12, 4, clk_set_rate_parent, common_even_div_table), - div_t(0, "ssp4_div", "ssp4_wclk_mux", lsp1_ssp4_clk, 12, 4, clk_set_rate_parent, common_even_div_table), -}; - -static struct zx_clk_gate lsp1_gate_clk[] = { - gate(lsp1_uart4_wclk, "lsp1_uart4_wclk", "uart4_wclk_mux", lsp1_uart4_clk, 1, clk_set_rate_parent, 0), - gate(lsp1_uart5_wclk, "lsp1_uart5_wclk", "uart5_wclk_mux", lsp1_uart5_clk, 1, clk_set_rate_parent, 0), - gate(lsp1_pwm_wclk, "lsp1_pwm_wclk", "pwm_div", lsp1_pwm_clk, 1, clk_set_rate_parent, 0), - gate(lsp1_pwm_pclk, "lsp1_pwm_pclk", "lsp1_pclk", lsp1_pwm_clk, 0, 0, 0), - gate(lsp1_i2c2_wclk, "lsp1_i2c2_wclk", "i2c2_wclk_mux", lsp1_i2c2_clk, 1, clk_set_rate_parent, 0), - gate(lsp1_ssp2_wclk, "lsp1_ssp2_wclk", "ssp2_div", 
lsp1_ssp2_clk, 1, clk_set_rate_parent, 0), - gate(lsp1_ssp3_wclk, "lsp1_ssp3_wclk", "ssp3_div", lsp1_ssp3_clk, 1, clk_set_rate_parent, 0), - gate(lsp1_ssp4_wclk, "lsp1_ssp4_wclk", "ssp4_div", lsp1_ssp4_clk, 1, clk_set_rate_parent, 0), - gate(lsp1_usim1_wclk, "lsp1_usim1_wclk", "usim1_wclk_mux", lsp1_usim1_clk, 1, clk_set_rate_parent, 0), -}; - -static struct clk_hw_onecell_data lsp1_hw_onecell_data = { - .num = lsp1_nr_clks, - .hws = { - [lsp1_nr_clks - 1] = null, - }, -}; - -static int __init lsp1_clocks_init(struct device_node *np) -{ - void __iomem *reg_base; - int i, ret; - const char *name; - - reg_base = of_iomap(np, 0); - if (!reg_base) { - pr_err("%s: unable to map clk base ", __func__); - return -enxio; - } - - for (i = 0; i < array_size(lsp1_mux_clk); i++) { - if (lsp1_mux_clk[i].id) - lsp1_hw_onecell_data.hws[lsp1_mux_clk[i].id] = - &lsp1_mux_clk[i].mux.hw; - - lsp1_mux_clk[i].mux.reg += (uintptr_t)reg_base; - name = lsp1_mux_clk[i].mux.hw.init->name; - ret = clk_hw_register(null, &lsp1_mux_clk[i].mux.hw); - if (ret) - pr_warn("lsp1 clk %s init error! ", name); - } - - for (i = 0; i < array_size(lsp1_gate_clk); i++) { - if (lsp1_gate_clk[i].id) - lsp1_hw_onecell_data.hws[lsp1_gate_clk[i].id] = - &lsp1_gate_clk[i].gate.hw; - - lsp1_gate_clk[i].gate.reg += (uintptr_t)reg_base; - name = lsp1_gate_clk[i].gate.hw.init->name; - ret = clk_hw_register(null, &lsp1_gate_clk[i].gate.hw); - if (ret) - pr_warn("lsp1 clk %s init error! ", name); - } - - for (i = 0; i < array_size(lsp1_div_clk); i++) { - if (lsp1_div_clk[i].id) - lsp1_hw_onecell_data.hws[lsp1_div_clk[i].id] = - &lsp1_div_clk[i].div.hw; - - lsp1_div_clk[i].div.reg += (uintptr_t)reg_base; - name = lsp1_div_clk[i].div.hw.init->name; - ret = clk_hw_register(null, &lsp1_div_clk[i].div.hw); - if (ret) - pr_warn("lsp1 clk %s init error! 
", name); - } - - ret = of_clk_add_hw_provider(np, of_clk_hw_onecell_get, - &lsp1_hw_onecell_data); - if (ret) { - pr_err("failed to register lsp1 clk provider: %d ", ret); - return ret; - } - - return 0; -} - -pname(audio_wclk_common_p) = { - "audio_99m", - "audio_24m", -}; - -pname(audio_timer_p) = { - "audio_24m", - "audio_32k", -}; - -static struct zx_clk_mux audio_mux_clk[] = { - mux(i2s0_wclk_mux, "i2s0_wclk_mux", audio_wclk_common_p, audio_i2s0_clk, 0, 1), - mux(i2s1_wclk_mux, "i2s1_wclk_mux", audio_wclk_common_p, audio_i2s1_clk, 0, 1), - mux(i2s2_wclk_mux, "i2s2_wclk_mux", audio_wclk_common_p, audio_i2s2_clk, 0, 1), - mux(i2s3_wclk_mux, "i2s3_wclk_mux", audio_wclk_common_p, audio_i2s3_clk, 0, 1), - mux(0, "i2c0_wclk_mux", audio_wclk_common_p, audio_i2c0_clk, 0, 1), - mux(0, "spdif0_wclk_mux", audio_wclk_common_p, audio_spdif0_clk, 0, 1), - mux(0, "spdif1_wclk_mux", audio_wclk_common_p, audio_spdif1_clk, 0, 1), - mux(0, "timer_wclk_mux", audio_timer_p, audio_timer_clk, 0, 1), -}; - -static struct clk_zx_audio_divider audio_adiv_clk[] = { - audio_div(0, "i2s0_wclk_div", "i2s0_wclk_mux", audio_i2s0_div_cfg1), - audio_div(0, "i2s1_wclk_div", "i2s1_wclk_mux", audio_i2s1_div_cfg1), - audio_div(0, "i2s2_wclk_div", "i2s2_wclk_mux", audio_i2s2_div_cfg1), - audio_div(0, "i2s3_wclk_div", "i2s3_wclk_mux", audio_i2s3_div_cfg1), - audio_div(0, "spdif0_wclk_div", "spdif0_wclk_mux", audio_spdif0_div_cfg1), - audio_div(0, "spdif1_wclk_div", "spdif1_wclk_mux", audio_spdif1_div_cfg1), -}; - -static struct zx_clk_div audio_div_clk[] = { - div_t(0, "tdm_wclk_div", "audio_16m384", audio_tdm_clk, 8, 4, 0, common_div_table), -}; - -static struct zx_clk_gate audio_gate_clk[] = { - gate(audio_i2s0_wclk, "i2s0_wclk", "i2s0_wclk_div", audio_i2s0_clk, 9, clk_set_rate_parent, 0), - gate(audio_i2s1_wclk, "i2s1_wclk", "i2s1_wclk_div", audio_i2s1_clk, 9, clk_set_rate_parent, 0), - gate(audio_i2s2_wclk, "i2s2_wclk", "i2s2_wclk_div", audio_i2s2_clk, 9, clk_set_rate_parent, 0), - 
gate(audio_i2s3_wclk, "i2s3_wclk", "i2s3_wclk_div", audio_i2s3_clk, 9, clk_set_rate_parent, 0), - gate(audio_i2s0_pclk, "i2s0_pclk", "clk49m5", audio_i2s0_clk, 8, 0, 0), - gate(audio_i2s1_pclk, "i2s1_pclk", "clk49m5", audio_i2s1_clk, 8, 0, 0), - gate(audio_i2s2_pclk, "i2s2_pclk", "clk49m5", audio_i2s2_clk, 8, 0, 0), - gate(audio_i2s3_pclk, "i2s3_pclk", "clk49m5", audio_i2s3_clk, 8, 0, 0), - gate(audio_i2c0_wclk, "i2c0_wclk", "i2c0_wclk_mux", audio_i2c0_clk, 9, clk_set_rate_parent, 0), - gate(audio_spdif0_wclk, "spdif0_wclk", "spdif0_wclk_div", audio_spdif0_clk, 9, clk_set_rate_parent, 0), - gate(audio_spdif1_wclk, "spdif1_wclk", "spdif1_wclk_div", audio_spdif1_clk, 9, clk_set_rate_parent, 0), - gate(audio_tdm_wclk, "tdm_wclk", "tdm_wclk_div", audio_tdm_clk, 17, clk_set_rate_parent, 0), - gate(audio_ts_pclk, "tempsensor_pclk", "clk49m5", audio_ts_clk, 1, 0, 0), -}; - -static struct clk_hw_onecell_data audio_hw_onecell_data = { - .num = audio_nr_clks, - .hws = { - [audio_nr_clks - 1] = null, - }, -}; - -static int __init audio_clocks_init(struct device_node *np) -{ - void __iomem *reg_base; - int i, ret; - const char *name; - - reg_base = of_iomap(np, 0); - if (!reg_base) { - pr_err("%s: unable to map audio clk base ", __func__); - return -enxio; - } - - for (i = 0; i < array_size(audio_mux_clk); i++) { - if (audio_mux_clk[i].id) - audio_hw_onecell_data.hws[audio_mux_clk[i].id] = - &audio_mux_clk[i].mux.hw; - - audio_mux_clk[i].mux.reg += (uintptr_t)reg_base; - name = audio_mux_clk[i].mux.hw.init->name; - ret = clk_hw_register(null, &audio_mux_clk[i].mux.hw); - if (ret) - pr_warn("audio clk %s init error! 
", name); - } - - for (i = 0; i < array_size(audio_adiv_clk); i++) { - if (audio_adiv_clk[i].id) - audio_hw_onecell_data.hws[audio_adiv_clk[i].id] = - &audio_adiv_clk[i].hw; - - audio_adiv_clk[i].reg_base += (uintptr_t)reg_base; - name = audio_adiv_clk[i].hw.init->name; - ret = clk_hw_register(null, &audio_adiv_clk[i].hw); - if (ret) - pr_warn("audio clk %s init error! ", name); - } - - for (i = 0; i < array_size(audio_div_clk); i++) { - if (audio_div_clk[i].id) - audio_hw_onecell_data.hws[audio_div_clk[i].id] = - &audio_div_clk[i].div.hw; - - audio_div_clk[i].div.reg += (uintptr_t)reg_base; - name = audio_div_clk[i].div.hw.init->name; - ret = clk_hw_register(null, &audio_div_clk[i].div.hw); - if (ret) - pr_warn("audio clk %s init error! ", name); - } - - for (i = 0; i < array_size(audio_gate_clk); i++) { - if (audio_gate_clk[i].id) - audio_hw_onecell_data.hws[audio_gate_clk[i].id] = - &audio_gate_clk[i].gate.hw; - - audio_gate_clk[i].gate.reg += (uintptr_t)reg_base; - name = audio_gate_clk[i].gate.hw.init->name; - ret = clk_hw_register(null, &audio_gate_clk[i].gate.hw); - if (ret) - pr_warn("audio clk %s init error! 
", name); - } - - ret = of_clk_add_hw_provider(np, of_clk_hw_onecell_get, - &audio_hw_onecell_data); - if (ret) { - pr_err("failed to register audio clk provider: %d ", ret); - return ret; - } - - return 0; -} - -static const struct of_device_id zx_clkc_match_table[] = { - { .compatible = "zte,zx296718-topcrm", .data = &top_clocks_init }, - { .compatible = "zte,zx296718-lsp0crm", .data = &lsp0_clocks_init }, - { .compatible = "zte,zx296718-lsp1crm", .data = &lsp1_clocks_init }, - { .compatible = "zte,zx296718-audiocrm", .data = &audio_clocks_init }, - { } -}; - -static int zx_clkc_probe(struct platform_device *pdev) -{ - int (*init_fn)(struct device_node *np); - struct device_node *np = pdev->dev.of_node; - - init_fn = of_device_get_match_data(&pdev->dev); - if (!init_fn) { - dev_err(&pdev->dev, "error: no device match found "); - return -enodev; - } - - return init_fn(np); -} - -static struct platform_driver zx_clk_driver = { - .probe = zx_clkc_probe, - .driver = { - .name = "zx296718-clkc", - .of_match_table = zx_clkc_match_table, - }, -}; - -static int __init zx_clk_init(void) -{ - return platform_driver_register(&zx_clk_driver); -} -core_initcall(zx_clk_init); diff --git a/drivers/clk/zte/clk.c b/drivers/clk/zte/clk.c --- a/drivers/clk/zte/clk.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-only -/* - * copyright 2014 linaro ltd. - * copyright (c) 2014 zte corporation. 
- */ - -#include <linux/clk-provider.h> -#include <linux/err.h> -#include <linux/gcd.h> -#include <linux/io.h> -#include <linux/iopoll.h> -#include <linux/slab.h> -#include <linux/spinlock.h> -#include <asm/div64.h> - -#include "clk.h" - -#define to_clk_zx_pll(_hw) container_of(_hw, struct clk_zx_pll, hw) -#define to_clk_zx_audio(_hw) container_of(_hw, struct clk_zx_audio, hw) - -#define cfg0_cfg1_offset 4 -#define lock_flag 30 -#define power_down 31 - -static int rate_to_idx(struct clk_zx_pll *zx_pll, unsigned long rate) -{ - const struct zx_pll_config *config = zx_pll->lookup_table; - int i; - - for (i = 0; i < zx_pll->count; i++) { - if (config[i].rate > rate) - return i > 0 ? i - 1 : 0; - - if (config[i].rate == rate) - return i; - } - - return i - 1; -} - -static int hw_to_idx(struct clk_zx_pll *zx_pll) -{ - const struct zx_pll_config *config = zx_pll->lookup_table; - u32 hw_cfg0, hw_cfg1; - int i; - - hw_cfg0 = readl_relaxed(zx_pll->reg_base); - hw_cfg1 = readl_relaxed(zx_pll->reg_base + cfg0_cfg1_offset); - - /* for matching the value in lookup table */ - hw_cfg0 &= ~bit(zx_pll->lock_bit); - - /* check availability of pd_bit */ - if (zx_pll->pd_bit < 32) - hw_cfg0 |= bit(zx_pll->pd_bit); - - for (i = 0; i < zx_pll->count; i++) { - if (hw_cfg0 == config[i].cfg0 && hw_cfg1 == config[i].cfg1) - return i; - } - - return -einval; -} - -static unsigned long zx_pll_recalc_rate(struct clk_hw *hw, - unsigned long parent_rate) -{ - struct clk_zx_pll *zx_pll = to_clk_zx_pll(hw); - int idx; - - idx = hw_to_idx(zx_pll); - if (unlikely(idx == -einval)) - return 0; - - return zx_pll->lookup_table[idx].rate; -} - -static long zx_pll_round_rate(struct clk_hw *hw, unsigned long rate, - unsigned long *prate) -{ - struct clk_zx_pll *zx_pll = to_clk_zx_pll(hw); - int idx; - - idx = rate_to_idx(zx_pll, rate); - - return zx_pll->lookup_table[idx].rate; -} - -static int zx_pll_set_rate(struct clk_hw *hw, unsigned long rate, - unsigned long parent_rate) -{ - /* assume current cpu is 
not running on current pll */ - struct clk_zx_pll *zx_pll = to_clk_zx_pll(hw); - const struct zx_pll_config *config; - int idx; - - idx = rate_to_idx(zx_pll, rate); - config = &zx_pll->lookup_table[idx]; - - writel_relaxed(config->cfg0, zx_pll->reg_base); - writel_relaxed(config->cfg1, zx_pll->reg_base + cfg0_cfg1_offset); - - return 0; -} - -static int zx_pll_enable(struct clk_hw *hw) -{ - struct clk_zx_pll *zx_pll = to_clk_zx_pll(hw); - u32 reg; - - /* if pd_bit is not available, simply return success. */ - if (zx_pll->pd_bit > 31) - return 0; - - reg = readl_relaxed(zx_pll->reg_base); - writel_relaxed(reg & ~bit(zx_pll->pd_bit), zx_pll->reg_base); - - return readl_relaxed_poll_timeout(zx_pll->reg_base, reg, - reg & bit(zx_pll->lock_bit), 0, 100); -} - -static void zx_pll_disable(struct clk_hw *hw) -{ - struct clk_zx_pll *zx_pll = to_clk_zx_pll(hw); - u32 reg; - - if (zx_pll->pd_bit > 31) - return; - - reg = readl_relaxed(zx_pll->reg_base); - writel_relaxed(reg | bit(zx_pll->pd_bit), zx_pll->reg_base); -} - -static int zx_pll_is_enabled(struct clk_hw *hw) -{ - struct clk_zx_pll *zx_pll = to_clk_zx_pll(hw); - u32 reg; - - reg = readl_relaxed(zx_pll->reg_base); - - return !(reg & bit(zx_pll->pd_bit)); -} - -const struct clk_ops zx_pll_ops = { - .recalc_rate = zx_pll_recalc_rate, - .round_rate = zx_pll_round_rate, - .set_rate = zx_pll_set_rate, - .enable = zx_pll_enable, - .disable = zx_pll_disable, - .is_enabled = zx_pll_is_enabled, -}; -export_symbol(zx_pll_ops); - -struct clk *clk_register_zx_pll(const char *name, const char *parent_name, - unsigned long flags, void __iomem *reg_base, - const struct zx_pll_config *lookup_table, - int count, spinlock_t *lock) -{ - struct clk_zx_pll *zx_pll; - struct clk *clk; - struct clk_init_data init; - - zx_pll = kzalloc(sizeof(*zx_pll), gfp_kernel); - if (!zx_pll) - return err_ptr(-enomem); - - init.name = name; - init.ops = &zx_pll_ops; - init.flags = flags; - init.parent_names = parent_name ? 
&parent_name : null; - init.num_parents = parent_name ? 1 : 0; - - zx_pll->reg_base = reg_base; - zx_pll->lookup_table = lookup_table; - zx_pll->count = count; - zx_pll->lock_bit = lock_flag; - zx_pll->pd_bit = power_down; - zx_pll->lock = lock; - zx_pll->hw.init = &init; - - clk = clk_register(null, &zx_pll->hw); - if (is_err(clk)) - kfree(zx_pll); - - return clk; -} - -#define bpar 1000000 -static u32 calc_reg(u32 parent_rate, u32 rate) -{ - u32 sel, integ, fra_div, tmp; - u64 tmp64 = (u64)parent_rate * bpar; - - do_div(tmp64, rate); - integ = (u32)tmp64 / bpar; - integ = integ >> 1; - - tmp = (u32)tmp64 % bpar; - sel = tmp / bpar; - - tmp = tmp % bpar; - fra_div = tmp * 0xff / bpar; - tmp = (sel << 24) | (integ << 16) | (0xff << 8) | fra_div; - - /* set i2s integer divider as 1. this bit is reserved for spdif - * and do no harm. - */ - tmp |= bit(28); - return tmp; -} - -static u32 calc_rate(u32 reg, u32 parent_rate) -{ - u32 sel, integ, fra_div, tmp; - u64 tmp64 = (u64)parent_rate * bpar; - - tmp = reg; - sel = (tmp >> 24) & bit(0); - integ = (tmp >> 16) & 0xff; - fra_div = tmp & 0xff; - - tmp = fra_div * bpar; - tmp = tmp / 0xff; - tmp += sel * bpar; - tmp += 2 * integ * bpar; - do_div(tmp64, tmp); - - return (u32)tmp64; -} - -static unsigned long zx_audio_recalc_rate(struct clk_hw *hw, - unsigned long parent_rate) -{ - struct clk_zx_audio *zx_audio = to_clk_zx_audio(hw); - u32 reg; - - reg = readl_relaxed(zx_audio->reg_base); - return calc_rate(reg, parent_rate); -} - -static long zx_audio_round_rate(struct clk_hw *hw, unsigned long rate, - unsigned long *prate) -{ - u32 reg; - - if (rate * 2 > *prate) - return -einval; - - reg = calc_reg(*prate, rate); - return calc_rate(reg, *prate); -} - -static int zx_audio_set_rate(struct clk_hw *hw, unsigned long rate, - unsigned long parent_rate) -{ - struct clk_zx_audio *zx_audio = to_clk_zx_audio(hw); - u32 reg; - - reg = calc_reg(parent_rate, rate); - writel_relaxed(reg, zx_audio->reg_base); - - return 0; -} - 
-#define zx_audio_en bit(25) -static int zx_audio_enable(struct clk_hw *hw) -{ - struct clk_zx_audio *zx_audio = to_clk_zx_audio(hw); - u32 reg; - - reg = readl_relaxed(zx_audio->reg_base); - writel_relaxed(reg & ~zx_audio_en, zx_audio->reg_base); - return 0; -} - -static void zx_audio_disable(struct clk_hw *hw) -{ - struct clk_zx_audio *zx_audio = to_clk_zx_audio(hw); - u32 reg; - - reg = readl_relaxed(zx_audio->reg_base); - writel_relaxed(reg | zx_audio_en, zx_audio->reg_base); -} - -static const struct clk_ops zx_audio_ops = { - .recalc_rate = zx_audio_recalc_rate, - .round_rate = zx_audio_round_rate, - .set_rate = zx_audio_set_rate, - .enable = zx_audio_enable, - .disable = zx_audio_disable, -}; - -struct clk *clk_register_zx_audio(const char *name, - const char * const parent_name, - unsigned long flags, - void __iomem *reg_base) -{ - struct clk_zx_audio *zx_audio; - struct clk *clk; - struct clk_init_data init; - - zx_audio = kzalloc(sizeof(*zx_audio), gfp_kernel); - if (!zx_audio) - return err_ptr(-enomem); - - init.name = name; - init.ops = &zx_audio_ops; - init.flags = flags; - init.parent_names = parent_name ? &parent_name : null; - init.num_parents = parent_name ? 
1 : 0; - - zx_audio->reg_base = reg_base; - zx_audio->hw.init = &init; - - clk = clk_register(null, &zx_audio->hw); - if (is_err(clk)) - kfree(zx_audio); - - return clk; -} - -#define clk_audio_div_frac bit(0) -#define clk_audio_div_int bit(1) -#define clk_audio_div_uncommon bit(1) - -#define clk_audio_div_frac_nshift 16 -#define clk_audio_div_int_frac_re bit(16) -#define clk_audio_div_int_frac_max (0xffff) -#define clk_audio_div_int_frac_min (0x2) -#define clk_audio_div_int_int_shift 24 -#define clk_audio_div_int_int_width 4 - -struct zx_clk_audio_div_table { - unsigned long rate; - unsigned int int_reg; - unsigned int frac_reg; -}; - -#define to_clk_zx_audio_div(_hw) container_of(_hw, struct clk_zx_audio_divider, hw) - -static unsigned long audio_calc_rate(struct clk_zx_audio_divider *audio_div, - u32 reg_frac, u32 reg_int, - unsigned long parent_rate) -{ - unsigned long rate, m, n; - - m = reg_frac & 0xffff; - n = (reg_frac >> 16) & 0xffff; - - m = (reg_int & 0xffff) * n + m; - rate = (parent_rate * n) / m; - - return rate; -} - -static void audio_calc_reg(struct clk_zx_audio_divider *audio_div, - struct zx_clk_audio_div_table *div_table, - unsigned long rate, unsigned long parent_rate) -{ - unsigned int reg_int, reg_frac; - unsigned long m, n, div; - - reg_int = parent_rate / rate; - - if (reg_int > clk_audio_div_int_frac_max) - reg_int = clk_audio_div_int_frac_max; - else if (reg_int < clk_audio_div_int_frac_min) - reg_int = 0; - m = parent_rate - rate * reg_int; - n = rate; - - div = gcd(m, n); - m = m / div; - n = n / div; - - if ((m >> 16) || (n >> 16)) { - if (m > n) { - n = n * 0xffff / m; - m = 0xffff; - } else { - m = m * 0xffff / n; - n = 0xffff; - } - } - reg_frac = m | (n << 16); - - div_table->rate = parent_rate * n / (reg_int * n + m); - div_table->int_reg = reg_int; - div_table->frac_reg = reg_frac; -} - -static unsigned long zx_audio_div_recalc_rate(struct clk_hw *hw, - unsigned long parent_rate) -{ - struct clk_zx_audio_divider *zx_audio_div = 
to_clk_zx_audio_div(hw); - u32 reg_frac, reg_int; - - reg_frac = readl_relaxed(zx_audio_div->reg_base); - reg_int = readl_relaxed(zx_audio_div->reg_base + 0x4); - - return audio_calc_rate(zx_audio_div, reg_frac, reg_int, parent_rate); -} - -static long zx_audio_div_round_rate(struct clk_hw *hw, unsigned long rate, - unsigned long *prate) -{ - struct clk_zx_audio_divider *zx_audio_div = to_clk_zx_audio_div(hw); - struct zx_clk_audio_div_table divt; - - audio_calc_reg(zx_audio_div, &divt, rate, *prate); - - return audio_calc_rate(zx_audio_div, divt.frac_reg, divt.int_reg, *prate); -} - -static int zx_audio_div_set_rate(struct clk_hw *hw, unsigned long rate, - unsigned long parent_rate) -{ - struct clk_zx_audio_divider *zx_audio_div = to_clk_zx_audio_div(hw); - struct zx_clk_audio_div_table divt; - unsigned int val; - - audio_calc_reg(zx_audio_div, &divt, rate, parent_rate); - if (divt.rate != rate) - pr_debug("the real rate is:%ld", divt.rate); - - writel_relaxed(divt.frac_reg, zx_audio_div->reg_base); - - val = readl_relaxed(zx_audio_div->reg_base + 0x4); - val &= ~0xffff; - val |= divt.int_reg | clk_audio_div_int_frac_re; - writel_relaxed(val, zx_audio_div->reg_base + 0x4); - - mdelay(1); - - val = readl_relaxed(zx_audio_div->reg_base + 0x4); - val &= ~clk_audio_div_int_frac_re; - writel_relaxed(val, zx_audio_div->reg_base + 0x4); - - return 0; -} - -const struct clk_ops zx_audio_div_ops = { - .recalc_rate = zx_audio_div_recalc_rate, - .round_rate = zx_audio_div_round_rate, - .set_rate = zx_audio_div_set_rate, -}; diff --git a/drivers/clk/zte/clk.h b/drivers/clk/zte/clk.h --- a/drivers/clk/zte/clk.h +++ /dev/null -/* spdx-license-identifier: gpl-2.0-only */ -/* - * copyright 2015 linaro ltd. - * copyright (c) 2014 zte corporation. 
- */ - -#ifndef __zte_clk_h -#define __zte_clk_h -#include <linux/clk-provider.h> -#include <linux/spinlock.h> - -#define pname(x) static const char *x[] - -struct zx_pll_config { - unsigned long rate; - u32 cfg0; - u32 cfg1; -}; - -struct clk_zx_pll { - struct clk_hw hw; - void __iomem *reg_base; - const struct zx_pll_config *lookup_table; /* order by rate asc */ - int count; - spinlock_t *lock; - u8 pd_bit; /* power down bit */ - u8 lock_bit; /* pll lock flag bit */ -}; - -#define pll_rate(_rate, _cfg0, _cfg1) \ -{ \ - .rate = _rate, \ - .cfg0 = _cfg0, \ - .cfg1 = _cfg1, \ -} - -#define zx_pll(_name, _parent, _reg, _table, _pd, _lock) \ -{ \ - .reg_base = (void __iomem *) _reg, \ - .lookup_table = _table, \ - .count = array_size(_table), \ - .pd_bit = _pd, \ - .lock_bit = _lock, \ - .hw.init = clk_hw_init(_name, _parent, &zx_pll_ops, \ - clk_get_rate_nocache), \ -} - -/* - * the pd_bit is not available on zx296718, so let's pass something - * bigger than 31, e.g. 0xff, to indicate that. 
- */ -#define zx296718_pll(_name, _parent, _reg, _table) \ -zx_pll(_name, _parent, _reg, _table, 0xff, 30) - -struct zx_clk_gate { - struct clk_gate gate; - u16 id; -}; - -#define gate(_id, _name, _parent, _reg, _bit, _flag, _gflags) \ -{ \ - .gate = { \ - .reg = (void __iomem *) _reg, \ - .bit_idx = (_bit), \ - .flags = _gflags, \ - .lock = &clk_lock, \ - .hw.init = clk_hw_init(_name, \ - _parent, \ - &clk_gate_ops, \ - _flag | clk_ignore_unused), \ - }, \ - .id = _id, \ -} - -struct zx_clk_fixed_factor { - struct clk_fixed_factor factor; - u16 id; -}; - -#define ffactor(_id, _name, _parent, _mult, _div, _flag) \ -{ \ - .factor = { \ - .div = _div, \ - .mult = _mult, \ - .hw.init = clk_hw_init(_name, \ - _parent, \ - &clk_fixed_factor_ops, \ - _flag), \ - }, \ - .id = _id, \ -} - -struct zx_clk_mux { - struct clk_mux mux; - u16 id; -}; - -#define mux_f(_id, _name, _parent, _reg, _shift, _width, _flag, _mflag) \ -{ \ - .mux = { \ - .reg = (void __iomem *) _reg, \ - .mask = bit(_width) - 1, \ - .shift = _shift, \ - .flags = _mflag, \ - .lock = &clk_lock, \ - .hw.init = clk_hw_init_parents(_name, \ - _parent, \ - &clk_mux_ops, \ - _flag), \ - }, \ - .id = _id, \ -} - -#define mux(_id, _name, _parent, _reg, _shift, _width) \ -mux_f(_id, _name, _parent, _reg, _shift, _width, 0, 0) - -struct zx_clk_div { - struct clk_divider div; - u16 id; -}; - -#define div_t(_id, _name, _parent, _reg, _shift, _width, _flag, _table) \ -{ \ - .div = { \ - .reg = (void __iomem *) _reg, \ - .shift = _shift, \ - .width = _width, \ - .flags = 0, \ - .table = _table, \ - .lock = &clk_lock, \ - .hw.init = clk_hw_init(_name, \ - _parent, \ - &clk_divider_ops, \ - _flag), \ - }, \ - .id = _id, \ -} - -struct clk_zx_audio_divider { - struct clk_hw hw; - void __iomem *reg_base; - unsigned int rate_count; - spinlock_t *lock; - u16 id; -}; - -#define audio_div(_id, _name, _parent, _reg) \ -{ \ - .reg_base = (void __iomem *) _reg, \ - .lock = &clk_lock, \ - .hw.init = clk_hw_init(_name, \ - _parent, 
\ - &zx_audio_div_ops, \ - 0), \ - .id = _id, \ -} - -struct clk *clk_register_zx_pll(const char *name, const char *parent_name, - unsigned long flags, void __iomem *reg_base, - const struct zx_pll_config *lookup_table, int count, spinlock_t *lock); - -struct clk_zx_audio { - struct clk_hw hw; - void __iomem *reg_base; -}; - -struct clk *clk_register_zx_audio(const char *name, - const char * const parent_name, - unsigned long flags, void __iomem *reg_base); - -extern const struct clk_ops zx_pll_ops; -extern const struct clk_ops zx_audio_div_ops; - -#endif diff --git a/include/dt-bindings/clock/zx296702-clock.h b/include/dt-bindings/clock/zx296702-clock.h --- a/include/dt-bindings/clock/zx296702-clock.h +++ /dev/null -/* spdx-license-identifier: gpl-2.0-only */ -/* - * copyright 2014 linaro ltd. - * copyright (c) 2014 zte corporation. - */ - -#ifndef __dt_bindings_clock_zx296702_h -#define __dt_bindings_clock_zx296702_h - -#define zx296702_osc 0 -#define zx296702_pll_a9 1 -#define zx296702_pll_a9_350m 2 -#define zx296702_pll_mac_1000m 3 -#define zx296702_pll_mac_333m 4 -#define zx296702_pll_mm0_1188m 5 -#define zx296702_pll_mm0_396m 6 -#define zx296702_pll_mm0_198m 7 -#define zx296702_pll_mm1_108m 8 -#define zx296702_pll_mm1_72m 9 -#define zx296702_pll_mm1_54m 10 -#define zx296702_pll_lsp_104m 11 -#define zx296702_pll_lsp_26m 12 -#define zx296702_pll_audio_294m912 13 -#define zx296702_pll_ddr_266m 14 -#define zx296702_clk_148m5 15 -#define zx296702_matrix_aclk 16 -#define zx296702_main_hclk 17 -#define zx296702_main_pclk 18 -#define zx296702_clk_500 19 -#define zx296702_clk_250 20 -#define zx296702_clk_125 21 -#define zx296702_clk_74m25 22 -#define zx296702_a9_wclk 23 -#define zx296702_a9_as1_aclk_mux 24 -#define zx296702_a9_trace_clkin_mux 25 -#define zx296702_a9_as1_aclk_div 26 -#define zx296702_clk_2 27 -#define zx296702_clk_27 28 -#define zx296702_decppu_aclk_mux 29 -#define zx296702_ppu_aclk_mux 30 -#define zx296702_mali400_aclk_mux 31 -#define 
zx296702_vou_aclk_mux 32 -#define zx296702_vou_main_wclk_mux 33 -#define zx296702_vou_aux_wclk_mux 34 -#define zx296702_vou_scaler_wclk_mux 35 -#define zx296702_r2d_aclk_mux 36 -#define zx296702_r2d_wclk_mux 37 -#define zx296702_clk_50 38 -#define zx296702_clk_25 39 -#define zx296702_clk_12 40 -#define zx296702_clk_16m384 41 -#define zx296702_clk_32k768 42 -#define zx296702_sec_wclk_div 43 -#define zx296702_ddr_wclk_mux 44 -#define zx296702_nand_wclk_mux 45 -#define zx296702_lsp_26_wclk_mux 46 -#define zx296702_a9_as0_aclk 47 -#define zx296702_a9_as1_aclk 48 -#define zx296702_a9_trace_clkin 49 -#define zx296702_decppu_axi_m_aclk 50 -#define zx296702_decppu_ahb_s_hclk 51 -#define zx296702_ppu_axi_m_aclk 52 -#define zx296702_ppu_ahb_s_hclk 53 -#define zx296702_vou_axi_m_aclk 54 -#define zx296702_vou_apb_pclk 55 -#define zx296702_vou_main_channel_wclk 56 -#define zx296702_vou_aux_channel_wclk 57 -#define zx296702_vou_hdmi_osclk_cec 58 -#define zx296702_vou_scaler_wclk 59 -#define zx296702_mali400_axi_m_aclk 60 -#define zx296702_mali400_apb_pclk 61 -#define zx296702_r2d_wclk 62 -#define zx296702_r2d_axi_m_aclk 63 -#define zx296702_r2d_ahb_hclk 64 -#define zx296702_ddr3_axi_s0_aclk 65 -#define zx296702_ddr3_apb_pclk 66 -#define zx296702_ddr3_wclk 67 -#define zx296702_usb20_0_ahb_hclk 68 -#define zx296702_usb20_0_extrefclk 69 -#define zx296702_usb20_1_ahb_hclk 70 -#define zx296702_usb20_1_extrefclk 71 -#define zx296702_usb20_2_ahb_hclk 72 -#define zx296702_usb20_2_extrefclk 73 -#define zx296702_gmac_axi_m_aclk 74 -#define zx296702_gmac_apb_pclk 75 -#define zx296702_gmac_125_clkin 76 -#define zx296702_gmac_rmii_clkin 77 -#define zx296702_gmac_25m_clk 78 -#define zx296702_nandflash_ahb_hclk 79 -#define zx296702_nandflash_wclk 80 -#define zx296702_lsp0_apb_pclk 81 -#define zx296702_lsp0_ahb_hclk 82 -#define zx296702_lsp0_26m_wclk 83 -#define zx296702_lsp0_104m_wclk 84 -#define zx296702_lsp0_16m384_wclk 85 -#define zx296702_lsp1_apb_pclk 86 -#define zx296702_lsp1_26m_wclk 87 
-#define zx296702_lsp1_104m_wclk 88 -#define zx296702_lsp1_32k_clk 89 -#define zx296702_aon_hclk 90 -#define zx296702_sys_ctrl_pclk 91 -#define zx296702_dma_pclk 92 -#define zx296702_dma_aclk 93 -#define zx296702_sec_hclk 94 -#define zx296702_aes_wclk 95 -#define zx296702_des_wclk 96 -#define zx296702_iram_aclk 97 -#define zx296702_irom_aclk 98 -#define zx296702_boot_ctrl_hclk 99 -#define zx296702_efuse_clk_30 100 -#define zx296702_vou_main_channel_div 101 -#define zx296702_vou_aux_channel_div 102 -#define zx296702_vou_tv_enc_hd_div 103 -#define zx296702_vou_tv_enc_sd_div 104 -#define zx296702_vl0_mux 105 -#define zx296702_vl1_mux 106 -#define zx296702_vl2_mux 107 -#define zx296702_gl0_mux 108 -#define zx296702_gl1_mux 109 -#define zx296702_gl2_mux 110 -#define zx296702_wb_mux 111 -#define zx296702_hdmi_mux 112 -#define zx296702_vou_tv_enc_hd_mux 113 -#define zx296702_vou_tv_enc_sd_mux 114 -#define zx296702_vl0_clk 115 -#define zx296702_vl1_clk 116 -#define zx296702_vl2_clk 117 -#define zx296702_gl0_clk 118 -#define zx296702_gl1_clk 119 -#define zx296702_gl2_clk 120 -#define zx296702_wb_clk 121 -#define zx296702_cl_clk 122 -#define zx296702_main_mix_clk 123 -#define zx296702_aux_mix_clk 124 -#define zx296702_hdmi_clk 125 -#define zx296702_vou_tv_enc_hd_dac_clk 126 -#define zx296702_vou_tv_enc_sd_dac_clk 127 -#define zx296702_a9_periphclk 128 -#define zx296702_topclk_end 129 - -#define zx296702_sdmmc1_wclk_mux 0 -#define zx296702_sdmmc1_wclk_div 1 -#define zx296702_sdmmc1_wclk 2 -#define zx296702_sdmmc1_pclk 3 -#define zx296702_spdif0_wclk_mux 4 -#define zx296702_spdif0_wclk 5 -#define zx296702_spdif0_pclk 6 -#define zx296702_spdif0_div 7 -#define zx296702_i2s0_wclk_mux 8 -#define zx296702_i2s0_wclk 9 -#define zx296702_i2s0_pclk 10 -#define zx296702_i2s0_div 11 -#define zx296702_i2s1_wclk_mux 12 -#define zx296702_i2s1_wclk 13 -#define zx296702_i2s1_pclk 14 -#define zx296702_i2s1_div 15 -#define zx296702_i2s2_wclk_mux 16 -#define zx296702_i2s2_wclk 17 -#define 
zx296702_i2s2_pclk 18 -#define zx296702_i2s2_div 19 -#define zx296702_gpio_clk 20 -#define zx296702_lsp0clk_end 21 - -#define zx296702_uart0_wclk_mux 0 -#define zx296702_uart0_wclk 1 -#define zx296702_uart0_pclk 2 -#define zx296702_uart1_wclk_mux 3 -#define zx296702_uart1_wclk 4 -#define zx296702_uart1_pclk 5 -#define zx296702_sdmmc0_wclk_mux 6 -#define zx296702_sdmmc0_wclk_div 7 -#define zx296702_sdmmc0_wclk 8 -#define zx296702_sdmmc0_pclk 9 -#define zx296702_spdif1_wclk_mux 10 -#define zx296702_spdif1_wclk 11 -#define zx296702_spdif1_pclk 12 -#define zx296702_spdif1_div 13 -#define zx296702_lsp1clk_end 14 - -#endif /* __dt_bindings_clock_zx296702_h */
|
Clock
|
bcbe6005eb18d2cd565f202d9351737061753894
|
arnd bergmann
|
drivers
|
clk
|
bindings, clock, zte
|
dt-bindings: phy: qcom,qmp: add sm8350 ufs phy bindings
|
add the compatible strings for the ufs phy found on sm8350 soc.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add support for sm8350 ufs
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['yaml']
| 1
| 1
| 0
|
--- diff --git a/documentation/devicetree/bindings/phy/qcom,qmp-phy.yaml b/documentation/devicetree/bindings/phy/qcom,qmp-phy.yaml --- a/documentation/devicetree/bindings/phy/qcom,qmp-phy.yaml +++ b/documentation/devicetree/bindings/phy/qcom,qmp-phy.yaml - qcom,sm8250-qmp-modem-pcie-phy - qcom,sm8250-qmp-usb3-phy - qcom,sm8250-qmp-usb3-uni-phy + - qcom,sm8350-qmp-ufs-phy - qcom,sm8350-qmp-usb3-phy - qcom,sm8350-qmp-usb3-uni-phy - qcom,sdx55-qmp-usb3-uni-phy
|
PHY ("physical layer" framework)
|
d0858167492b59297c5c2aac10cdc9904c5a1cc6
|
vinod koul bjorn andersson bjorn andersson linaro org
|
documentation
|
devicetree
|
bindings, phy
|
phy: qcom-qmp: add ufs v5 registers found in sm8350
|
add the registers for ufs found in sm8350. the ufs phy used in sm8350 seems to have the same offsets as the v5 phy, although documentation for that is lacking.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add support for sm8350 ufs
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['h']
| 1
| 47
| 0
|
--- diff --git a/drivers/phy/qualcomm/phy-qcom-qmp.h b/drivers/phy/qualcomm/phy-qcom-qmp.h --- a/drivers/phy/qualcomm/phy-qcom-qmp.h +++ b/drivers/phy/qualcomm/phy-qcom-qmp.h +/* only for qmp v5 phy - qserdes com registers */ +#define qserdes_v5_com_pll_ivco 0x058 +#define qserdes_v5_com_cp_ctrl_mode0 0x074 +#define qserdes_v5_com_cp_ctrl_mode1 0x078 +#define qserdes_v5_com_pll_rctrl_mode0 0x07c +#define qserdes_v5_com_pll_rctrl_mode1 0x080 +#define qserdes_v5_com_pll_cctrl_mode0 0x084 +#define qserdes_v5_com_pll_cctrl_mode1 0x088 +#define qserdes_v5_com_sysclk_en_sel 0x094 +#define qserdes_v5_com_lock_cmp_en 0x0a4 +#define qserdes_v5_com_lock_cmp1_mode0 0x0ac +#define qserdes_v5_com_lock_cmp2_mode0 0x0b0 +#define qserdes_v5_com_lock_cmp1_mode1 0x0b4 +#define qserdes_v5_com_dec_start_mode0 0x0bc +#define qserdes_v5_com_lock_cmp2_mode1 0x0b8 +#define qserdes_v5_com_dec_start_mode1 0x0c4 +#define qserdes_v5_com_vco_tune_map 0x10c +#define qserdes_v5_com_vco_tune_initval2 0x124 +#define qserdes_v5_com_hsclk_sel 0x158 +#define qserdes_v5_com_hsclk_hs_switch_sel 0x15c +#define qserdes_v5_com_bin_vcocal_cmp_code1_mode0 0x1ac +#define qserdes_v5_com_bin_vcocal_cmp_code2_mode0 0x1b0 +#define qserdes_v5_com_bin_vcocal_cmp_code1_mode1 0x1b4 +#define qserdes_v5_com_bin_vcocal_hsclk_sel 0x1bc +#define qserdes_v5_com_bin_vcocal_cmp_code2_mode1 0x1b8 + +#define qserdes_v5_tx_pwm_gear_1_divider_band0_1 0x178 +#define qserdes_v5_tx_pwm_gear_2_divider_band0_1 0x17c +#define qserdes_v5_tx_pwm_gear_3_divider_band0_1 0x180 +#define qserdes_v5_tx_pwm_gear_4_divider_band0_1 0x184 +/* only for qmp v5 phy - ufs pcs registers */ +#define qphy_v5_pcs_ufs_timer_20us_coreclk_steps_msb 0x00c +#define qphy_v5_pcs_ufs_timer_20us_coreclk_steps_lsb 0x010 +#define qphy_v5_pcs_ufs_pll_cntl 0x02c +#define qphy_v5_pcs_ufs_tx_large_amp_drv_lvl 0x030 +#define qphy_v5_pcs_ufs_tx_small_amp_drv_lvl 0x038 +#define qphy_v5_pcs_ufs_tx_hsgear_capability 0x074 +#define qphy_v5_pcs_ufs_rx_hsgear_capability 0x0b4 
+#define qphy_v5_pcs_ufs_debug_bus_clksel 0x124 +#define qphy_v5_pcs_ufs_rx_min_hibern8_time 0x150 +#define qphy_v5_pcs_ufs_rx_sigdet_ctrl1 0x154 +#define qphy_v5_pcs_ufs_rx_sigdet_ctrl2 0x158 +#define qphy_v5_pcs_ufs_tx_pwm_gear_band 0x160 +#define qphy_v5_pcs_ufs_tx_hs_gear_band 0x168 +#define qphy_v5_pcs_ufs_tx_mid_term_ctrl1 0x1d8 +#define qphy_v5_pcs_ufs_multi_lane_ctrl1 0x1e0 +
|
PHY ("physical layer" framework)
|
920abc105b5de6489d61bd8c5d0d44463665ae3f
|
vinod koul bjorn andersson bjorn andersson linaro org
|
drivers
|
phy
|
qualcomm
|
phy: qcom-qmp: add support for sm8350 ufs phy
|
add the tables for init sequences for ufs qmp phy found in sm8350 soc.
|
this release allows mapping a uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage block sizes and performance improvements; support for eager nfs writes; support for thermal power management to control the surface temperature of embedded devices in a unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add support for sm8350 ufs
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['c']
| 1
| 127
| 0
|
--- diff --git a/drivers/phy/qualcomm/phy-qcom-qmp.c b/drivers/phy/qualcomm/phy-qcom-qmp.c --- a/drivers/phy/qualcomm/phy-qcom-qmp.c +++ b/drivers/phy/qualcomm/phy-qcom-qmp.c +static const struct qmp_phy_init_tbl sm8350_ufsphy_serdes_tbl[] = { + qmp_phy_init_cfg(qserdes_v5_com_sysclk_en_sel, 0xd9), + qmp_phy_init_cfg(qserdes_v5_com_hsclk_sel, 0x11), + qmp_phy_init_cfg(qserdes_v5_com_hsclk_hs_switch_sel, 0x00), + qmp_phy_init_cfg(qserdes_v5_com_lock_cmp_en, 0x42), + qmp_phy_init_cfg(qserdes_v5_com_vco_tune_map, 0x02), + qmp_phy_init_cfg(qserdes_v5_com_pll_ivco, 0x0f), + qmp_phy_init_cfg(qserdes_v5_com_vco_tune_initval2, 0x00), + qmp_phy_init_cfg(qserdes_v5_com_bin_vcocal_hsclk_sel, 0x11), + qmp_phy_init_cfg(qserdes_v5_com_dec_start_mode0, 0x82), + qmp_phy_init_cfg(qserdes_v5_com_cp_ctrl_mode0, 0x14), + qmp_phy_init_cfg(qserdes_v5_com_pll_rctrl_mode0, 0x18), + qmp_phy_init_cfg(qserdes_v5_com_pll_cctrl_mode0, 0x18), + qmp_phy_init_cfg(qserdes_v5_com_lock_cmp1_mode0, 0xff), + qmp_phy_init_cfg(qserdes_v5_com_lock_cmp2_mode0, 0x19), + qmp_phy_init_cfg(qserdes_v5_com_bin_vcocal_cmp_code1_mode0, 0xac), + qmp_phy_init_cfg(qserdes_v5_com_bin_vcocal_cmp_code2_mode0, 0x1e), + qmp_phy_init_cfg(qserdes_v5_com_dec_start_mode1, 0x98), + qmp_phy_init_cfg(qserdes_v5_com_cp_ctrl_mode1, 0x14), + qmp_phy_init_cfg(qserdes_v5_com_pll_rctrl_mode1, 0x18), + qmp_phy_init_cfg(qserdes_v5_com_pll_cctrl_mode1, 0x18), + qmp_phy_init_cfg(qserdes_v5_com_lock_cmp1_mode1, 0x65), + qmp_phy_init_cfg(qserdes_v5_com_lock_cmp2_mode1, 0x1e), + qmp_phy_init_cfg(qserdes_v5_com_bin_vcocal_cmp_code1_mode1, 0xdd), + qmp_phy_init_cfg(qserdes_v5_com_bin_vcocal_cmp_code2_mode1, 0x23), + + /* rate b */ + qmp_phy_init_cfg(qserdes_v5_com_vco_tune_map, 0x06), +}; + +static const struct qmp_phy_init_tbl sm8350_ufsphy_tx_tbl[] = { + qmp_phy_init_cfg(qserdes_v5_tx_pwm_gear_1_divider_band0_1, 0x06), + qmp_phy_init_cfg(qserdes_v5_tx_pwm_gear_2_divider_band0_1, 0x03), + 
qmp_phy_init_cfg(qserdes_v5_tx_pwm_gear_3_divider_band0_1, 0x01), + qmp_phy_init_cfg(qserdes_v5_tx_pwm_gear_4_divider_band0_1, 0x00), + qmp_phy_init_cfg(qserdes_v5_tx_lane_mode_1, 0xf5), + qmp_phy_init_cfg(qserdes_v5_tx_lane_mode_3, 0x3f), + qmp_phy_init_cfg(qserdes_v5_tx_res_code_lane_offset_tx, 0x09), + qmp_phy_init_cfg(qserdes_v5_tx_res_code_lane_offset_rx, 0x09), + qmp_phy_init_cfg(qserdes_v5_tx_tran_drvr_emp_en, 0x0c), +}; + +static const struct qmp_phy_init_tbl sm8350_ufsphy_rx_tbl[] = { + qmp_phy_init_cfg(qserdes_v5_rx_sigdet_lvl, 0x24), + qmp_phy_init_cfg(qserdes_v5_rx_sigdet_cntrl, 0x0f), + qmp_phy_init_cfg(qserdes_v5_rx_sigdet_deglitch_cntrl, 0x1e), + qmp_phy_init_cfg(qserdes_v5_rx_rx_band, 0x18), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_fastlock_fo_gain, 0x0a), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_so_saturation_and_enable, 0x5a), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_pi_controls, 0xf1), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_fastlock_count_low, 0x80), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_pi_ctrl2, 0x80), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_fo_gain, 0x0e), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_so_gain, 0x04), + qmp_phy_init_cfg(qserdes_v5_rx_rx_term_bw, 0x1b), + qmp_phy_init_cfg(qserdes_v5_rx_rx_equ_adaptor_cntrl1, 0x04), + qmp_phy_init_cfg(qserdes_v5_rx_rx_equ_adaptor_cntrl2, 0x06), + qmp_phy_init_cfg(qserdes_v5_rx_rx_equ_adaptor_cntrl3, 0x04), + qmp_phy_init_cfg(qserdes_v5_rx_rx_equ_adaptor_cntrl4, 0x1a), + qmp_phy_init_cfg(qserdes_v5_rx_rx_eq_offset_adaptor_cntrl1, 0x17), + qmp_phy_init_cfg(qserdes_v5_rx_rx_offset_adaptor_cntrl2, 0x00), + qmp_phy_init_cfg(qserdes_v5_rx_rx_idac_measure_time, 0x10), + qmp_phy_init_cfg(qserdes_v5_rx_rx_idac_tsettle_low, 0xc0), + qmp_phy_init_cfg(qserdes_v5_rx_rx_idac_tsettle_high, 0x00), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_00_low, 0x6d), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_00_high, 0x6d), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_00_high2, 0xed), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_00_high3, 0x3b), + 
qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_00_high4, 0x3c), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_01_low, 0xe0), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_01_high, 0xc8), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_01_high2, 0xc8), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_01_high3, 0x3b), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_01_high4, 0xb7), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_10_low, 0xe0), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_10_high, 0xc8), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_10_high2, 0xc8), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_10_high3, 0x3b), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_10_high4, 0xb7), + qmp_phy_init_cfg(qserdes_v5_rx_dcc_ctrl1, 0x0c), +}; + +static const struct qmp_phy_init_tbl sm8350_ufsphy_pcs_tbl[] = { + qmp_phy_init_cfg(qphy_v5_pcs_ufs_rx_sigdet_ctrl2, 0x6d), + qmp_phy_init_cfg(qphy_v5_pcs_ufs_tx_large_amp_drv_lvl, 0x0a), + qmp_phy_init_cfg(qphy_v5_pcs_ufs_tx_small_amp_drv_lvl, 0x02), + qmp_phy_init_cfg(qphy_v5_pcs_ufs_tx_mid_term_ctrl1, 0x43), + qmp_phy_init_cfg(qphy_v5_pcs_ufs_debug_bus_clksel, 0x1f), + qmp_phy_init_cfg(qphy_v5_pcs_ufs_rx_min_hibern8_time, 0xff), + qmp_phy_init_cfg(qphy_v5_pcs_ufs_pll_cntl, 0x03), + qmp_phy_init_cfg(qphy_v5_pcs_ufs_timer_20us_coreclk_steps_msb, 0x16), + qmp_phy_init_cfg(qphy_v5_pcs_ufs_timer_20us_coreclk_steps_lsb, 0xd8), + qmp_phy_init_cfg(qphy_v5_pcs_ufs_tx_pwm_gear_band, 0xaa), + qmp_phy_init_cfg(qphy_v5_pcs_ufs_tx_hs_gear_band, 0x06), + qmp_phy_init_cfg(qphy_v5_pcs_ufs_tx_hsgear_capability, 0x03), + qmp_phy_init_cfg(qphy_v5_pcs_ufs_rx_hsgear_capability, 0x03), + qmp_phy_init_cfg(qphy_v5_pcs_ufs_rx_sigdet_ctrl1, 0x0e), + qmp_phy_init_cfg(qphy_v5_pcs_ufs_multi_lane_ctrl1, 0x02), +}; + +static const struct qmp_phy_cfg sm8350_ufsphy_cfg = { + .type = phy_type_ufs, + .nlanes = 2, + + .serdes_tbl = sm8350_ufsphy_serdes_tbl, + .serdes_tbl_num = array_size(sm8350_ufsphy_serdes_tbl), + .tx_tbl = sm8350_ufsphy_tx_tbl, + .tx_tbl_num = array_size(sm8350_ufsphy_tx_tbl), + .rx_tbl = 
sm8350_ufsphy_rx_tbl, + .rx_tbl_num = array_size(sm8350_ufsphy_rx_tbl), + .pcs_tbl = sm8350_ufsphy_pcs_tbl, + .pcs_tbl_num = array_size(sm8350_ufsphy_pcs_tbl), + .clk_list = sdm845_ufs_phy_clk_l, + .num_clks = array_size(sdm845_ufs_phy_clk_l), + .vreg_list = qmp_phy_vreg_l, + .num_vregs = array_size(qmp_phy_vreg_l), + .regs = sm8150_ufsphy_regs_layout, + + .start_ctrl = serdes_start, + .pwrdn_ctrl = sw_pwrdn, + + .is_dual_lane_phy = true, +}; + + }, { + .compatible = "qcom,sm8350-qmp-ufs-phy", + .data = &sm8350_ufsphy_cfg,
|
PHY ("physical layer" framework)
|
0e43fdb94a8363cfd78e8d14580ea2f5b82789a8
|
vinod koul bjorn andersson bjorn andersson linaro org
|
drivers
|
phy
|
qualcomm
|
dt-bindings: phy: qcom,usb-snps-femto-v2: add sm8250 and sm8350 bindings
|
add the compatible strings for the usb2 phys found on qcom sm8250 & sm8350 socs.
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
sm8350 usb phy
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['yaml']
| 1
| 2
| 0
|
--- diff --git a/documentation/devicetree/bindings/phy/qcom,usb-snps-femto-v2.yaml b/documentation/devicetree/bindings/phy/qcom,usb-snps-femto-v2.yaml --- a/documentation/devicetree/bindings/phy/qcom,usb-snps-femto-v2.yaml +++ b/documentation/devicetree/bindings/phy/qcom,usb-snps-femto-v2.yaml - qcom,usb-snps-hs-7nm-phy - qcom,sm8150-usb-hs-phy + - qcom,sm8250-usb-hs-phy + - qcom,sm8350-usb-hs-phy - qcom,usb-snps-femto-v2-phy
|
PHY ("physical layer" framework)
|
fcba632d8148ab3ec19f6c7dd015cca866357d7e
|
jack pham
|
documentation
|
devicetree
|
bindings, phy
|
phy: qcom-qmp: add sm8350 usb qmp phys
|
add support for the usb dp & uni phys found on sm8350. these use version 5.0.0 of the qmp phy ip and thus require new "v5" definitions of the register offset macros for the qserdes rx and tx blocks. the qserdes common and qphy pcs blocks' register offsets are largely unchanged from v4 so some of the existing macros can be reused.
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
sm8350 usb phy
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['h', 'c']
| 2
| 312
| 0
|
--- diff --git a/drivers/phy/qualcomm/phy-qcom-qmp.c b/drivers/phy/qualcomm/phy-qcom-qmp.c --- a/drivers/phy/qualcomm/phy-qcom-qmp.c +++ b/drivers/phy/qualcomm/phy-qcom-qmp.c +static const unsigned int sm8350_usb3_uniphy_regs_layout[qphy_layout_size] = { + [qphy_sw_reset] = 0x00, + [qphy_start_ctrl] = 0x44, + [qphy_pcs_status] = 0x14, + [qphy_pcs_power_down_control] = 0x40, + [qphy_pcs_autonomous_mode_ctrl] = 0x1008, + [qphy_pcs_lfps_rxterm_irq_clear] = 0x1014, +}; + +static const struct qmp_phy_init_tbl sm8350_usb3_tx_tbl[] = { + qmp_phy_init_cfg(qserdes_v5_tx_res_code_lane_tx, 0x00), + qmp_phy_init_cfg(qserdes_v5_tx_res_code_lane_rx, 0x00), + qmp_phy_init_cfg(qserdes_v5_tx_res_code_lane_offset_tx, 0x16), + qmp_phy_init_cfg(qserdes_v5_tx_res_code_lane_offset_rx, 0x0e), + qmp_phy_init_cfg(qserdes_v5_tx_lane_mode_1, 0x35), + qmp_phy_init_cfg(qserdes_v5_tx_lane_mode_3, 0x3f), + qmp_phy_init_cfg(qserdes_v5_tx_lane_mode_4, 0x7f), + qmp_phy_init_cfg(qserdes_v5_tx_lane_mode_5, 0x3f), + qmp_phy_init_cfg(qserdes_v5_tx_rcv_detect_lvl_2, 0x12), + qmp_phy_init_cfg(qserdes_v5_tx_pi_qec_ctrl, 0x21), +}; + +static const struct qmp_phy_init_tbl sm8350_usb3_rx_tbl[] = { + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_fo_gain, 0x0a), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_so_gain, 0x05), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_fastlock_fo_gain, 0x2f), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_so_saturation_and_enable, 0x7f), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_fastlock_count_low, 0xff), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_fastlock_count_high, 0x0f), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_pi_controls, 0x99), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_sb2_thresh1, 0x08), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_sb2_thresh2, 0x08), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_sb2_gain1, 0x00), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_sb2_gain2, 0x04), + qmp_phy_init_cfg(qserdes_v5_rx_vga_cal_cntrl1, 0x54), + qmp_phy_init_cfg(qserdes_v5_rx_vga_cal_cntrl2, 0x0f), + qmp_phy_init_cfg(qserdes_v5_rx_rx_equ_adaptor_cntrl2, 
0x0f), + qmp_phy_init_cfg(qserdes_v5_rx_rx_equ_adaptor_cntrl3, 0x4a), + qmp_phy_init_cfg(qserdes_v5_rx_rx_equ_adaptor_cntrl4, 0x0a), + qmp_phy_init_cfg(qserdes_v5_rx_rx_idac_tsettle_low, 0xc0), + qmp_phy_init_cfg(qserdes_v5_rx_rx_idac_tsettle_high, 0x00), + qmp_phy_init_cfg(qserdes_v5_rx_rx_eq_offset_adaptor_cntrl1, 0x47), + qmp_phy_init_cfg(qserdes_v5_rx_sigdet_cntrl, 0x04), + qmp_phy_init_cfg(qserdes_v5_rx_sigdet_deglitch_cntrl, 0x0e), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_00_low, 0xbb), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_00_high, 0x7b), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_00_high2, 0xbb), + qmp_phy_init_cfg_lane(qserdes_v5_rx_rx_mode_00_high3, 0x3d, 1), + qmp_phy_init_cfg_lane(qserdes_v5_rx_rx_mode_00_high3, 0x3c, 2), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_00_high4, 0xdb), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_01_low, 0x64), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_01_high, 0x24), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_01_high2, 0xd2), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_01_high3, 0x13), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_01_high4, 0xa9), + qmp_phy_init_cfg(qserdes_v5_rx_dfe_en_timer, 0x04), + qmp_phy_init_cfg(qserdes_v5_rx_dfe_ctle_post_cal_offset, 0x38), + qmp_phy_init_cfg(qserdes_v5_rx_aux_data_tcoarse_tfine, 0xa0), + qmp_phy_init_cfg(qserdes_v5_rx_dcc_ctrl1, 0x0c), + qmp_phy_init_cfg(qserdes_v5_rx_gm_cal, 0x00), + qmp_phy_init_cfg(qserdes_v5_rx_vth_code, 0x10), +}; + +static const struct qmp_phy_init_tbl sm8350_usb3_pcs_tbl[] = { + qmp_phy_init_cfg(qphy_v5_pcs_usb3_rcvr_dtct_dly_u3_l, 0x40), + qmp_phy_init_cfg(qphy_v5_pcs_usb3_rcvr_dtct_dly_u3_h, 0x00), + qmp_phy_init_cfg(qphy_v4_pcs_rcvr_dtct_dly_p1u2_l, 0xe7), + qmp_phy_init_cfg(qphy_v4_pcs_rcvr_dtct_dly_p1u2_h, 0x03), + qmp_phy_init_cfg(qphy_v4_pcs_lock_detect_config1, 0xd0), + qmp_phy_init_cfg(qphy_v4_pcs_lock_detect_config2, 0x07), + qmp_phy_init_cfg(qphy_v4_pcs_lock_detect_config3, 0x20), + qmp_phy_init_cfg(qphy_v4_pcs_lock_detect_config6, 0x13), + 
qmp_phy_init_cfg(qphy_v4_pcs_refgen_req_config1, 0x21), + qmp_phy_init_cfg(qphy_v4_pcs_rx_sigdet_lvl, 0xaa), + qmp_phy_init_cfg(qphy_v4_pcs_cdr_reset_time, 0x0a), + qmp_phy_init_cfg(qphy_v4_pcs_align_detect_config1, 0x88), + qmp_phy_init_cfg(qphy_v4_pcs_align_detect_config2, 0x13), + qmp_phy_init_cfg(qphy_v4_pcs_pcs_tx_rx_config, 0x0c), + qmp_phy_init_cfg(qphy_v4_pcs_eq_config1, 0x4b), + qmp_phy_init_cfg(qphy_v4_pcs_eq_config5, 0x10), + qmp_phy_init_cfg(qphy_v5_pcs_usb3_lfps_det_high_count_val, 0xf8), + qmp_phy_init_cfg(qphy_v5_pcs_usb3_rxeqtraining_dfe_time_s2, 0x07), +}; + +static const struct qmp_phy_init_tbl sm8350_usb3_uniphy_tx_tbl[] = { + qmp_phy_init_cfg(qserdes_v5_tx_lane_mode_1, 0xa5), + qmp_phy_init_cfg(qserdes_v5_tx_lane_mode_2, 0x82), + qmp_phy_init_cfg(qserdes_v5_tx_lane_mode_3, 0x3f), + qmp_phy_init_cfg(qserdes_v5_tx_lane_mode_4, 0x3f), + qmp_phy_init_cfg(qserdes_v5_tx_pi_qec_ctrl, 0x21), + qmp_phy_init_cfg(qserdes_v5_tx_res_code_lane_offset_tx, 0x10), + qmp_phy_init_cfg(qserdes_v5_tx_res_code_lane_offset_rx, 0x0e), +}; + +static const struct qmp_phy_init_tbl sm8350_usb3_uniphy_rx_tbl[] = { + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_00_high4, 0xdc), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_00_high3, 0xbd), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_00_high2, 0xff), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_00_high, 0x7f), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_00_low, 0xff), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_01_high4, 0xa9), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_01_high3, 0x7b), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_01_high2, 0xe4), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_01_high, 0x24), + qmp_phy_init_cfg(qserdes_v5_rx_rx_mode_01_low, 0x64), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_pi_controls, 0x99), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_sb2_thresh1, 0x08), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_sb2_thresh2, 0x08), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_sb2_gain1, 0x00), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_sb2_gain2, 0x04), + 
qmp_phy_init_cfg(qserdes_v5_rx_ucdr_fastlock_fo_gain, 0x2f), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_fastlock_count_low, 0xff), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_fastlock_count_high, 0x0f), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_fo_gain, 0x0a), + qmp_phy_init_cfg(qserdes_v5_rx_vga_cal_cntrl1, 0x54), + qmp_phy_init_cfg(qserdes_v5_rx_vga_cal_cntrl2, 0x0f), + qmp_phy_init_cfg(qserdes_v5_rx_rx_equ_adaptor_cntrl2, 0x0f), + qmp_phy_init_cfg(qserdes_v5_rx_rx_equ_adaptor_cntrl4, 0x0a), + qmp_phy_init_cfg(qserdes_v5_rx_rx_eq_offset_adaptor_cntrl1, 0x47), + qmp_phy_init_cfg(qserdes_v5_rx_rx_offset_adaptor_cntrl2, 0x80), + qmp_phy_init_cfg(qserdes_v5_rx_sigdet_cntrl, 0x04), + qmp_phy_init_cfg(qserdes_v5_rx_sigdet_deglitch_cntrl, 0x0e), + qmp_phy_init_cfg(qserdes_v5_rx_dfe_ctle_post_cal_offset, 0x38), + qmp_phy_init_cfg(qserdes_v5_rx_ucdr_so_gain, 0x05), + qmp_phy_init_cfg(qserdes_v5_rx_gm_cal, 0x00), + qmp_phy_init_cfg(qserdes_v5_rx_sigdet_enables, 0x00), +}; + +static const struct qmp_phy_init_tbl sm8350_usb3_uniphy_pcs_tbl[] = { + qmp_phy_init_cfg(qphy_v4_pcs_lock_detect_config1, 0xd0), + qmp_phy_init_cfg(qphy_v4_pcs_lock_detect_config2, 0x07), + qmp_phy_init_cfg(qphy_v4_pcs_lock_detect_config3, 0x20), + qmp_phy_init_cfg(qphy_v4_pcs_lock_detect_config6, 0x13), + qmp_phy_init_cfg(qphy_v4_pcs_rcvr_dtct_dly_p1u2_l, 0xe7), + qmp_phy_init_cfg(qphy_v4_pcs_rcvr_dtct_dly_p1u2_h, 0x03), + qmp_phy_init_cfg(qphy_v4_pcs_rx_sigdet_lvl, 0xaa), + qmp_phy_init_cfg(qphy_v4_pcs_pcs_tx_rx_config, 0x0c), + qmp_phy_init_cfg(qphy_v5_pcs_usb3_uni_rxeqtraining_dfe_time_s2, 0x07), + qmp_phy_init_cfg(qphy_v5_pcs_usb3_uni_lfps_det_high_count_val, 0xf8), + qmp_phy_init_cfg(qphy_v4_pcs_cdr_reset_time, 0x0a), + qmp_phy_init_cfg(qphy_v4_pcs_align_detect_config1, 0x88), + qmp_phy_init_cfg(qphy_v4_pcs_align_detect_config2, 0x13), + qmp_phy_init_cfg(qphy_v4_pcs_eq_config1, 0x4b), + qmp_phy_init_cfg(qphy_v4_pcs_eq_config5, 0x10), + qmp_phy_init_cfg(qphy_v4_pcs_refgen_req_config1, 0x21), +}; + +static const 
struct qmp_phy_cfg sm8350_usb3phy_cfg = { + .type = phy_type_usb3, + .nlanes = 1, + + .serdes_tbl = sm8150_usb3_serdes_tbl, + .serdes_tbl_num = array_size(sm8150_usb3_serdes_tbl), + .tx_tbl = sm8350_usb3_tx_tbl, + .tx_tbl_num = array_size(sm8350_usb3_tx_tbl), + .rx_tbl = sm8350_usb3_rx_tbl, + .rx_tbl_num = array_size(sm8350_usb3_rx_tbl), + .pcs_tbl = sm8350_usb3_pcs_tbl, + .pcs_tbl_num = array_size(sm8350_usb3_pcs_tbl), + .clk_list = qmp_v4_sm8250_usbphy_clk_l, + .num_clks = array_size(qmp_v4_sm8250_usbphy_clk_l), + .reset_list = msm8996_usb3phy_reset_l, + .num_resets = array_size(msm8996_usb3phy_reset_l), + .vreg_list = qmp_phy_vreg_l, + .num_vregs = array_size(qmp_phy_vreg_l), + .regs = qmp_v4_usb3phy_regs_layout, + + .start_ctrl = serdes_start | pcs_start, + .pwrdn_ctrl = sw_pwrdn, + + .has_pwrdn_delay = true, + .pwrdn_delay_min = power_down_delay_us_min, + .pwrdn_delay_max = power_down_delay_us_max, + + .has_phy_dp_com_ctrl = true, + .is_dual_lane_phy = true, +}; + +static const struct qmp_phy_cfg sm8350_usb3_uniphy_cfg = { + .type = phy_type_usb3, + .nlanes = 1, + + .serdes_tbl = sm8150_usb3_uniphy_serdes_tbl, + .serdes_tbl_num = array_size(sm8150_usb3_uniphy_serdes_tbl), + .tx_tbl = sm8350_usb3_uniphy_tx_tbl, + .tx_tbl_num = array_size(sm8350_usb3_uniphy_tx_tbl), + .rx_tbl = sm8350_usb3_uniphy_rx_tbl, + .rx_tbl_num = array_size(sm8350_usb3_uniphy_rx_tbl), + .pcs_tbl = sm8350_usb3_uniphy_pcs_tbl, + .pcs_tbl_num = array_size(sm8350_usb3_uniphy_pcs_tbl), + .clk_list = qmp_v4_phy_clk_l, + .num_clks = array_size(qmp_v4_phy_clk_l), + .reset_list = msm8996_usb3phy_reset_l, + .num_resets = array_size(msm8996_usb3phy_reset_l), + .vreg_list = qmp_phy_vreg_l, + .num_vregs = array_size(qmp_phy_vreg_l), + .regs = sm8350_usb3_uniphy_regs_layout, + + .start_ctrl = serdes_start | pcs_start, + .pwrdn_ctrl = sw_pwrdn, + + .has_pwrdn_delay = true, + .pwrdn_delay_min = power_down_delay_us_min, + .pwrdn_delay_max = power_down_delay_us_max, +}; + + }, { + .compatible = 
"qcom,sm8350-qmp-usb3-phy", + .data = &sm8350_usb3phy_cfg, + }, { + .compatible = "qcom,sm8350-qmp-usb3-uni-phy", + .data = &sm8350_usb3_uniphy_cfg, diff --git a/drivers/phy/qualcomm/phy-qcom-qmp.h b/drivers/phy/qualcomm/phy-qcom-qmp.h --- a/drivers/phy/qualcomm/phy-qcom-qmp.h +++ b/drivers/phy/qualcomm/phy-qcom-qmp.h +/* only for qmp v5 phy - tx registers */ +#define qserdes_v5_tx_res_code_lane_tx 0x34 +#define qserdes_v5_tx_res_code_lane_rx 0x38 +#define qserdes_v5_tx_res_code_lane_offset_tx 0x3c +#define qserdes_v5_tx_res_code_lane_offset_rx 0x40 +#define qserdes_v5_tx_lane_mode_1 0x84 +#define qserdes_v5_tx_lane_mode_2 0x88 +#define qserdes_v5_tx_lane_mode_3 0x8c +#define qserdes_v5_tx_lane_mode_4 0x90 +#define qserdes_v5_tx_lane_mode_5 0x94 +#define qserdes_v5_tx_rcv_detect_lvl_2 0xa4 +#define qserdes_v5_tx_tran_drvr_emp_en 0xc0 +#define qserdes_v5_tx_pi_qec_ctrl 0xe4 + +/* only for qmp v5 phy - rx registers */ +#define qserdes_v5_rx_ucdr_fo_gain 0x008 +#define qserdes_v5_rx_ucdr_so_gain 0x014 +#define qserdes_v5_rx_ucdr_fastlock_fo_gain 0x030 +#define qserdes_v5_rx_ucdr_so_saturation_and_enable 0x034 +#define qserdes_v5_rx_ucdr_fastlock_count_low 0x03c +#define qserdes_v5_rx_ucdr_fastlock_count_high 0x040 +#define qserdes_v5_rx_ucdr_pi_controls 0x044 +#define qserdes_v5_rx_ucdr_pi_ctrl2 0x048 +#define qserdes_v5_rx_ucdr_sb2_thresh1 0x04c +#define qserdes_v5_rx_ucdr_sb2_thresh2 0x050 +#define qserdes_v5_rx_ucdr_sb2_gain1 0x054 +#define qserdes_v5_rx_ucdr_sb2_gain2 0x058 +#define qserdes_v5_rx_aux_data_tcoarse_tfine 0x060 +#define qserdes_v5_rx_rclk_auxdata_sel 0x064 +#define qserdes_v5_rx_ac_jtag_enable 0x068 +#define qserdes_v5_rx_ac_jtag_mode 0x078 +#define qserdes_v5_rx_rx_term_bw 0x080 +#define qserdes_v5_rx_vga_cal_cntrl1 0x0d4 +#define qserdes_v5_rx_vga_cal_cntrl2 0x0d8 +#define qserdes_v5_rx_gm_cal 0x0dc +#define qserdes_v5_rx_rx_equ_adaptor_cntrl1 0x0e8 +#define qserdes_v5_rx_rx_equ_adaptor_cntrl2 0x0ec +#define qserdes_v5_rx_rx_equ_adaptor_cntrl3 
0x0f0 +#define qserdes_v5_rx_rx_equ_adaptor_cntrl4 0x0f4 +#define qserdes_v5_rx_rx_idac_tsettle_low 0x0f8 +#define qserdes_v5_rx_rx_idac_tsettle_high 0x0fc +#define qserdes_v5_rx_rx_idac_measure_time 0x100 +#define qserdes_v5_rx_rx_eq_offset_adaptor_cntrl1 0x110 +#define qserdes_v5_rx_rx_offset_adaptor_cntrl2 0x114 +#define qserdes_v5_rx_sigdet_enables 0x118 +#define qserdes_v5_rx_sigdet_cntrl 0x11c +#define qserdes_v5_rx_sigdet_lvl 0x120 +#define qserdes_v5_rx_sigdet_deglitch_cntrl 0x124 +#define qserdes_v5_rx_rx_band 0x128 +#define qserdes_v5_rx_rx_mode_00_low 0x15c +#define qserdes_v5_rx_rx_mode_00_high 0x160 +#define qserdes_v5_rx_rx_mode_00_high2 0x164 +#define qserdes_v5_rx_rx_mode_00_high3 0x168 +#define qserdes_v5_rx_rx_mode_00_high4 0x16c +#define qserdes_v5_rx_rx_mode_01_low 0x170 +#define qserdes_v5_rx_rx_mode_01_high 0x174 +#define qserdes_v5_rx_rx_mode_01_high2 0x178 +#define qserdes_v5_rx_rx_mode_01_high3 0x17c +#define qserdes_v5_rx_rx_mode_01_high4 0x180 +#define qserdes_v5_rx_rx_mode_10_low 0x184 +#define qserdes_v5_rx_rx_mode_10_high 0x188 +#define qserdes_v5_rx_rx_mode_10_high2 0x18c +#define qserdes_v5_rx_rx_mode_10_high3 0x190 +#define qserdes_v5_rx_rx_mode_10_high4 0x194 +#define qserdes_v5_rx_dfe_en_timer 0x1a0 +#define qserdes_v5_rx_dfe_ctle_post_cal_offset 0x1a4 +#define qserdes_v5_rx_dcc_ctrl1 0x1a8 +#define qserdes_v5_rx_vth_code 0x1b0 + +/* only for qmp v5 phy - usb3 have different offsets than v4 */ +#define qphy_v5_pcs_usb3_power_state_config1 0x300 +#define qphy_v5_pcs_usb3_autonomous_mode_status 0x304 +#define qphy_v5_pcs_usb3_autonomous_mode_ctrl 0x308 +#define qphy_v5_pcs_usb3_autonomous_mode_ctrl2 0x30c +#define qphy_v5_pcs_usb3_lfps_rxterm_irq_source_status 0x310 +#define qphy_v5_pcs_usb3_lfps_rxterm_irq_clear 0x314 +#define qphy_v5_pcs_usb3_lfps_det_high_count_val 0x318 +#define qphy_v5_pcs_usb3_lfps_tx_ecstart 0x31c +#define qphy_v5_pcs_usb3_lfps_per_timer_val 0x320 +#define qphy_v5_pcs_usb3_lfps_tx_end_cnt_u3_start 0x324 
+#define qphy_v5_pcs_usb3_lfps_config1 0x328 +#define qphy_v5_pcs_usb3_rxeqtraining_lock_time 0x32c +#define qphy_v5_pcs_usb3_rxeqtraining_wait_time 0x330 +#define qphy_v5_pcs_usb3_rxeqtraining_ctle_time 0x334 +#define qphy_v5_pcs_usb3_rxeqtraining_wait_time_s2 0x338 +#define qphy_v5_pcs_usb3_rxeqtraining_dfe_time_s2 0x33c +#define qphy_v5_pcs_usb3_rcvr_dtct_dly_u3_l 0x340 +#define qphy_v5_pcs_usb3_rcvr_dtct_dly_u3_h 0x344 +#define qphy_v5_pcs_usb3_arcvr_dtct_en_period 0x348 +#define qphy_v5_pcs_usb3_arcvr_dtct_cm_dly 0x34c +#define qphy_v5_pcs_usb3_txoneszeros_run_length 0x350 +#define qphy_v5_pcs_usb3_alfps_deglitch_val 0x354 +#define qphy_v5_pcs_usb3_sigdet_startup_timer_val 0x358 +#define qphy_v5_pcs_usb3_test_control 0x35c +#define qphy_v5_pcs_usb3_rxtermination_dly_sel 0x360 + +/* only for qmp v5 phy - uni has 0x1000 offset for pcs_usb3 regs */ +#define qphy_v5_pcs_usb3_uni_lfps_det_high_count_val 0x1018 +#define qphy_v5_pcs_usb3_uni_rxeqtraining_dfe_time_s2 0x103c +
|
PHY ("physical layer" framework)
|
10c744d48d7f01b0d0c954d8d71aebd07705a7b9
|
jack pham bjorn andersson bjorn andersson linaro org
|
drivers
|
phy
|
qualcomm
|
dt-bindings: usb: qcom,dwc3: add bindings for sm8150, sm8250, sm8350
|
add compatible strings for the usb dwc3 controller on qcom sm8150, sm8250 and sm8350 socs.
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
sm8350 usb phy
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
[]
|
['yaml']
| 1
| 3
| 0
|
--- diff --git a/documentation/devicetree/bindings/usb/qcom,dwc3.yaml b/documentation/devicetree/bindings/usb/qcom,dwc3.yaml --- a/documentation/devicetree/bindings/usb/qcom,dwc3.yaml +++ b/documentation/devicetree/bindings/usb/qcom,dwc3.yaml - qcom,sc7180-dwc3 - qcom,sdm845-dwc3 - qcom,sdx55-dwc3 + - qcom,sm8150-dwc3 + - qcom,sm8250-dwc3 + - qcom,sm8350-dwc3 - const: qcom,dwc3
|
PHY ("physical layer" framework)
|
7a79f1f7f7e75e532c5a803ab3ebf42a3e79497c
|
jack pham
|
documentation
|
devicetree
|
bindings, usb
|
phy: phy-brcm-usb: support phy on the bcm4908
|
bcm4908 seems to have slightly different registers but works when programmed just like the stb one.
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
support phy on the bcm4908
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['phy-brcm-usb']
|
['kconfig', 'c']
| 2
| 6
| 1
|
--- diff --git a/drivers/phy/broadcom/kconfig b/drivers/phy/broadcom/kconfig --- a/drivers/phy/broadcom/kconfig +++ b/drivers/phy/broadcom/kconfig - depends on arch_brcmstb || compile_test + depends on arch_bcm4908 || arch_brcmstb || compile_test + default arch_bcm4908 diff --git a/drivers/phy/broadcom/phy-brcm-usb.c b/drivers/phy/broadcom/phy-brcm-usb.c --- a/drivers/phy/broadcom/phy-brcm-usb.c +++ b/drivers/phy/broadcom/phy-brcm-usb.c + { + .compatible = "brcm,bcm4908-usb-phy", + .data = &chip_info_7445, + },
|
PHY ("physical layer" framework)
|
4b402fa8e0b7817f3e3738d7828038f114e6899e
|
rafal milecki florian fainelli f fainelli gmail com
|
drivers
|
phy
|
broadcom
|
phy: qcom-qmp: add sc8180x ufs phy
|
the ufs phy found in the qualcomm sc8180x is either the same or very similar to the phy present in sm8150, so add a compatible and reuse the sm8150 configuration.
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add sc8180x ufs phy
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['qcom-qmp']
|
['c']
| 1
| 3
| 0
|
--- diff --git a/drivers/phy/qualcomm/phy-qcom-qmp.c b/drivers/phy/qualcomm/phy-qcom-qmp.c --- a/drivers/phy/qualcomm/phy-qcom-qmp.c +++ b/drivers/phy/qualcomm/phy-qcom-qmp.c + }, { + .compatible = "qcom,sc8180x-qmp-ufs-phy", + .data = &sm8150_ufsphy_cfg,
|
PHY ("physical layer" framework)
|
a5a621ad0ab44aa672385e0dddc730c3e50a908f
|
bjorn andersson
|
drivers
|
phy
|
qualcomm
|
phy: qcom-qmp: add sc8180x usb phy
|
the qualcomm sc8180x has two qmp phys used for superspeed usb, which are either the same or very similar to the same found in sm8150. add a compatible for this, reusing the existing sm8150 usb phy config.
|
this release allows to map an uid to a different one in a mount; it also adds support for selecting the preemption model at runtime; support for a low-overhead memory error detector designed to be used in production; support for the acrn hypervisor designed for embedded systems; btrfs initial support for zoned devices, subpage blocks sizes and performance improvements; support for eager nfs writes; support for a thermal power management to control the surface temperature of embedded devices in an unified way; the napi polling can be moved to a kernel thread; and support for non-blocking path lookups. as always, there are many other features, new drivers, improvements and fixes.
|
add sc8180x usb phy
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'cryptography', 'security', 'networking', 'architectures x86 arm risc-v powerpc mips csky s390 pa-risc c6x']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'device voltage and frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'non-transparent bridge (ntb)', 'thunderbolt', 'clock', 'phy ("physical layer" framework)', 'cxl (compute express link)', 'various']
|
['qcom-qmp']
|
['c']
| 1
| 3
| 0
|
--- diff --git a/drivers/phy/qualcomm/phy-qcom-qmp.c b/drivers/phy/qualcomm/phy-qcom-qmp.c --- a/drivers/phy/qualcomm/phy-qcom-qmp.c +++ b/drivers/phy/qualcomm/phy-qcom-qmp.c + }, { + .compatible = "qcom,sc8180x-qmp-usb3-phy", + .data = &sm8150_usb3phy_cfg,
|
PHY ("physical layer" framework)
|
4d1a6404e91efffe3192e1405e175fc7fb3fa3de
|
bjorn andersson
|
drivers
|
phy
|
qualcomm
|