dataset columns (18):

- commit_title — string, length 13–124
- commit_body — string, length 0–1.9k
- release_summary — string, 52 distinct values
- changes_summary — string, length 1–758
- release_affected_domains — string, 33 distinct values
- release_affected_drivers — string, 51 distinct values
- domain_of_changes — string, length 2–571
- language_set — string, 983 distinct values
- diffstat_files — int64, 1–300
- diffstat_insertions — int64, 0–309k
- diffstat_deletions — int64, 0–168k
- commit_diff — string, length 92–23.4M
- category — string, 108 distinct values
- commit_hash — string, length 34–40
- related_people — string, length 0–370
- domain — string, 21 distinct values
- subdomain — string, 241 distinct values
- leaf_module — string, length 0–912

| commit_title | commit_body | release_summary | changes_summary | release_affected_domains | release_affected_drivers | domain_of_changes | language_set | diffstat_files | diffstat_insertions | diffstat_deletions | commit_diff | category | commit_hash | related_people | domain | subdomain | leaf_module |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
acpica: iasl: decode subtable type field for viot
|
for the table disassembler, decode the subtable type field to a descriptive string.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
acpi 6.4 support
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
[]
|
['h']
| 1
| 1
| 0
|
diff --git a/include/acpi/actbl3.h b/include/acpi/actbl3.h
--- a/include/acpi/actbl3.h
+++ b/include/acpi/actbl3.h
+	acpi_viot_reserved = 0x05
|
Power Management
|
f73b8619aa39580f5f1bcb0b3816a98a17c5e8c2
|
bob moore
|
include
|
acpi
| |
acpica: acpisrc: add missing conversion for viot support
|
acpica commit 856a96fdf4b51b2b8da17529df0255e6f51f1b5b
|
|
acpi 6.4 support
|
|
|
[]
|
['h']
| 1
| 4
| 4
|
diff --git a/include/acpi/actbl3.h b/include/acpi/actbl3.h
--- a/include/acpi/actbl3.h
+++ b/include/acpi/actbl3.h
-	acpi_viot_header header;
+	struct acpi_viot_header header;
-	acpi_viot_header header;
+	struct acpi_viot_header header;
-	acpi_viot_header header;
+	struct acpi_viot_header header;
-	acpi_viot_header header;
+	struct acpi_viot_header header;
|
Power Management
|
e563f6fc9ef4674c083b22d62ca4d93f0cfb1cce
|
jean philippe brucker
|
include
|
acpi
| |
acpica: iort: updates for revision e.b
|
acpica commit 8710a708faed728ea2672b8da842b2e9af1cf5bd
|
|
acpi 6.4 support
|
|
|
[]
|
['h']
| 1
| 20
| 6
|
- added an identifier field in the node descriptors to aid table
- introduced the reserved memory range (rmr) node. this is used
- introduced a flag in the rc node to express support for pri.
- added a flag in the rc node to declare support for pasid forward

diff --git a/include/acpi/actbl2.h b/include/acpi/actbl2.h
--- a/include/acpi/actbl2.h
+++ b/include/acpi/actbl2.h
- * document number: arm den 0049d, march 2018
+ * document number: arm den 0049e.b, feb 2021
-	u32 reserved;
+	u32 identifier;
-	acpi_iort_node_pmcg = 0x05
+	acpi_iort_node_pmcg = 0x05,
+	acpi_iort_node_rmr = 0x06,
-/* values for ats_attribute field above */
+/* masks for ats_attribute field above */
-#define acpi_iort_ats_supported 0x00000001 /* the root complex supports ats */
-#define acpi_iort_ats_unsupported 0x00000000 /* the root complex doesn't support ats */
+#define acpi_iort_ats_supported (1)	/* the root complex ats support */
+#define acpi_iort_pri_supported (1<<1)	/* the root complex pri support */
+#define acpi_iort_pasid_fwd_supported (1<<2)	/* the root complex pasid forward support */
+struct acpi_iort_rmr {
+	u32 flags;
+	u32 rmr_count;
+	u32 rmr_offset;
+};
+
+struct acpi_iort_rmr_desc {
+	u64 base_address;
+	u64 length;
+	u32 reserved;
+};
+
|
Power Management
|
8e1fdd7f1655c538fb017d0493c80d02cbc8d8d4
|
shameer kolothum
|
include
|
acpi
| |
acpica: update version to 20210331
|
acpica commit eb423b7d5440472d0d2115cb81b52b1b7c56d95a
|
|
acpi 6.4 support
|
|
|
[]
|
['h']
| 1
| 1
| 1
|
diff --git a/include/acpi/acpixf.h b/include/acpi/acpixf.h
--- a/include/acpi/acpixf.h
+++ b/include/acpi/acpixf.h
-#define acpi_ca_version 0x20210105
+#define acpi_ca_version 0x20210331
|
Power Management
|
c3fbd67b94b0420f33210a8a02fc4c23ec2ea13b
|
bob moore
|
include
|
acpi
| |
acpi: pm: add acpi id of alder lake fan
|
add a new unique fan acpi device id for alder lake to support it in the acpi_dev_pm_attach() function.
|
|
add acpi id of alder lake fan
|
|
|
['acpi', 'pm']
|
['c']
| 1
| 1
| 0
|
diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
--- a/drivers/acpi/device_pm.c
+++ b/drivers/acpi/device_pm.c
+	{"intc1048", },	/* fan for alder lake generation */
|
Power Management
|
2404b8747019184002823dba7d2f0ecf89d802b7
|
sumeet pawnikar, zhang rui <rui.zhang@intel.com>
|
drivers
|
acpi
| |
tools/power turbostat: add tcc offset support
|
the length of the tcc offset field varies across platforms. decode the tcc offset bits only for the platforms that we have verified; for the others, show only the default tcc activation temperature.
|
|
add tcc offset support
|
|
|
['tools/power turbostat ']
|
['c']
| 1
| 55
| 3
|
diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
--- a/tools/power/x86/turbostat/turbostat.c
+++ b/tools/power/x86/turbostat/turbostat.c
+int tcc_offset_bits;
+/*
+ * tcc_offset_bits:
+ * 0: tcc offset not supported (default)
+ * 6: bit 29:24 of msr_platform_info
+ * 4: bit 27:24 of msr_platform_info
+ */
+void check_tcc_offset(int model)
+{
+	unsigned long long msr;
+
+	if (!genuine_intel)
+		return;
+
+	switch (model) {
+	case intel_fam6_skylake_l:
+	case intel_fam6_skylake:
+	case intel_fam6_kabylake_l:
+	case intel_fam6_kabylake:
+	case intel_fam6_icelake_l:
+	case intel_fam6_icelake:
+	case intel_fam6_tigerlake_l:
+	case intel_fam6_tigerlake:
+	case intel_fam6_cometlake:
+		if (!get_msr(base_cpu, msr_platform_info, &msr)) {
+			msr = (msr >> 30) & 1;
+			if (msr)
+				tcc_offset_bits = 6;
+		}
+		return;
+	default:
+		return;
+	}
+}
+
-	unsigned int target_c_local;
+	unsigned int target_c_local, tcc_offset;
-	if (!quiet)
-		fprintf(outf, "cpu%d: msr_ia32_temperature_target: 0x%08llx (%d c) ",
+	if (!quiet) {
+		switch (tcc_offset_bits) {
+		case 4:
+			tcc_offset = (msr >> 24) & 0xf;
+			fprintf(outf, "cpu%d: msr_ia32_temperature_target: 0x%08llx (%d c) (%d default - %d offset) ",
+				cpu, msr, target_c_local - tcc_offset, target_c_local, tcc_offset);
+			break;
+		case 6:
+			tcc_offset = (msr >> 24) & 0x3f;
+			fprintf(outf, "cpu%d: msr_ia32_temperature_target: 0x%08llx (%d c) (%d default - %d offset) ",
+				cpu, msr, target_c_local - tcc_offset, target_c_local, tcc_offset);
+			break;
+		default:
+			fprintf(outf, "cpu%d: msr_ia32_temperature_target: 0x%08llx (%d c) ",
+			break;
+		}
+	}
+
+	check_tcc_offset(model_orig);
+
|
Power Management
|
0b9a0b9be991656f125b58a240065cdf72077244
|
zhang rui
|
tools
|
power
|
turbostat, x86
|
tools/power turbostat: support "turbostat --hide idle"
|
as idle, in particular, can have many columns on some machines... make it easy to ignore them all at once.
|
|
support "turbostat --hide idle"
|
|
|
['tools/power turbostat ']
|
['8', 'c']
| 2
| 22
| 2
|
diff --git a/tools/power/x86/turbostat/turbostat.8 b/tools/power/x86/turbostat/turbostat.8
--- a/tools/power/x86/turbostat/turbostat.8
+++ b/tools/power/x86/turbostat/turbostat.8
-\fB--hide column\fP do not show the specified built-in columns. may be invoked multiple times, or with a comma-separated list of column names. use "--hide sysfs" to hide the sysfs statistics columns as a group.
+\fB--hide column\fP do not show the specified built-in columns. may be invoked multiple times, or with a comma-separated list of column names.
-\fB--show column\fP show only the specified built-in columns. may be invoked multiple times, or with a comma-separated list of column names. use "--show sysfs" to show the sysfs statistics columns as a group.
+\fB--show column\fP show only the specified built-in columns. may be invoked multiple times, or with a comma-separated list of column names.
+.pp
+\fB--show category --hide category\fP show and hide also accept a single category of columns: "all", "topology", "idle", "frequency", "power", "sysfs", "other".
diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
--- a/tools/power/x86/turbostat/turbostat.c
+++ b/tools/power/x86/turbostat/turbostat.c
+#define bic_topology (bic_package | bic_node | bic_corecnt | bic_pkgcnt | bic_core | bic_cpu | bic_die)
+#define bic_thermal_pwr (bic_coretmp | bic_pkgtmp | bic_pkgwatt | bic_corwatt | bic_gfxwatt | bic_ramwatt | bic_pkg__ | bic_ram__)
+#define bic_frequency (bic_avg_mhz | bic_busy | bic_bzy_mhz | bic_tsc_mhz | bic_gfxmhz | bic_gfxactmhz)
+#define bic_idle (bic_sysfs | bic_cpu_c1 | bic_cpu_c3 | bic_cpu_c6 | bic_cpu_c7 | bic_gfx_rc6 | bic_pkgpc2 | bic_pkgpc3 | bic_pkgpc6 | bic_pkgpc7 | bic_pkgpc8 | bic_pkgpc9 | bic_pkgpc10 | bic_cpu_lpi | bic_sys_lpi | bic_mod_c6 | bic_totl_c0 | bic_any_c0 | bic_gfx_c0 | bic_cpugfx)
+#define bic_other (bic_irq | bic_smi | bic_threadc | bic_coretmp | bic_ipc)
+
+	if (!strcmp(name_list, "topology"))
+		return bic_topology;
+	if (!strcmp(name_list, "power"))
+		return bic_thermal_pwr;
+	if (!strcmp(name_list, "idle"))
+		return bic_idle;
+	if (!strcmp(name_list, "frequency"))
+		return bic_frequency;
+	if (!strcmp(name_list, "other"))
+		return bic_other;
+	if (!strcmp(name_list, "all"))
+		return 0;
|
Power Management
|
b60c573dc241ab3a8719e990d86a0011b79eebcb
|
len brown
|
tools
|
power
|
turbostat, x86
|
tools/power turbostat: support ice lake d
|
ice lake d is a low-end server version of ice lake x; reuse the code accordingly.
|
|
support ice lake d
|
|
|
['tools/power turbostat ']
|
['c']
| 1
| 1
| 0
|
diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
--- a/tools/power/x86/turbostat/turbostat.c
+++ b/tools/power/x86/turbostat/turbostat.c
+	case intel_fam6_icelake_d:
|
Power Management
|
6c5c656006cf314196faea7bd76eebbfa0941cd1
|
chen yu, wendy wang <wendy.wang@intel.com>
|
tools
|
power
|
turbostat, x86
|
tools/power turbostat: add built-in-counter for ipc -- instructions per cycle
|
use linux-perf to access the hardware instructions-retired counter. this is necessary because the counter is not enabled by default, and is also prone to roll-over -- both of which perf manages.
|
|
add built-in-counter for ipc -- instructions per cycle
|
|
|
['tools/power turbostat ']
|
['c']
| 1
| 84
| 0
|
diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
--- a/tools/power/x86/turbostat/turbostat.c
+++ b/tools/power/x86/turbostat/turbostat.c
+#include <linux/perf_event.h>
+#include <asm/unistd.h>
+int *fd_instr_count_percpu;
+unsigned int do_ipc;
+	unsigned long long instr_count;
+static long perf_event_open(struct perf_event_attr *hw_event, pid_t pid, int cpu, int group_fd, unsigned long flags)
+{
+	return syscall(__nr_perf_event_open, hw_event, pid, cpu, group_fd, flags);
+}
+
+static int perf_instr_count_open(int cpu_num)
+{
+	struct perf_event_attr pea;
+	int fd;
+
+	memset(&pea, 0, sizeof(struct perf_event_attr));
+	pea.type = perf_type_hardware;
+	pea.size = sizeof(struct perf_event_attr);
+	pea.config = perf_count_hw_instructions;
+
+	/* counter for cpu_num, including user + kernel and all processes */
+	fd = perf_event_open(&pea, -1, cpu_num, -1, 0);
+	if (fd == -1)
+		err(-1, "cpu%d: perf instruction counter ", cpu_num);
+
+	return fd;
+}
+
+int get_instr_count_fd(int cpu)
+{
+	if (fd_instr_count_percpu[cpu])
+		return fd_instr_count_percpu[cpu];
+
+	fd_instr_count_percpu[cpu] = perf_instr_count_open(cpu);
+
+	return fd_instr_count_percpu[cpu];
+}
+
+	{ 0x0, "ipc" },
+#define bic_ipc (1ull << 52)
+#define bic_is_enabled(counter_bit) (bic_enabled & counter_bit)
+	if (do_bic(bic_ipc))
+		outp += sprintf(outp, "%sipc", (printed++ ? delim : ""));
+
+	if (do_bic(bic_ipc))
+		outp += sprintf(outp, "ipc: %lld ", t->instr_count);
+
+	if (do_bic(bic_ipc))
+		outp += sprintf(outp, "%s%.2f", (printed++ ? delim : ""), 1.0 * t->instr_count / t->aperf);
+
+	if (do_bic(bic_ipc))
+		old->instr_count = new->instr_count - old->instr_count;
+
+	t->instr_count = 0;
+
+	average.threads.instr_count += t->instr_count;
+
+	average.threads.instr_count /= topo.num_cpus;
+	if (do_bic(bic_ipc))
+		if (read(get_instr_count_fd(cpu), &t->instr_count, sizeof(long long)) != sizeof(long long))
+			return -4;
+
+/*
+ * linux-perf manages the hw instructions-retired counter
+ * by enabling it when requested, and hiding rollover
+ */
+void linux_perf_init(void)
+{
+	if (!bic_is_enabled(bic_ipc))
+		return;
+
+	if (access("/proc/sys/kernel/perf_event_paranoid", f_ok))
+		return;
+
+	fd_instr_count_percpu = calloc(topo.max_cpu_num + 1, sizeof(int));
+	if (fd_instr_count_percpu == null)
+		err(-1, "calloc fd_instr_count_percpu");
+
+	bic_present(bic_ipc);
+}
+
+	linux_perf_init();
+	{"ipc", no_argument, 0, 'i'},
|
Power Management
|
2af4f9b8596afbbd7667a18fa71d117bac227dea
|
len brown
|
tools
|
power
|
turbostat, x86
|
tools/power turbostat: support alder lake mobile
|
share the code between alder lake mobile and alder lake desktop.
|
|
support alder lake mobile
|
|
|
['tools/power turbostat ']
|
['c']
| 1
| 1
| 0
|
diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
--- a/tools/power/x86/turbostat/turbostat.c
+++ b/tools/power/x86/turbostat/turbostat.c
+	case intel_fam6_alderlake_l:
|
Power Management
|
5683460b85a8a14c5eec10e363635ad4660eb961
|
chen yu
|
tools
|
power
|
turbostat, x86
|
tools/power/x86/intel-speed-select: add options to force online
|
it is possible that users manually offlined cpus via the sysfs interface and then started this utility. in this case we will not be able to get the package and die ids of those cpus, so add an option to force cpus online when required for some commands.
|
|
add options to force online
|
|
|
['tools/power/x86/intel-speed-select']
|
['c']
| 1
| 21
| 2
|
--- diff --git a/tools/power/x86/intel-speed-select/isst-config.c b/tools/power/x86/intel-speed-select/isst-config.c --- a/tools/power/x86/intel-speed-select/isst-config.c +++ b/tools/power/x86/intel-speed-select/isst-config.c +static void force_all_cpus_online(void) +{ + int i; + + fprintf(stderr, "forcing all cpus online "); + + for (i = 0; i < topo_max_cpus; ++i) + set_cpu_online_offline(i, 1); + + unlink("/var/run/isst_cpu_topology.dat"); +} + + printf(" [-a|--all-cpus-online] : force online every cpu in the system "); - int opt; + int opt, force_cpus_online = 0; + { "all-cpus-online", no_argument, 0, 'a' }, - while ((opt = getopt_long_only(argc, argv, "+c:df:hio:v", long_options, + while ((opt = getopt_long_only(argc, argv, "+c:df:hio:va", long_options, + case 'a': + force_cpus_online = 1; + break; + if (force_cpus_online) + force_all_cpus_online();
|
Power Management
|
0d3dfd75708117cedf0cea200e9c6fa266129fb5
|
srinivas pandruvada
|
tools
|
power
|
intel-speed-select, x86
|
thermal/drivers/tsens: don't hardcode sensor slope
|
the function compute_intercept_slope hardcodes the sensor slope to slope_default. change this to use the default value only if a slope is not already defined. this is needed for tsens ver_0, which has a hardcoded slope table.
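the fix boils down to a fall-back pattern; a minimal sketch, assuming the mainline default slope value of 3200:

```c
#include <assert.h>

#define SLOPE_DEFAULT 3200  /* assumed default slope value */

/* keep a per-sensor slope when one was provided by the version data,
 * and only fall back to the default otherwise */
static int effective_slope(int sensor_slope)
{
    return sensor_slope ? sensor_slope : SLOPE_DEFAULT;
}
```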
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for ipq8064 tsens
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['thermal ']
|
['c']
| 1
| 2
| 1
|
--- diff --git a/drivers/thermal/qcom/tsens.c b/drivers/thermal/qcom/tsens.c --- a/drivers/thermal/qcom/tsens.c +++ b/drivers/thermal/qcom/tsens.c - priv->sensor[i].slope = slope_default; + if (!priv->sensor[i].slope) + priv->sensor[i].slope = slope_default;
|
Power Management
|
9d51769b2e75bb33c56c8f9ee933eca2d92b375b
|
ansuel smith thara gopinath thara gopinath linaro org
|
drivers
|
thermal
|
qcom
|
thermal/drivers/tsens: convert msm8960 to reg_field
|
convert the msm8960 driver to reg_field so that it can use the init_common function.
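a simplified model of what the reg_field conversion buys: each register field is described by an offset plus an lsb..msb span, and reads become a generic shift-and-mask. the struct and macro here mirror, but are not, the kernel's regmap API:

```c
#include <assert.h>
#include <stdint.h>

/* simplified stand-in for the kernel's struct reg_field / REG_FIELD() */
struct reg_field { uint32_t reg; unsigned int lsb; unsigned int msb; };

#define REG_FIELD(r, l, m) { .reg = (r), .lsb = (l), .msb = (m) }

/* extract a field from a raw register value */
static uint32_t field_read(uint32_t regval, struct reg_field f)
{
    unsigned int width = f.msb - f.lsb + 1;
    uint32_t mask = (width >= 32) ? 0xffffffffu : ((1u << width) - 1);

    return (regval >> f.lsb) & mask;
}
```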
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for ipq8064 tsens
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['thermal ']
|
['c']
| 1
| 71
| 2
|
--- diff --git a/drivers/thermal/qcom/tsens-8960.c b/drivers/thermal/qcom/tsens-8960.c --- a/drivers/thermal/qcom/tsens-8960.c +++ b/drivers/thermal/qcom/tsens-8960.c -#define s0_status_addr 0x3628 +#define s0_status_off 0x3628 +#define s1_status_off 0x362c +#define s2_status_off 0x3630 +#define s3_status_off 0x3634 +#define s4_status_off 0x3638 +#define s5_status_off 0x3664 /* sensors 5-10 found on apq8064/msm8960 */ +#define s6_status_off 0x3668 +#define s7_status_off 0x366c +#define s8_status_off 0x3670 +#define s9_status_off 0x3674 +#define s10_status_off 0x3678 + - priv->sensor[i].status = s0_status_addr + 40; + priv->sensor[i].status = s0_status_off + 40; +static const struct reg_field tsens_8960_regfields[max_regfields] = { + /* ----- srot ------ */ + /* no version information */ + + /* cntl */ + [tsens_en] = reg_field(cntl_addr, 0, 0), + [tsens_sw_rst] = reg_field(cntl_addr, 1, 1), + /* 8960 has 5 sensors, 8660 has 11, we only handle 5 */ + [sensor_en] = reg_field(cntl_addr, 3, 7), + + /* ----- tm ------ */ + /* interrupt enable */ + /* no interrupt enable */ + + /* single upper/lower temperature threshold for all sensors */ + [low_thresh_0] = reg_field(threshold_addr, 0, 7), + [up_thresh_0] = reg_field(threshold_addr, 8, 15), + /* min_thresh_0 and max_thresh_0 are not present in the regfield + * recycle crit_thresh_0 and 1 to set the required regs to hardcoded temp + * min_thresh_0 -> crit_thresh_1 + * max_thresh_0 -> crit_thresh_0 + */ + [crit_thresh_1] = reg_field(threshold_addr, 16, 23), + [crit_thresh_0] = reg_field(threshold_addr, 24, 31), + + /* upper/lower interrupt [clear/status] */ + /* 1 == clear, 0 == normal operation */ + [low_int_clear_0] = reg_field(cntl_addr, 9, 9), + [up_int_clear_0] = reg_field(cntl_addr, 10, 10), + + /* no critical interrupt support on 8960 */ + + /* sn_status */ + [last_temp_0] = reg_field(s0_status_off, 0, 7), + [last_temp_1] = reg_field(s1_status_off, 0, 7), + [last_temp_2] = reg_field(s2_status_off, 0, 7), + 
[last_temp_3] = reg_field(s3_status_off, 0, 7), + [last_temp_4] = reg_field(s4_status_off, 0, 7), + [last_temp_5] = reg_field(s5_status_off, 0, 7), + [last_temp_6] = reg_field(s6_status_off, 0, 7), + [last_temp_7] = reg_field(s7_status_off, 0, 7), + [last_temp_8] = reg_field(s8_status_off, 0, 7), + [last_temp_9] = reg_field(s9_status_off, 0, 7), + [last_temp_10] = reg_field(s10_status_off, 0, 7), + + /* no valid field on 8960 */ + /* tsens_int_status bits: 1 == threshold violated */ + [min_status_0] = reg_field(int_status_addr, 0, 0), + [lower_status_0] = reg_field(int_status_addr, 1, 1), + [upper_status_0] = reg_field(int_status_addr, 2, 2), + /* no critical field on 8960 */ + [max_status_0] = reg_field(int_status_addr, 3, 3), + + /* trdy: 1=ready, 0=in progress */ + [trdy] = reg_field(int_status_addr, 7, 7), +}; + + .fields = tsens_8960_regfields,
|
Power Management
|
a0ed1411278db902a043e584c8ed320fe34346b6
|
ansuel smith thara gopinath thara gopinath linaro org
|
drivers
|
thermal
|
qcom
|
thermal/drivers/tsens: add ver_0 tsens version
|
ver_0 is used to describe devices based on a tsens version before v0.1. these are devices based on msm8960, for example apq8064 or ipq806x. add support for ver_0 in tsens.c and set the right tsens_features in the tsens-8960.c file.
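the version gating described above can be sketched as follows; the enum ordering is the point of the change (ver_0 sorts before ver_0_1, so existing `>= ver_0_1` checks keep working), while ver_1_x/ver_2_x are assumed names for the later ips:

```c
#include <assert.h>

/* ver_0 is inserted before ver_0_1, keeping relational checks valid */
enum tsens_ver { VER_0 = 0, VER_0_1, VER_1_X, VER_2_X };

/* pre-v0.1 ip has no valid bit, so get_temp must poll the trdy bit */
static int needs_trdy_poll(enum tsens_ver v)
{
    return v == VER_0;
}

/* v0.1 and later expose a per-sensor valid bit instead */
static int has_valid_bit(enum tsens_ver v)
{
    return v >= VER_0_1;
}
```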
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for ipq8064 tsens
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['thermal ']
|
['h', 'c']
| 3
| 133
| 30
|
--- diff --git a/drivers/thermal/qcom/tsens-8960.c b/drivers/thermal/qcom/tsens-8960.c --- a/drivers/thermal/qcom/tsens-8960.c +++ b/drivers/thermal/qcom/tsens-8960.c +static struct tsens_features tsens_8960_feat = { + .ver_major = ver_0, + .crit_int = 0, + .adc = 1, + .srot_split = 0, + .max_sensors = 11, +}; + + .feat = &tsens_8960_feat, diff --git a/drivers/thermal/qcom/tsens.c b/drivers/thermal/qcom/tsens.c --- a/drivers/thermal/qcom/tsens.c +++ b/drivers/thermal/qcom/tsens.c +#include <linux/mfd/syscon.h> + + if (tsens_version(priv) < ver_0_1) { + /* constraint: there is only 1 interrupt control register for all + * 11 temperature sensor. so monitoring more than 1 sensor based + * on interrupts will yield inconsistent result. to overcome this + * issue we will monitor only sensor 0 which is the master sensor. + */ + break; + } + if (tsens_version(priv) < ver_0_1) { + /* pre v0.1 ip had a single register for each type of interrupt + * and thresholds + */ + hw_id = 0; + } + - ret = regmap_field_read(priv->rf[valid_idx], &valid); - if (ret) - return ret; - while (!valid) { - /* valid bit is 0 for 6 ahb clock cycles. - * at 19.2mhz, 1 ahb clock is ~60ns. - * we should enter this loop very, very rarely. - */ - ndelay(400); + /* ver_0 doesn't have valid bit */ + if (tsens_version(priv) >= ver_0_1) { + while (!valid) { + /* valid bit is 0 for 6 ahb clock cycles. + * at 19.2mhz, 1 ahb clock is ~60ns. + * we should enter this loop very, very rarely. 
+ */ + ndelay(400); + ret = regmap_field_read(priv->rf[valid_idx], &valid); + if (ret) + return ret; + } - int last_temp = 0, ret; + int last_temp = 0, ret, trdy; + unsigned long timeout; - ret = regmap_field_read(priv->rf[last_temp_0 + hw_id], &last_temp); - if (ret) - return ret; + timeout = jiffies + usecs_to_jiffies(timeout_us); + do { + if (tsens_version(priv) == ver_0) { + ret = regmap_field_read(priv->rf[trdy], &trdy); + if (ret) + return ret; + if (!trdy) + continue; + } - *temp = code_to_degc(last_temp, s) * 1000; + ret = regmap_field_read(priv->rf[last_temp_0 + hw_id], &last_temp); + if (ret) + return ret; - return 0; + *temp = code_to_degc(last_temp, s) * 1000; + + return 0; + } while (time_before(jiffies, timeout)); + + return -etimedout; - res = platform_get_resource(op, ioresource_mem, 0); - tm_base = devm_ioremap_resource(dev, res); - if (is_err(tm_base)) { - ret = ptr_err(tm_base); - goto err_put_device; + if (tsens_version(priv) >= ver_0_1) { + res = platform_get_resource(op, ioresource_mem, 0); + tm_base = devm_ioremap_resource(dev, res); + if (is_err(tm_base)) { + ret = ptr_err(tm_base); + goto err_put_device; + } + + priv->tm_map = devm_regmap_init_mmio(dev, tm_base, &tsens_config); + } else { /* ver_0 share the same gcc regs using a syscon */ + struct device *parent = priv->dev->parent; + + if (parent) + priv->tm_map = syscon_node_to_regmap(parent->of_node); - priv->tm_map = devm_regmap_init_mmio(dev, tm_base, &tsens_config); - if (is_err(priv->tm_map)) { - ret = ptr_err(priv->tm_map); + if (is_err_or_null(priv->tm_map)) { + if (!priv->tm_map) + ret = -enodev; + else + ret = ptr_err(priv->tm_map); + /* ver_0 have only tm_map */ + if (!priv->srot_map) + priv->srot_map = priv->tm_map; + + /* in ver_0 tsens need to be explicitly enabled */ + if (tsens_version(priv) == ver_0) + regmap_field_write(priv->rf[tsens_en], 1); + + priv->rf[tsens_sw_rst] = + devm_regmap_field_alloc(dev, priv->srot_map, priv->fields[tsens_sw_rst]); + if 
(is_err(priv->rf[tsens_sw_rst])) { + ret = ptr_err(priv->rf[tsens_sw_rst]); + goto err_put_device; + } + + priv->rf[trdy] = devm_regmap_field_alloc(dev, priv->tm_map, priv->fields[trdy]); + if (is_err(priv->rf[trdy])) { + ret = ptr_err(priv->rf[trdy]); + goto err_put_device; + } + - if (priv->feat->crit_int) { + if (priv->feat->crit_int || tsens_version(priv) < ver_0_1) { - tsens_enable_irq(priv); + + /* ver_0 interrupt doesn't need to be enabled */ + if (tsens_version(priv) >= ver_0_1) + tsens_enable_irq(priv); + - ret = devm_request_threaded_irq(&pdev->dev, irq, - null, thread_fn, - irqf_oneshot, - dev_name(&pdev->dev), priv); + /* ver_0 interrupt is trigger_rising, ver_0_1 and up is oneshot */ + if (tsens_version(priv) == ver_0) + ret = devm_request_threaded_irq(&pdev->dev, irq, + thread_fn, null, + irqf_trigger_rising, + dev_name(&pdev->dev), + priv); + else + ret = devm_request_threaded_irq(&pdev->dev, irq, null, + thread_fn, irqf_oneshot, + dev_name(&pdev->dev), + priv); + + /* ver_0 require to set min and max thresh + * these 2 regs are set using the: + * - crit_thresh_0 for max thresh hardcoded to 120c + * - crit_thresh_1 for min thresh hardcoded to 0c + */ + if (tsens_version(priv) < ver_0_1) { + regmap_field_write(priv->rf[crit_thresh_0], + tsens_mc_to_hw(priv->sensor, 120000)); + + regmap_field_write(priv->rf[crit_thresh_1], + tsens_mc_to_hw(priv->sensor, 0)); + } + diff --git a/drivers/thermal/qcom/tsens.h b/drivers/thermal/qcom/tsens.h --- a/drivers/thermal/qcom/tsens.h +++ b/drivers/thermal/qcom/tsens.h +#define timeout_us 100 - ver_0_1 = 0, + ver_0 = 0, + ver_0_1,
|
Power Management
|
53e2a20e4c41683b695145436b34aa4a14bbcd8c
|
ansuel smith thara gopinath thara gopinath linaro org
|
drivers
|
thermal
|
qcom
|
thermal/drivers/tsens: use init_common for msm8960
|
use init_common and drop custom init for msm8960.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for ipq8064 tsens
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['thermal ']
|
['c']
| 1
| 1
| 51
|
--- diff --git a/drivers/thermal/qcom/tsens-8960.c b/drivers/thermal/qcom/tsens-8960.c --- a/drivers/thermal/qcom/tsens-8960.c +++ b/drivers/thermal/qcom/tsens-8960.c -static int init_8960(struct tsens_priv *priv) -{ - int ret, i; - u32 reg_cntl; - - priv->tm_map = dev_get_regmap(priv->dev, null); - if (!priv->tm_map) - return -enodev; - - /* - * the status registers for each sensor are discontiguous - * because some socs have 5 sensors while others have more - * but the control registers stay in the same place, i.e - * directly after the first 5 status registers. - */ - for (i = 0; i < priv->num_sensors; i++) { - if (i >= 5) - priv->sensor[i].status = s0_status_off + 40; - priv->sensor[i].status += i * 4; - } - - reg_cntl = sw_rst; - ret = regmap_update_bits(priv->tm_map, cntl_addr, sw_rst, reg_cntl); - if (ret) - return ret; - - if (priv->num_sensors > 1) { - reg_cntl |= slp_clk_ena | (measure_period << 18); - reg_cntl &= ~sw_rst; - ret = regmap_update_bits(priv->tm_map, config_addr, - config_mask, config); - } else { - reg_cntl |= slp_clk_ena_8660 | (measure_period << 16); - reg_cntl &= ~config_mask_8660; - reg_cntl |= config_8660 << config_shift_8660; - } - - reg_cntl |= genmask(priv->num_sensors - 1, 0) << sensor0_shift; - ret = regmap_write(priv->tm_map, cntl_addr, reg_cntl); - if (ret) - return ret; - - reg_cntl |= en; - ret = regmap_write(priv->tm_map, cntl_addr, reg_cntl); - if (ret) - return ret; - - return 0; -} - - .init = init_8960, + .init = init_common,
|
Power Management
|
fdda131f8fbadee2dfc21f0787d11547b42a961e
|
ansuel smith thara gopinath thara gopinath linaro org
|
drivers
|
thermal
|
qcom
|
thermal/drivers/tsens: fix bug in sensor enable for msm8960
|
devices based on tsens ver_0 contain a hardware bug that causes problems with sensor enablement: sensor ids 6-11 can't be enabled selectively, and all of them must be enabled in one step.
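the workaround in the diff can be reduced to a mask computation; a sketch, with genmask() reimplemented locally and sensor0_shift assumed to be 3 (the sensor_en field starts at bit 3 of cntl):

```c
#include <assert.h>
#include <stdint.h>

#define SENSOR0_SHIFT 3  /* sensor enable bits start at bit 3 of CNTL */

/* local reimplementation of the kernel's GENMASK(h, l): bits l..h set */
#define GENMASK(h, l) ((~0u << (l)) & (~0u >> (31 - (h))))

/* sensors 0-5 can be enabled individually; ids above 5 hit the
 * hardware bug, so the whole 6..10 group is enabled in one step */
static uint32_t enable_mask(int id)
{
    uint32_t mask = (id > 5) ? GENMASK(10, 6) : (1u << id);

    return mask << SENSOR0_SHIFT;
}
```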
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for ipq8064 tsens
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['thermal ']
|
['c']
| 1
| 20
| 3
|
--- diff --git a/drivers/thermal/qcom/tsens-8960.c b/drivers/thermal/qcom/tsens-8960.c --- a/drivers/thermal/qcom/tsens-8960.c +++ b/drivers/thermal/qcom/tsens-8960.c +#define measure_period bit(18) -#define measure_period 1 - u32 reg, mask; + u32 reg, mask = bit(id); - mask = bit(id + sensor0_shift); + /* hardware bug: + * on platforms with more than 6 sensors, all remaining sensors + * must be enabled together, otherwise undefined results are expected. + * (sensor 6-7 disabled, sensor 3 disabled...) in the original driver, + * all the sensors are enabled in one step hence this bug is not + * triggered. + */ + if (id > 5) + mask = genmask(10, 6); + + mask <<= sensor0_shift; + + /* sensors already enabled. skip. */ + if ((reg & mask) == mask) + return 0; + + reg |= measure_period; +
|
Power Management
|
3d08f029fdbbd29c8b363ef4c8c4bfe3b8f79ad0
|
ansuel smith thara gopinath thara gopinath linaro org
|
drivers
|
thermal
|
qcom
|
thermal/drivers/tsens: replace custom 8960 apis with generic apis
|
rework the calibrate function to use the common helper. derive the offset from a hardcoded slope table (previously missing) and the data from the nvmem calib efuses. drop the custom get_temp function and use the generic api.
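the one-point calibration arithmetic implied here can be sketched as follows; the helper names and the cal-point convention are assumptions, not the driver's exact code:

```c
#include <assert.h>

/* with a per-sensor slope and one calibration code p1 measured at
 * cal_degc degrees, the intercept follows from the line equation
 *   temp_mC(code) = code * slope + offset
 * evaluated at the calibration point */
static int calib_offset(int slope, int p1, int cal_degc)
{
    return cal_degc * 1000 - slope * p1;
}

static int code_to_mdegc(int code, int slope, int offset)
{
    return code * slope + offset;
}
```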
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for ipq8064 tsens
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['thermal ']
|
['c']
| 1
| 15
| 41
|
--- diff --git a/drivers/thermal/qcom/tsens-8960.c b/drivers/thermal/qcom/tsens-8960.c --- a/drivers/thermal/qcom/tsens-8960.c +++ b/drivers/thermal/qcom/tsens-8960.c +/* original slope - 350 to compensate mc to c inaccuracy */ +static u32 tsens_msm8960_slope[] = { + 826, 826, 804, 826, + 761, 782, 782, 849, + 782, 849, 782 + }; + - - ssize_t num_read = priv->num_sensors; - struct tsens_sensor *s = priv->sensor; + u32 p1[11]; - for (i = 0; i < num_read; i++, s++) - s->offset = data[i]; + for (i = 0; i < priv->num_sensors; i++) { + p1[i] = data[i]; + priv->sensor[i].slope = tsens_msm8960_slope[i]; + } + + compute_intercept_slope(priv, p1, null, one_pt_calib); -/* temperature on y axis and adc-code on x-axis */ -static inline int code_to_mdegc(u32 adc_code, const struct tsens_sensor *s) -{ - int slope, offset; - - slope = thermal_zone_get_slope(s->tzd); - offset = cal_mdegc - slope * s->offset; - - return adc_code * slope + offset; -} - -static int get_temp_8960(const struct tsens_sensor *s, int *temp) -{ - int ret; - u32 code, trdy; - struct tsens_priv *priv = s->priv; - unsigned long timeout; - - timeout = jiffies + usecs_to_jiffies(timeout_us); - do { - ret = regmap_read(priv->tm_map, int_status_addr, &trdy); - if (ret) - return ret; - if (!(trdy & trdy_mask)) - continue; - ret = regmap_read(priv->tm_map, s->status, &code); - if (ret) - return ret; - *temp = code_to_mdegc(code, s); - return 0; - } while (time_before(jiffies, timeout)); - - return -etimedout; -} - - .get_temp = get_temp_8960, + .get_temp = get_temp_common,
|
Power Management
|
dfc1193d4dbd6c3cb68c944413146c940bde290a
|
ansuel smith thara gopinath thara gopinath linaro org
|
drivers
|
thermal
|
qcom
|
thermal/drivers/tsens: drop unused define for msm8960
|
drop the unused defines for msm8960 that were replaced by the generic api and reg_field.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for ipq8064 tsens
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['thermal ']
|
['c']
| 1
| 1
| 23
|
--- diff --git a/drivers/thermal/qcom/tsens-8960.c b/drivers/thermal/qcom/tsens-8960.c --- a/drivers/thermal/qcom/tsens-8960.c +++ b/drivers/thermal/qcom/tsens-8960.c -#define cal_mdegc 30000 - -#define status_cntl_addr_8064 0x3660 -#define sensor0_en bit(3) + -/* int_status_addr bitmasks */ -#define min_status_mask bit(0) -#define lower_status_clr bit(1) -#define upper_status_clr bit(2) -#define max_status_mask bit(3) - -/* threshold_addr bitmasks */ -#define threshold_max_limit_shift 24 -#define threshold_min_limit_shift 16 -#define threshold_upper_limit_shift 8 -#define threshold_lower_limit_shift 0 - -/* initial temperature threshold values */ -#define lower_limit_th 0x50 -#define upper_limit_th 0xdf -#define min_limit_th 0x0 -#define max_limit_th 0xff -#define trdy_mask bit(7) -#define timeout_us 100
|
Power Management
|
2ebd0982e6ba69d9f9c02a4a0aab705a5526283e
|
ansuel smith thara gopinath thara gopinath linaro org
|
drivers
|
thermal
|
qcom
|
thermal/drivers/tsens: add support for ipq8064-tsens
|
add support for the tsens present in ipq806x socs, based on the generic msm8960 tsens driver.
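the change itself is a one-entry addition to the of match table; a simplified sketch (the struct and the string data pointer are stand-ins for the kernel's struct of_device_id and &data_8960):

```c
#include <assert.h>
#include <string.h>

/* simplified stand-in for the kernel's struct of_device_id */
struct of_match { const char *compatible; const char *data; };

/* ipq8064 reuses the msm8960 platform data */
static const struct of_match tsens_table[] = {
    { .compatible = "qcom,ipq8064-tsens", .data = "data_8960" },
    { .compatible = "qcom,msm8960-tsens", .data = "data_8960" },
    { 0 }, /* sentinel */
};

/* linear lookup, as of_match_device effectively does */
static const char *match_data(const char *compat)
{
    const struct of_match *m;

    for (m = tsens_table; m->compatible; m++)
        if (strcmp(m->compatible, compat) == 0)
            return m->data;
    return 0;
}
```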
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for ipq8064 tsens
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['thermal ']
|
['c']
| 1
| 3
| 0
|
--- diff --git a/drivers/thermal/qcom/tsens.c b/drivers/thermal/qcom/tsens.c --- a/drivers/thermal/qcom/tsens.c +++ b/drivers/thermal/qcom/tsens.c + .compatible = "qcom,ipq8064-tsens", + .data = &data_8960, + }, {
|
Power Management
|
6b3aeafbc12c18036809108e301efe8056249233
|
ansuel smith thara gopinath thara gopinath linaro org
|
drivers
|
thermal
|
qcom
|
dt-bindings: thermal: tsens: document ipq8064 bindings
|
document the bindings used for msm8960 tsens based devices. msm8960 uses the same gcc regs and is set as a child node of the qcom gcc.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for ipq8064 tsens
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['thermal ']
|
['yaml']
| 1
| 48
| 8
|
--- diff --git a/documentation/devicetree/bindings/thermal/qcom-tsens.yaml b/documentation/devicetree/bindings/thermal/qcom-tsens.yaml --- a/documentation/devicetree/bindings/thermal/qcom-tsens.yaml +++ b/documentation/devicetree/bindings/thermal/qcom-tsens.yaml + - description: msm9860 tsens based + items: + - enum: + - qcom,ipq8064-tsens + - description: v0.1 of tsens - enum: - const: calib - - const: calib_sel + - enum: + - calib_backup + - calib_sel +required: + - compatible + - interrupts + - interrupt-names + - "#thermal-sensor-cells" + - "#qcom,sensors" + - if: + - qcom,ipq8064-tsens - qcom,mdm9607-tsens - qcom,msm8916-tsens - qcom,msm8974-tsens -required: - - compatible - - reg - - "#qcom,sensors" - - interrupts - - interrupt-names - - "#thermal-sensor-cells" + - if: + properties: + compatible: + contains: + enum: + - qcom,tsens-v0_1 + - qcom,tsens-v1 + - qcom,tsens-v2 + + then: + required: + - reg + - | + #include <dt-bindings/interrupt-controller/arm-gic.h> + // example msm9860 based soc (ipq8064): + gcc: clock-controller { + + /* ... */ + + tsens: thermal-sensor { + compatible = "qcom,ipq8064-tsens"; + + nvmem-cells = <&tsens_calib>, <&tsens_calib_backup>; + nvmem-cell-names = "calib", "calib_backup"; + interrupts = <gic_spi 178 irq_type_level_high>; + interrupt-names = "uplow"; + + #qcom,sensors = <11>; + #thermal-sensor-cells = <1>; + }; + }; + - |
|
Power Management
|
26b2f03d2adf43d0dc9aeeb3fff54dcc9fcdb1f4
|
ansuel smith, rob herring <robh@kernel.org>
|
documentation
|
devicetree
|
bindings, thermal
|
thermal/drivers/qcom/tsens-v0_1: add support for mdm9607
|
mdm9607 tsens ip is very similar to that of msm8916, with minor adjustments to various tuning values.
|
this release includes the landlock security module, which aims to make easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tbl flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for mdm9607
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['thermal ', 'tsens-v0_1']
|
['h', 'c']
| 3
| 101
| 2
|
--- diff --git a/drivers/thermal/qcom/tsens-v0_1.c b/drivers/thermal/qcom/tsens-v0_1.c --- a/drivers/thermal/qcom/tsens-v0_1.c +++ b/drivers/thermal/qcom/tsens-v0_1.c +/* eeprom layout data for mdm9607 */ +#define mdm9607_base0_mask 0x000000ff +#define mdm9607_base1_mask 0x000ff000 +#define mdm9607_base0_shift 0 +#define mdm9607_base1_shift 12 + +#define mdm9607_s0_p1_mask 0x00003f00 +#define mdm9607_s1_p1_mask 0x03f00000 +#define mdm9607_s2_p1_mask 0x0000003f +#define mdm9607_s3_p1_mask 0x0003f000 +#define mdm9607_s4_p1_mask 0x0000003f + +#define mdm9607_s0_p2_mask 0x000fc000 +#define mdm9607_s1_p2_mask 0xfc000000 +#define mdm9607_s2_p2_mask 0x00000fc0 +#define mdm9607_s3_p2_mask 0x00fc0000 +#define mdm9607_s4_p2_mask 0x00000fc0 + +#define mdm9607_s0_p1_shift 8 +#define mdm9607_s1_p1_shift 20 +#define mdm9607_s2_p1_shift 0 +#define mdm9607_s3_p1_shift 12 +#define mdm9607_s4_p1_shift 0 + +#define mdm9607_s0_p2_shift 14 +#define mdm9607_s1_p2_shift 26 +#define mdm9607_s2_p2_shift 6 +#define mdm9607_s3_p2_shift 18 +#define mdm9607_s4_p2_shift 6 + +#define mdm9607_cal_sel_mask 0x00700000 +#define mdm9607_cal_sel_shift 20 + -/* v0.1: 8916, 8939, 8974 */ +static int calibrate_9607(struct tsens_priv *priv) +{ + int base, i; + u32 p1[5], p2[5]; + int mode = 0; + u32 *qfprom_cdata; + + qfprom_cdata = (u32 *)qfprom_read(priv->dev, "calib"); + if (is_err(qfprom_cdata)) + return ptr_err(qfprom_cdata); + + mode = (qfprom_cdata[2] & mdm9607_cal_sel_mask) >> mdm9607_cal_sel_shift; + dev_dbg(priv->dev, "calibration mode is %d ", mode); + + switch (mode) { + case two_pt_calib: + base = (qfprom_cdata[2] & mdm9607_base1_mask) >> mdm9607_base1_shift; + p2[0] = (qfprom_cdata[0] & mdm9607_s0_p2_mask) >> mdm9607_s0_p2_shift; + p2[1] = (qfprom_cdata[0] & mdm9607_s1_p2_mask) >> mdm9607_s1_p2_shift; + p2[2] = (qfprom_cdata[1] & mdm9607_s2_p2_mask) >> mdm9607_s2_p2_shift; + p2[3] = (qfprom_cdata[1] & mdm9607_s3_p2_mask) >> mdm9607_s3_p2_shift; + p2[4] = (qfprom_cdata[2] & 
mdm9607_s4_p2_mask) >> mdm9607_s4_p2_shift; + for (i = 0; i < priv->num_sensors; i++) + p2[i] = ((base + p2[i]) << 2); + fallthrough; + case one_pt_calib2: + base = (qfprom_cdata[0] & mdm9607_base0_mask); + p1[0] = (qfprom_cdata[0] & mdm9607_s0_p1_mask) >> mdm9607_s0_p1_shift; + p1[1] = (qfprom_cdata[0] & mdm9607_s1_p1_mask) >> mdm9607_s1_p1_shift; + p1[2] = (qfprom_cdata[1] & mdm9607_s2_p1_mask) >> mdm9607_s2_p1_shift; + p1[3] = (qfprom_cdata[1] & mdm9607_s3_p1_mask) >> mdm9607_s3_p1_shift; + p1[4] = (qfprom_cdata[2] & mdm9607_s4_p1_mask) >> mdm9607_s4_p1_shift; + for (i = 0; i < priv->num_sensors; i++) + p1[i] = ((base + p1[i]) << 2); + break; + default: + for (i = 0; i < priv->num_sensors; i++) { + p1[i] = 500; + p2[i] = 780; + } + break; + } + + compute_intercept_slope(priv, p1, p2, mode); + kfree(qfprom_cdata); + + return 0; +} + +/* v0.1: 8916, 8939, 8974, 9607 */ + +static const struct tsens_ops ops_9607 = { + .init = init_common, + .calibrate = calibrate_9607, + .get_temp = get_temp_common, +}; + +struct tsens_plat_data data_9607 = { + .num_sensors = 5, + .ops = &ops_9607, + .hw_ids = (unsigned int []){ 0, 1, 2, 3, 4 }, + .feat = &tsens_v0_1_feat, + .fields = tsens_v0_1_regfields, +}; diff --git a/drivers/thermal/qcom/tsens.c b/drivers/thermal/qcom/tsens.c --- a/drivers/thermal/qcom/tsens.c +++ b/drivers/thermal/qcom/tsens.c + .compatible = "qcom,mdm9607-tsens", + .data = &data_9607, + }, { diff --git a/drivers/thermal/qcom/tsens.h b/drivers/thermal/qcom/tsens.h --- a/drivers/thermal/qcom/tsens.h +++ b/drivers/thermal/qcom/tsens.h -extern struct tsens_plat_data data_8916, data_8939, data_8974; +extern struct tsens_plat_data data_8916, data_8939, data_8974, data_9607;
|
Power Management
|
a2149ab815fce21d0d83082818116519e44f87be
|
konrad dybcio, thara gopinath <thara.gopinath@linaro.org>
|
drivers
|
thermal
|
qcom
|
thermal/drivers/intel: introduce tcc cooling driver
|
on intel processors, the core frequency can be reduced below the os request when the current temperature reaches the tcc (thermal control circuit) activation temperature.
|
this release includes the landlock security module, which aims to make easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tbl flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
introduce tcc cooling driver
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['thermal ', 'intel']
|
['kconfig', 'c', 'makefile']
| 3
| 141
| 0
|
--- diff --git a/drivers/thermal/intel/kconfig b/drivers/thermal/intel/kconfig --- a/drivers/thermal/intel/kconfig +++ b/drivers/thermal/intel/kconfig + +config intel_tcc_cooling + tristate "intel tcc offset cooling driver" + depends on x86 + help + enable this to support system cooling by adjusting the effective tcc + activation temperature via the tcc offset register, which is widely + supported on modern intel platforms. + note that, on different platforms, the behavior might be different + on how fast the setting takes effect, and how much the cpu frequency + is reduced. diff --git a/drivers/thermal/intel/makefile b/drivers/thermal/intel/makefile --- a/drivers/thermal/intel/makefile +++ b/drivers/thermal/intel/makefile +obj-$(config_intel_tcc_cooling) += intel_tcc_cooling.o diff --git a/drivers/thermal/intel/intel_tcc_cooling.c b/drivers/thermal/intel/intel_tcc_cooling.c --- /dev/null +++ b/drivers/thermal/intel/intel_tcc_cooling.c +// spdx-license-identifier: gpl-2.0-only +/* + * cooling device driver that activates the processor throttling by + * programming the tcc offset register. + * copyright (c) 2021, intel corporation. 
+ */ +#define pr_fmt(fmt) kbuild_modname ": " fmt + +#include <linux/device.h> +#include <linux/module.h> +#include <linux/thermal.h> +#include <asm/cpu_device_id.h> + +#define tcc_shift 24 +#define tcc_mask (0x3full<<24) +#define tcc_programmable bit(30) + +static struct thermal_cooling_device *tcc_cdev; + +static int tcc_get_max_state(struct thermal_cooling_device *cdev, unsigned long + *state) +{ + *state = tcc_mask >> tcc_shift; + return 0; +} + +static int tcc_offset_update(int tcc) +{ + u64 val; + int err; + + err = rdmsrl_safe(msr_ia32_temperature_target, &val); + if (err) + return err; + + val &= ~tcc_mask; + val |= tcc << tcc_shift; + + err = wrmsrl_safe(msr_ia32_temperature_target, val); + if (err) + return err; + + return 0; +} + +static int tcc_get_cur_state(struct thermal_cooling_device *cdev, unsigned long + *state) +{ + u64 val; + int err; + + err = rdmsrl_safe(msr_ia32_temperature_target, &val); + if (err) + return err; + + *state = (val & tcc_mask) >> tcc_shift; + return 0; +} + +static int tcc_set_cur_state(struct thermal_cooling_device *cdev, unsigned long + state) +{ + return tcc_offset_update(state); +} + +static const struct thermal_cooling_device_ops tcc_cooling_ops = { + .get_max_state = tcc_get_max_state, + .get_cur_state = tcc_get_cur_state, + .set_cur_state = tcc_set_cur_state, +}; + +static const struct x86_cpu_id tcc_ids[] __initconst = { + x86_match_intel_fam6_model(skylake, null), + x86_match_intel_fam6_model(skylake_l, null), + x86_match_intel_fam6_model(kabylake, null), + x86_match_intel_fam6_model(kabylake_l, null), + x86_match_intel_fam6_model(icelake, null), + x86_match_intel_fam6_model(icelake_l, null), + x86_match_intel_fam6_model(tigerlake, null), + x86_match_intel_fam6_model(tigerlake_l, null), + x86_match_intel_fam6_model(cometlake, null), + {} +}; + +module_device_table(x86cpu, tcc_ids); + +static int __init tcc_cooling_init(void) +{ + int ret; + u64 val; + const struct x86_cpu_id *id; + + int err; + + id = 
x86_match_cpu(tcc_ids); + if (!id) + return -enodev; + + err = rdmsrl_safe(msr_platform_info, &val); + if (err) + return err; + + if (!(val & tcc_programmable)) + return -enodev; + + pr_info("programmable tcc offset detected "); + + tcc_cdev = + thermal_cooling_device_register("tcc offset", null, + &tcc_cooling_ops); + if (is_err(tcc_cdev)) { + ret = ptr_err(tcc_cdev); + return ret; + } + return 0; +} + +module_init(tcc_cooling_init) + +static void __exit tcc_cooling_exit(void) +{ + thermal_cooling_device_unregister(tcc_cdev); +} + +module_exit(tcc_cooling_exit) + +module_description("tcc offset cooling device driver"); +module_author("zhang rui <rui.zhang@intel.com>"); +module_license("gpl v2");
|
Power Management
|
2eb87d75f980bcc7c2bd370661f8fcc4ec273ea5
|
zhang rui
|
drivers
|
thermal
|
intel
|
thermal/drivers/qcom/tsens_v1: enable sensor 3 on msm8976
|
the sensor *is* in fact used and does report temperature.
|
this release includes the landlock security module, which aims to make easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tbl flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
enable sensor 3 on msm8976
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['thermal ', 'qcom/tsens_v1']
|
['c']
| 1
| 2
| 2
|
--- diff --git a/drivers/thermal/qcom/tsens-v1.c b/drivers/thermal/qcom/tsens-v1.c --- a/drivers/thermal/qcom/tsens-v1.c +++ b/drivers/thermal/qcom/tsens-v1.c -/* valid for both msm8956 and msm8976. sensor id 3 is unused. */ +/* valid for both msm8956 and msm8976. */ - .hw_ids = (unsigned int[]){0, 1, 2, 4, 5, 6, 7, 8, 9, 10}, + .hw_ids = (unsigned int[]){0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10},
|
Power Management
|
007d81a4519f04fa5ced5e9e28bf70cd753c398d
|
konrad dybcio, thara gopinath <thara.gopinath@linaro.org>
|
drivers
|
thermal
|
qcom
|
thermal: rcar_gen3_thermal: add support for up to five tsc nodes
|
add support for up to five tsc nodes. the new thcode values are taken from the example in the datasheet.
|
this release includes the landlock security module, which aims to make easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tbl flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for up to five tsc nodes
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['thermal ', 'rcar_gen3_thermal']
|
['c']
| 1
| 2
| 1
|
--- diff --git a/drivers/thermal/rcar_gen3_thermal.c b/drivers/thermal/rcar_gen3_thermal.c --- a/drivers/thermal/rcar_gen3_thermal.c +++ b/drivers/thermal/rcar_gen3_thermal.c -#define tsc_max_num 4 +#define tsc_max_num 5 + { 3356, 2724, 2244 },
|
Power Management
|
7fd49ca05be35a85c424a3ca8df931bd70c34535
|
niklas söderlund, geert uytterhoeven <geert+renesas@glider.be>
|
drivers
|
thermal
| |
ata: ahci_tegra: add ahci support for tegra186
|
this patch adds support for ahci-compliant serial ata controller on tegra186 soc.
|
this release includes the landlock security module, which aims to make easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tbl flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add ahci support for tegra186
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['ata', 'ahci_tegra']
|
['c']
| 1
| 47
| 13
|
--- diff --git a/drivers/ata/ahci_tegra.c b/drivers/ata/ahci_tegra.c --- a/drivers/ata/ahci_tegra.c +++ b/drivers/ata/ahci_tegra.c -#define t_sata0_nvoob_comma_cnt_mask (0xff << 16) -#define t_sata0_nvoob_comma_cnt (0x07 << 16) +struct tegra_ahci_regs { + unsigned int nvoob_comma_cnt_mask; + unsigned int nvoob_comma_cnt_val; +}; + + bool has_sata_oob_rst; + const struct tegra_ahci_regs *regs; - ret = tegra_powergate_sequence_power_up(tegra_powergate_sata, - tegra->sata_clk, - tegra->sata_rst); - if (ret) - goto disable_regulators; + if (!tegra->pdev->dev.pm_domain) { + ret = tegra_powergate_sequence_power_up(tegra_powergate_sata, + tegra->sata_clk, + tegra->sata_rst); + if (ret) + goto disable_regulators; + } - val &= ~(t_sata0_nvoob_comma_cnt_mask | + val &= ~(tegra->soc->regs->nvoob_comma_cnt_mask | - val |= (t_sata0_nvoob_comma_cnt | + val |= (tegra->soc->regs->nvoob_comma_cnt_val | +static const struct tegra_ahci_regs tegra124_ahci_regs = { + .nvoob_comma_cnt_mask = genmask(30, 28), + .nvoob_comma_cnt_val = (7 << 28), +}; + + .has_sata_oob_rst = true, + .regs = &tegra124_ahci_regs, + .has_sata_oob_rst = true, + .regs = &tegra124_ahci_regs, +}; + +static const struct tegra_ahci_regs tegra186_ahci_regs = { + .nvoob_comma_cnt_mask = genmask(23, 16), + .nvoob_comma_cnt_val = (7 << 16), +}; + +static const struct tegra_ahci_soc tegra186_ahci_soc = { + .supports_devslp = false, + .has_sata_oob_rst = false, + .regs = &tegra186_ahci_regs, + { + .compatible = "nvidia,tegra186-ahci", + .data = &tegra186_ahci_soc + }, - tegra->sata_oob_rst = devm_reset_control_get(&pdev->dev, "sata-oob"); - if (is_err(tegra->sata_oob_rst)) { - dev_err(&pdev->dev, "failed to get sata-oob reset "); - return ptr_err(tegra->sata_oob_rst); + if (tegra->soc->has_sata_oob_rst) { + tegra->sata_oob_rst = devm_reset_control_get(&pdev->dev, + "sata-oob"); + if (is_err(tegra->sata_oob_rst)) { + dev_err(&pdev->dev, "failed to get sata-oob reset "); + return ptr_err(tegra->sata_oob_rst); + }
|
Storage
|
868ed7311cd81ef2fffa2cd36e72c44f226b0085
|
sowjanya komatineni
|
drivers
|
ata
| |
nvme: add 'kato' sysfs attribute
|
add a 'kato' controller sysfs attribute to display the current keep-alive timeout value (if any). this allows userspace to identify persistent discovery controllers, as these will have a non-zero kato value.
|
this release includes the landlock security module, which aims to make easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tbl flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add 'kato' sysfs attribute
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['nvme ']
|
['c']
| 1
| 2
| 0
|
--- diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c +nvme_show_int_function(kato); + &dev_attr_kato.attr,
|
Storage
|
74c22990f08c9f922f775939a4ebc814ca2c49eb
|
hannes reinecke, sagi grimberg <sagi@grimberg.me>
|
drivers
|
nvme
|
host
|
nvme: export fast_io_fail_tmo to sysfs
|
commit 8c4dfea97f15 ("nvme-fabrics: reject i/o to offline device") introduced fast_io_fail_tmo but didn't export the value to sysfs. the value can be set during the 'nvme connect'. export the timeout value to user space via sysfs to allow runtime configuration.
|
this release includes the landlock security module, which aims to make easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tbl flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
export fast_io_fail_tmo to sysfs
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['nvme ']
|
['c']
| 1
| 31
| 0
|
--- diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c +static ssize_t nvme_ctrl_fast_io_fail_tmo_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct nvme_ctrl *ctrl = dev_get_drvdata(dev); + + if (ctrl->opts->fast_io_fail_tmo == -1) + return sysfs_emit(buf, "off "); + return sysfs_emit(buf, "%d ", ctrl->opts->fast_io_fail_tmo); +} + +static ssize_t nvme_ctrl_fast_io_fail_tmo_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct nvme_ctrl *ctrl = dev_get_drvdata(dev); + struct nvmf_ctrl_options *opts = ctrl->opts; + int fast_io_fail_tmo, err; + + err = kstrtoint(buf, 10, &fast_io_fail_tmo); + if (err) + return -einval; + + if (fast_io_fail_tmo < 0) + opts->fast_io_fail_tmo = -1; + else + opts->fast_io_fail_tmo = fast_io_fail_tmo; + return count; +} +static device_attr(fast_io_fail_tmo, s_irugo | s_iwusr, + nvme_ctrl_fast_io_fail_tmo_show, nvme_ctrl_fast_io_fail_tmo_store); + + &dev_attr_fast_io_fail_tmo.attr,
|
Storage
|
09fbed636382867733c1713c9fe2fa2926dac537
|
daniel wagner, ewan d. milne <emilne@redhat.com>, sagi grimberg <sagi@grimberg.me>, himanshu madhani <himanshu.madhaani@oracle.com>
|
drivers
|
nvme
|
host
|
nvme: implement non-mdts command limits
|
commands that access lba contents without a data transfer between the host historically have not had a spec defined upper limit. the driver set the queue constraints for such commands to the max data transfer size just to be safe, but this artificial constraint frequently limits devices below their capabilities.
|
this release includes the landlock security module, which aims to make easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tbl flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
implement non-mdts command limits
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['nvme ']
|
['h', 'c']
| 3
| 85
| 34
|
--- diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c - if (!(ctrl->oncs & nvme_ctrl_oncs_dsm)) { + if (ctrl->max_discard_sectors == 0) { - blk_queue_max_discard_sectors(queue, uint_max); - blk_queue_max_discard_segments(queue, nvme_dsm_max_ranges); + blk_queue_max_discard_sectors(queue, ctrl->max_discard_sectors); + blk_queue_max_discard_segments(queue, ctrl->max_discard_segments); -static void nvme_config_write_zeroes(struct gendisk *disk, struct nvme_ns *ns) -{ - u64 max_blocks; - - if (!(ns->ctrl->oncs & nvme_ctrl_oncs_write_zeroes) || - (ns->ctrl->quirks & nvme_quirk_disable_write_zeroes)) - return; - /* - * even though nvme spec explicitly states that mdts is not - * applicable to the write-zeroes:- "the restriction does not apply to - * commands that do not transfer data between the host and the - * controller (e.g., write uncorrectable ro write zeroes command).". - * in order to be more cautious use controller's max_hw_sectors value - * to configure the maximum sectors for the write-zeroes which is - * configured based on the controller's mdts field in the - * nvme_init_ctrl_finish() if available. 
- */ - if (ns->ctrl->max_hw_sectors == uint_max) - max_blocks = (u64)ushrt_max + 1; - else - max_blocks = ns->ctrl->max_hw_sectors + 1; - - blk_queue_max_write_zeroes_sectors(disk->queue, - nvme_lba_to_sect(ns, max_blocks)); -} - - nvme_config_write_zeroes(disk, ns); + blk_queue_max_write_zeroes_sectors(disk->queue, + ns->ctrl->max_zeroes_sectors); +static inline u32 nvme_mps_to_sectors(struct nvme_ctrl *ctrl, u32 units) +{ + u32 page_shift = nvme_cap_mpsmin(ctrl->cap) + 12; + + return 1 << (units + page_shift - 9); +} + +static int nvme_init_non_mdts_limits(struct nvme_ctrl *ctrl) +{ + struct nvme_command c = { }; + struct nvme_id_ctrl_nvm *id; + int ret; + + if (ctrl->oncs & nvme_ctrl_oncs_dsm) { + ctrl->max_discard_sectors = uint_max; + ctrl->max_discard_segments = nvme_dsm_max_ranges; + } else { + ctrl->max_discard_sectors = 0; + ctrl->max_discard_segments = 0; + } + + /* + * even though nvme spec explicitly states that mdts is not applicable + * to the write-zeroes, we are cautious and limit the size to the + * controllers max_hw_sectors value, which is based on the mdts field + * and possibly other limiting factors. 
+ */ + if ((ctrl->oncs & nvme_ctrl_oncs_write_zeroes) && + !(ctrl->quirks & nvme_quirk_disable_write_zeroes)) + ctrl->max_zeroes_sectors = ctrl->max_hw_sectors; + else + ctrl->max_zeroes_sectors = 0; + + if (nvme_ctrl_limited_cns(ctrl)) + return 0; + + id = kzalloc(sizeof(*id), gfp_kernel); + if (!id) + return 0; + + c.identify.opcode = nvme_admin_identify; + c.identify.cns = nvme_id_cns_cs_ctrl; + c.identify.csi = nvme_csi_nvm; + + ret = nvme_submit_sync_cmd(ctrl->admin_q, &c, id, sizeof(*id)); + if (ret) + goto free_data; + + if (id->dmrl) + ctrl->max_discard_segments = id->dmrl; + if (id->dmrsl) + ctrl->max_discard_sectors = le32_to_cpu(id->dmrsl); + if (id->wzsl) + ctrl->max_zeroes_sectors = nvme_mps_to_sectors(ctrl, id->wzsl); + +free_data: + kfree(id); + return ret; +} + - int ret, page_shift; - - page_shift = nvme_cap_mpsmin(ctrl->cap) + 12; + int ret; - max_hw_sectors = 1 << (id->mdts + page_shift - 9); + max_hw_sectors = nvme_mps_to_sectors(ctrl, id->mdts); + ret = nvme_init_non_mdts_limits(ctrl); + if (ret < 0) + return ret; + + build_bug_on(sizeof(struct nvme_id_ctrl_nvm) != nvme_identify_data_size); diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h --- a/drivers/nvme/host/nvme.h +++ b/drivers/nvme/host/nvme.h + u32 max_discard_sectors; + u32 max_discard_segments; + u32 max_zeroes_sectors; diff --git a/include/linux/nvme.h b/include/linux/nvme.h --- a/include/linux/nvme.h +++ b/include/linux/nvme.h +struct nvme_id_ctrl_nvm { + __u8 vsl; + __u8 wzsl; + __u8 wusl; + __u8 dmrl; + __le32 dmrsl; + __le64 dmsl; + __u8 rsvd16[4080]; +}; +
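the mdts/wzsl/dmrsl-style values in the diff above are power-of-two multiples of the controller's minimum memory page size (cap.mpsmin); a standalone userspace sketch of the conversion performed by the new nvme_mps_to_sectors() helper (function and parameter names chosen for this sketch, logic mirrored from the diff):

```c
#include <stdint.h>

/* cap.mpsmin encodes the minimum page size as 2^(12 + mpsmin) bytes,
 * and fields such as mdts or wzsl give a size as 2^units of that page
 * size. converting to 512-byte sectors therefore shifts left by
 * units + (12 + mpsmin) - 9, as the helper in the diff does. */
static uint32_t mps_to_sectors(uint32_t mpsmin, uint32_t units)
{
    uint32_t page_shift = mpsmin + 12; /* log2 of page size in bytes */

    return 1u << (units + page_shift - 9);
}
```

for example, with a 4 KiB minimum page (mpsmin = 0), units = 0 is one page, i.e. eight 512-byte sectors.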
|
Storage
|
5befc7c26e5a98cd49789fb1beb52c62bd472dba
|
keith busch
|
drivers
|
nvme
|
host
|
nvme: introduce generic per-namespace chardev
|
userspace has not been allowed to do i/o to a device that failed to be initialized. this patch introduces a generic per-namespace character device to allow userspace to do i/o regardless of whether the block device is there or not.
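the generic node is named ng<ctrl>n<ns>, parallel to the block node nvme<ctrl>n<ns>; a small sketch of the naming scheme used by the dev_set_name() calls in the diff (the helper name and buffer size are assumptions of this sketch):

```c
#include <stdio.h>
#include <string.h>

/* build the generic chardev node name for a controller instance and a
 * namespace head instance, matching the "ng%dn%d" format in the diff. */
static int generic_ns_name(char *buf, size_t len, int ctrl_instance,
                           int head_instance)
{
    return snprintf(buf, len, "ng%dn%d", ctrl_instance, head_instance);
}
```

so the first namespace of controller 0 appears as /dev/ng0n1, usable even when the block device could not be brought up.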
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi-shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
introduce generic per-namespace chardev
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['nvme ']
|
['h', 'c']
| 4
| 180
| 9
|
- /dev/ngxny --- diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c +static define_ida(nvme_ns_chr_minor_ida); +static dev_t nvme_ns_chr_devt; +static struct class *nvme_ns_chr_class; + +void nvme_cdev_del(struct cdev *cdev, struct device *cdev_device) +{ + cdev_device_del(cdev, cdev_device); + ida_simple_remove(&nvme_ns_chr_minor_ida, minor(cdev_device->devt)); +} + +int nvme_cdev_add(struct cdev *cdev, struct device *cdev_device, + const struct file_operations *fops, struct module *owner) +{ + int minor, ret; + + minor = ida_simple_get(&nvme_ns_chr_minor_ida, 0, 0, gfp_kernel); + if (minor < 0) + return minor; + cdev_device->devt = mkdev(major(nvme_ns_chr_devt), minor); + cdev_device->class = nvme_ns_chr_class; + device_initialize(cdev_device); + cdev_init(cdev, fops); + cdev->owner = owner; + ret = cdev_device_add(cdev, cdev_device); + if (ret) + ida_simple_remove(&nvme_ns_chr_minor_ida, minor); + return ret; +} + +static int nvme_ns_chr_open(struct inode *inode, struct file *file) +{ + return nvme_ns_open(container_of(inode->i_cdev, struct nvme_ns, cdev)); +} + +static int nvme_ns_chr_release(struct inode *inode, struct file *file) +{ + nvme_ns_release(container_of(inode->i_cdev, struct nvme_ns, cdev)); + return 0; +} + +static const struct file_operations nvme_ns_chr_fops = { + .owner = this_module, + .open = nvme_ns_chr_open, + .release = nvme_ns_chr_release, + .unlocked_ioctl = nvme_ns_chr_ioctl, + .compat_ioctl = compat_ptr_ioctl, +}; + +static int nvme_add_ns_cdev(struct nvme_ns *ns) +{ + int ret; + + ns->cdev_device.parent = ns->ctrl->device; + ret = dev_set_name(&ns->cdev_device, "ng%dn%d", + ns->ctrl->instance, ns->head->instance); + if (ret) + return ret; + ret = nvme_cdev_add(&ns->cdev, &ns->cdev_device, &nvme_ns_chr_fops, + ns->ctrl->ops->module); + if (ret) + kfree_const(ns->cdev_device.kobj.name); + return ret; +} + + if (!nvme_ns_head_multipath(ns->head)) + 
nvme_add_ns_cdev(ns); + if (!nvme_ns_head_multipath(ns->head)) + nvme_cdev_del(&ns->cdev, &ns->cdev_device); + + result = alloc_chrdev_region(&nvme_ns_chr_devt, 0, nvme_minors, + "nvme-generic"); + if (result < 0) + goto destroy_subsys_class; + + nvme_ns_chr_class = class_create(this_module, "nvme-generic"); + if (is_err(nvme_ns_chr_class)) { + result = ptr_err(nvme_ns_chr_class); + goto unregister_generic_ns; + } + +unregister_generic_ns: + unregister_chrdev_region(nvme_ns_chr_devt, nvme_minors); +destroy_subsys_class: + class_destroy(nvme_subsys_class); + class_destroy(nvme_ns_chr_class); + unregister_chrdev_region(nvme_ns_chr_devt, nvme_minors); + ida_destroy(&nvme_ns_chr_minor_ida); diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c --- a/drivers/nvme/host/ioctl.c +++ b/drivers/nvme/host/ioctl.c +static int __nvme_ioctl(struct nvme_ns *ns, unsigned int cmd, void __user *arg) +{ + if (is_ctrl_ioctl(cmd)) + return nvme_ctrl_ioctl(ns->ctrl, cmd, arg); + return nvme_ns_ioctl(ns, cmd, arg); +} + - void __user *argp = (void __user *)arg; - if (is_ctrl_ioctl(cmd)) - return nvme_ctrl_ioctl(ns->ctrl, cmd, argp); - return nvme_ns_ioctl(ns, cmd, argp); + return __nvme_ioctl(ns, cmd, (void __user *)arg); +} + +long nvme_ns_chr_ioctl(struct file *file, unsigned int cmd, unsigned long arg) +{ + struct nvme_ns *ns = + container_of(file_inode(file)->i_cdev, struct nvme_ns, cdev); + + return __nvme_ioctl(ns, cmd, (void __user *)arg); + void __user *argp = (void __user *)arg; + + if (is_ctrl_ioctl(cmd)) + return nvme_ns_head_ctrl_ioctl(head, cmd, argp); + return nvme_ns_head_ns_ioctl(head, cmd, argp); +} + +long nvme_ns_head_chr_ioctl(struct file *file, unsigned int cmd, + unsigned long arg) +{ + struct cdev *cdev = file_inode(file)->i_cdev; + struct nvme_ns_head *head = + container_of(cdev, struct nvme_ns_head, cdev); + void __user *argp = (void __user *)arg; - return nvme_ns_head_ctrl_ioctl(head, cmd, (void __user *)arg); - return nvme_ns_head_ns_ioctl(head, 
cmd, (void __user *)arg); + return nvme_ns_head_ctrl_ioctl(head, cmd, argp); + return nvme_ns_head_ns_ioctl(head, cmd, argp); diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c --- a/drivers/nvme/host/multipath.c +++ b/drivers/nvme/host/multipath.c +static inline struct nvme_ns_head *cdev_to_ns_head(struct cdev *cdev) +{ + return container_of(cdev, struct nvme_ns_head, cdev); +} + +static int nvme_ns_head_chr_open(struct inode *inode, struct file *file) +{ + if (!nvme_tryget_ns_head(cdev_to_ns_head(inode->i_cdev))) + return -enxio; + return 0; +} + +static int nvme_ns_head_chr_release(struct inode *inode, struct file *file) +{ + nvme_put_ns_head(cdev_to_ns_head(inode->i_cdev)); + return 0; +} + +static const struct file_operations nvme_ns_head_chr_fops = { + .owner = this_module, + .open = nvme_ns_head_chr_open, + .release = nvme_ns_head_chr_release, + .unlocked_ioctl = nvme_ns_head_chr_ioctl, + .compat_ioctl = compat_ptr_ioctl, +}; + +static int nvme_add_ns_head_cdev(struct nvme_ns_head *head) +{ + int ret; + + head->cdev_device.parent = &head->subsys->dev; + ret = dev_set_name(&head->cdev_device, "ng%dn%d", + head->subsys->instance, head->instance); + if (ret) + return ret; + ret = nvme_cdev_add(&head->cdev, &head->cdev_device, + &nvme_ns_head_chr_fops, this_module); + if (ret) + kfree_const(head->cdev_device.kobj.name); + return ret; +} + - if (!test_and_set_bit(nvme_nshead_disk_live, &head->flags)) + if (!test_and_set_bit(nvme_nshead_disk_live, &head->flags)) { + nvme_add_ns_head_cdev(head); + } - if (head->disk->flags & genhd_fl_up) + if (head->disk->flags & genhd_fl_up) { + nvme_cdev_del(&head->cdev, &head->cdev_device); + } - diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h --- a/drivers/nvme/host/nvme.h +++ b/drivers/nvme/host/nvme.h + + struct cdev cdev; + struct device cdev_device; + + struct cdev cdev; + struct device cdev_device; + +int nvme_cdev_add(struct cdev *cdev, struct device *cdev_device, + const struct 
file_operations *fops, struct module *owner); +void nvme_cdev_del(struct cdev *cdev, struct device *cdev_device); +long nvme_ns_chr_ioctl(struct file *file, unsigned int cmd, unsigned long arg); +long nvme_ns_head_chr_ioctl(struct file *file, unsigned int cmd, + unsigned long arg);
|
Storage
|
2637baed78010eeaae274feb5b99ce90933fadfb
|
minwoo im
|
drivers
|
nvme
|
host
|
scsi: core: add mq_poll support to scsi layer
|
currently iopoll support is only available in the block layer. this patch adds mq_poll support to the scsi layer.
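the hook is optional for low-level drivers: the dispatcher reports zero completions when a driver does not provide it. a hypothetical miniature of that shape (struct and function names invented for this sketch, logic mirrored from scsi_mq_poll() in the diff):

```c
#include <stddef.h>

/* miniature of the scsi host template: mq_poll is an optional
 * callback supplied by the low-level driver. */
struct host_template {
    int (*mq_poll)(unsigned int queue_num);
};

/* the midlayer dispatcher: forward to the driver if it polls,
 * otherwise report zero reaped completions. */
static int dispatch_poll(const struct host_template *hostt,
                         unsigned int queue_num)
{
    if (hostt->mq_poll)
        return hostt->mq_poll(queue_num);
    return 0;
}

/* stand-in driver callback pretending three completions were found */
static int fake_poll(unsigned int queue_num)
{
    (void)queue_num;
    return 3;
}
```

this keeps the midlayer change safe for the many drivers that never set the callback.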
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi-shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add mq_poll support to scsi layer
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['scsi ']
|
['h', 'c']
| 3
| 28
| 0
|
--- diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c --- a/drivers/scsi/scsi_lib.c +++ b/drivers/scsi/scsi_lib.c + +static int scsi_mq_poll(struct blk_mq_hw_ctx *hctx) +{ + struct request_queue *q = hctx->queue; + struct scsi_device *sdev = q->queuedata; + struct scsi_host *shost = sdev->host; + + if (shost->hostt->mq_poll) + return shost->hostt->mq_poll(shost, hctx->queue_num); + + return 0; +} + + .poll = scsi_mq_poll, + .poll = scsi_mq_poll, + tag_set->nr_maps = shost->nr_maps ? : 1; diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h --- a/include/scsi/scsi_cmnd.h +++ b/include/scsi/scsi_cmnd.h +#include <scsi/scsi_host.h> diff --git a/include/scsi/scsi_host.h b/include/scsi/scsi_host.h --- a/include/scsi/scsi_host.h +++ b/include/scsi/scsi_host.h + /* + * scsi interface of blk_poll - poll for io completions. + * only applicable if scsi lld exposes multiple h/w queues. + * + * return value: number of completed entries found. + * + * status: optional + */ + int (* mq_poll)(struct scsi_host *shost, unsigned int queue_num); + + unsigned nr_maps;
|
Storage
|
af1830956dc3dca0c87b2d679f7c91a8fe0331e1
|
kashyap desai hannes reinecke john garry
|
drivers
|
scsi
| |
scsi: megaraid_sas: mq_poll support
|
implement mq_poll interface support in megaraid_sas. this feature requires shared host tag support in the kernel and the driver.
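the diff lays out hardware contexts in two maps: a default map over the irq-backed msix vectors after the reserved low-latency range, then a poll map stacked behind it. a standalone sketch of that offset arithmetic (struct and names are illustrative, mirroring megasas_map_queues() above):

```c
/* minimal stand-in for struct blk_mq_queue_map */
struct queue_map {
    unsigned int nr_queues;
    unsigned int queue_offset;
};

/* default hctxs take the irq vectors past the reserved range;
 * poll hctxs (which have no irq) are placed right after them. */
static void map_queues(struct queue_map *def, struct queue_map *poll,
                       unsigned int msix_vectors,
                       unsigned int low_latency_index_start,
                       unsigned int iopoll_q_count)
{
    unsigned int qoff = 0;

    def->nr_queues = msix_vectors - low_latency_index_start;
    def->queue_offset = 0;
    qoff += def->nr_queues;

    poll->nr_queues = iopoll_q_count;
    poll->queue_offset = qoff; /* poll hctxs follow the default ones */
}
```

with 8 vectors, one reserved low-latency vector and 2 poll queues, the default map covers hctx 0..6 and the poll map hctx 7..8.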
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi-shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
mq_poll support
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['scsi ', 'megaraid_sas']
|
['h', 'c']
| 4
| 123
| 11
|
--- diff --git a/drivers/scsi/megaraid/megaraid_sas.h b/drivers/scsi/megaraid/megaraid_sas.h --- a/drivers/scsi/megaraid/megaraid_sas.h +++ b/drivers/scsi/megaraid/megaraid_sas.h + atomic_t in_used; + int iopoll_q_count; +int megasas_blk_mq_poll(struct scsi_host *shost, unsigned int queue_num); diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c --- a/drivers/scsi/megaraid/megaraid_sas_base.c +++ b/drivers/scsi/megaraid/megaraid_sas_base.c +int poll_queues; +module_param(poll_queues, int, 0444); +module_parm_desc(poll_queues, "number of queues to be use for io_uring poll mode. " + "this parameter is effective only if host_tagset_enable=1 & " + "it is not applicable for mfi_series. & " + "driver will work in latency mode. & " + "high iops queues are not allocated & " + ); + +extern int megasas_blk_mq_poll(struct scsi_host *shost, unsigned int queue_num); + int qoff = 0, offset; + struct blk_mq_queue_map *map; - return blk_mq_pci_map_queues(&shost->tag_set.map[hctx_type_default], - instance->pdev, instance->low_latency_index_start); + offset = instance->low_latency_index_start; + + /* setup default hctx */ + map = &shost->tag_set.map[hctx_type_default]; + map->nr_queues = instance->msix_vectors - offset; + map->queue_offset = 0; + blk_mq_pci_map_queues(map, instance->pdev, offset); + qoff += map->nr_queues; + offset += map->nr_queues; + + /* setup poll hctx */ + map = &shost->tag_set.map[hctx_type_poll]; + map->nr_queues = instance->iopoll_q_count; + if (map->nr_queues) { + /* + * the poll queue(s) doesn't have an irq (and hence irq + * affinity), so use the regular blk-mq cpu mapping + */ + map->queue_offset = qoff; + blk_mq_map_queues(map); + } + + return 0; + .mq_poll = megasas_blk_mq_poll, - irq_flags |= pci_irq_affinity; + irq_flags |= pci_irq_affinity | pci_irq_all_types; + /* do not allocate msix vectors for poll_queues. + * msix_vectors is always within a range of fw supported reply queue. 
+ */ - instance->msix_vectors, irq_flags, descp); + instance->msix_vectors - instance->iopoll_q_count, irq_flags, descp); + instance->iopoll_q_count = 0; + if ((instance->adapter_type != mfi_series) && + poll_queues) { + + instance->perf_mode = mr_latency_perf_mode; + instance->low_latency_index_start = 1; + + /* reserve for default and non-mananged pre-vector. */ + if (instance->msix_vectors > (poll_queues + 2)) + instance->iopoll_q_count = poll_queues; + else + instance->iopoll_q_count = 0; + + num_msix_req = num_online_cpus() + instance->low_latency_index_start; + instance->msix_vectors = min(num_msix_req, + instance->msix_vectors); + + } + - if ((instance->perf_mode == mr_balanced_perf_mode) && - (i != instance->msix_vectors)) { + if (((instance->perf_mode == mr_balanced_perf_mode) + || instance->iopoll_q_count) && + (i != (instance->msix_vectors - instance->iopoll_q_count))) { + instance->iopoll_q_count = 0; - "requested/available msix %d/%d ", instance->msix_vectors, i); + "requested/available msix %d/%d poll_queue %d ", + instance->msix_vectors - instance->iopoll_q_count, + i, instance->iopoll_q_count); - instance->low_latency_index_start; + instance->low_latency_index_start + instance->iopoll_q_count; + if (instance->iopoll_q_count) + host->nr_maps = 3; + } else { + instance->iopoll_q_count = 0; - "max firmware commands: %d shared with nr_hw_queues = %d ", - instance->max_fw_cmds, host->nr_hw_queues); + "max firmware commands: %d shared with default " + "hw_queues = %d poll_queues %d ", instance->max_fw_cmds, + host->nr_hw_queues - instance->iopoll_q_count, + instance->iopoll_q_count); + poll_queues = 0; diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c --- a/drivers/scsi/megaraid/megaraid_sas_fusion.c +++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c + count += instance->iopoll_q_count; + + msix_count += instance->iopoll_q_count; - iocinitmessage->hostmsixvectors = instance->msix_vectors; + 
iocinitmessage->hostmsixvectors = instance->msix_vectors + instance->iopoll_q_count; + count += instance->iopoll_q_count; + + for (i = 0; i < max_msix_queues_fusion; i++) + atomic_set(&fusion->busy_mq_poll[i], 0); + + if (irq_context && !atomic_add_unless(&irq_context->in_used, 1, 1)) + return 0; + + atomic_dec(&irq_context->in_used); + + if (irq_context) + atomic_dec(&irq_context->in_used); + +int megasas_blk_mq_poll(struct scsi_host *shost, unsigned int queue_num) +{ + + struct megasas_instance *instance; + int num_entries = 0; + struct fusion_context *fusion; + + instance = (struct megasas_instance *)shost->hostdata; + + fusion = instance->ctrl_context; + + queue_num = queue_num + instance->low_latency_index_start; + + if (!atomic_add_unless(&fusion->busy_mq_poll[queue_num], 1, 1)) + return 0; + + num_entries = complete_cmd_fusion(instance, queue_num, null); + atomic_dec(&fusion->busy_mq_poll[queue_num]); + + return num_entries; +} + + count += instance->iopoll_q_count; + diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.h b/drivers/scsi/megaraid/megaraid_sas_fusion.h --- a/drivers/scsi/megaraid/megaraid_sas_fusion.h +++ b/drivers/scsi/megaraid/megaraid_sas_fusion.h + atomic_t busy_mq_poll[max_msix_queues_fusion]; +
|
Storage
|
9e4bec5b2a230066a0dc9f79f24b4c1bcb668c5a
|
kashyap desai
|
drivers
|
scsi
|
megaraid
|
scsi: pm80xx: add sysfs attribute to check mpi state
|
a new sysfs variable 'ctl_mpi_state' is being introduced to check the state of mpi.
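the attribute decodes the low two bits of general status table dword 0 into a state string; a standalone sketch of that decode (strings copied from the diff, function names chosen for this sketch):

```c
/* mpi state strings, indexed by bits [1:0] of gst dword 0 */
static const char *const mpi_state_text[] = {
    "mpi is not initialized",
    "mpi is successfully initialized",
    "mpi termination is in progress",
    "mpi initialization failed with error in [31:16]",
};

/* mask off everything but the two state bits and index the table */
static const char *mpi_state(unsigned int gst_dw0)
{
    return mpi_state_text[gst_dw0 & 0x0003];
}
```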
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi-shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add sysfs attribute to check mpi state
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['scsi ', 'pm80xx']
|
['c']
| 1
| 31
| 1
|
--- diff --git a/drivers/scsi/pm8001/pm8001_ctl.c b/drivers/scsi/pm8001/pm8001_ctl.c --- a/drivers/scsi/pm8001/pm8001_ctl.c +++ b/drivers/scsi/pm8001/pm8001_ctl.c +#include "pm8001_chips.h" - + +/** + * ctl_mpi_state_show - controller mpi state check + * @cdev: pointer to embedded class device + * @buf: the buffer returned + * + * a sysfs 'read-only' shost attribute. + */ + +static const char *const mpistatetext[] = { + "mpi is not initialized", + "mpi is successfully initialized", + "mpi termination is in progress", + "mpi initialization failed with error in [31:16]" +}; + +static ssize_t ctl_mpi_state_show(struct device *cdev, + struct device_attribute *attr, char *buf) +{ + struct scsi_host *shost = class_to_shost(cdev); + struct sas_ha_struct *sha = shost_to_sas_ha(shost); + struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; + unsigned int mpidw0; + + mpidw0 = pm8001_mr32(pm8001_ha->general_stat_tbl_addr, 0); + return sysfs_emit(buf, "%s ", mpistatetext[mpidw0 & 0x0003]); +} +static device_attr_ro(ctl_mpi_state); + + &dev_attr_ctl_mpi_state,
|
Storage
|
4ddbea1b6f51a2ac07c4b80b3c3f50ea37367828
|
vishakha channapattan
|
drivers
|
scsi
|
pm8001
|
scsi: pm80xx: add sysfs attribute to check controller hmi error
|
a new sysfs variable 'ctl_hmi_error' is being introduced to give the error details if the mpi initialization fails.
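per the mpi state table, the failure code lives in bits [31:16] of general status table dword 0, and the attribute simply reports that upper half-word. a minimal sketch of the extraction (function name chosen for this sketch):

```c
#include <stdint.h>

/* the new attribute prints gst dword 0 shifted right by 16, i.e. the
 * error code the firmware left in the upper half-word on failure. */
static uint16_t hmi_error(uint32_t gst_dw0)
{
    return (uint16_t)(gst_dw0 >> 16);
}
```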
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi-shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add sysfs attribute to check controller hmi error
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['scsi ', 'pm80xx']
|
['c']
| 1
| 22
| 0
|
--- diff --git a/drivers/scsi/pm8001/pm8001_ctl.c b/drivers/scsi/pm8001/pm8001_ctl.c --- a/drivers/scsi/pm8001/pm8001_ctl.c +++ b/drivers/scsi/pm8001/pm8001_ctl.c +/** + * ctl_hmi_error_show - controller mpi initialization fails + * @cdev: pointer to embedded class device + * @buf: the buffer returned + * + * a sysfs 'read-only' shost attribute. + */ + +static ssize_t ctl_hmi_error_show(struct device *cdev, + struct device_attribute *attr, char *buf) +{ + struct scsi_host *shost = class_to_shost(cdev); + struct sas_ha_struct *sha = shost_to_sas_ha(shost); + struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; + unsigned int mpidw0; + + mpidw0 = pm8001_mr32(pm8001_ha->general_stat_tbl_addr, 0); + return sysfs_emit(buf, "0x%08x ", (mpidw0 >> 16)); +} +static device_attr_ro(ctl_hmi_error); + + &dev_attr_ctl_hmi_error,
|
Storage
|
a4c55e16c50022825966864cf1f08b9efa3ebb86
|
vishakha channapattan
|
drivers
|
scsi
|
pm8001
|
scsi: pm80xx: add sysfs attribute to track raae count
|
a new sysfs variable 'ctl_raae_count' is being introduced that tells whether the controller is alive by exposing controller ticks. if, on a subsequent read, the raae tick count has changed, the controller is not dead.
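the attribute only exposes a raw tick counter; the liveness inference is made by comparing two reads taken some time apart. a sketch of that comparison (the register read itself is left out, names chosen for this sketch):

```c
#include <stdint.h>
#include <stdbool.h>

/* two samples of the raae tick counter taken at different times:
 * any change -- including a wraparound -- means the firmware ran
 * in between, so the processor is alive. */
static bool controller_alive(uint32_t earlier_ticks, uint32_t later_ticks)
{
    return earlier_ticks != later_ticks;
}
```

the same comparison applies to the iop0/iop1 counters introduced by the following commits.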
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi-shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add sysfs attribute to track raae count
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['scsi ', 'pm80xx']
|
['c']
| 1
| 22
| 0
|
--- diff --git a/drivers/scsi/pm8001/pm8001_ctl.c b/drivers/scsi/pm8001/pm8001_ctl.c --- a/drivers/scsi/pm8001/pm8001_ctl.c +++ b/drivers/scsi/pm8001/pm8001_ctl.c +/** + * ctl_raae_count_show - controller raae count check + * @cdev: pointer to embedded class device + * @buf: the buffer returned + * + * a sysfs 'read-only' shost attribute. + */ + +static ssize_t ctl_raae_count_show(struct device *cdev, + struct device_attribute *attr, char *buf) +{ + struct scsi_host *shost = class_to_shost(cdev); + struct sas_ha_struct *sha = shost_to_sas_ha(shost); + struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; + unsigned int raaecnt; + + raaecnt = pm8001_mr32(pm8001_ha->general_stat_tbl_addr, 12); + return sysfs_emit(buf, "0x%08x ", raaecnt); +} +static device_attr_ro(ctl_raae_count); + + &dev_attr_ctl_raae_count,
|
Storage
|
dd49ded8aa432e2877e8b8bafcc00898c20ca381
|
vishakha channapattan
|
drivers
|
scsi
|
pm8001
|
scsi: pm80xx: add sysfs attribute to track iop0 count
|
a new sysfs variable 'ctl_iop0_count' is being introduced that tells whether the controller is alive by exposing controller ticks. if, on a subsequent read, the ticks have changed, the controller is not dead.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi-shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add sysfs attribute to track iop0 count
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['scsi ', 'pm80xx']
|
['c']
| 1
| 22
| 0
|
--- diff --git a/drivers/scsi/pm8001/pm8001_ctl.c b/drivers/scsi/pm8001/pm8001_ctl.c --- a/drivers/scsi/pm8001/pm8001_ctl.c +++ b/drivers/scsi/pm8001/pm8001_ctl.c +/** + * ctl_iop0_count_show - controller iop0 count check + * @cdev: pointer to embedded class device + * @buf: the buffer returned + * + * a sysfs 'read-only' shost attribute. + */ + +static ssize_t ctl_iop0_count_show(struct device *cdev, + struct device_attribute *attr, char *buf) +{ + struct scsi_host *shost = class_to_shost(cdev); + struct sas_ha_struct *sha = shost_to_sas_ha(shost); + struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; + unsigned int iop0cnt; + + iop0cnt = pm8001_mr32(pm8001_ha->general_stat_tbl_addr, 16); + return sysfs_emit(buf, "0x%08x ", iop0cnt); +} +static device_attr_ro(ctl_iop0_count); + + &dev_attr_ctl_iop0_count,
|
Storage
|
0602624ace23afddb92ec842fc602df04fad97c0
|
vishakha channapattan
|
drivers
|
scsi
|
pm8001
|
scsi: pm80xx: add sysfs attribute to track iop1 count
|
a new sysfs attribute 'ctl_iop1_count' is introduced that exposes the controller tick count as a liveness indicator. if the tick count changes between successive reads, the controller is alive; a static value indicates the controller is dead.
|
this release includes the landlock security module, which aims to make easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tbl flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add sysfs attribute to track iop1 count
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['scsi ', 'pm80xx']
|
['c']
| 1
| 23
| 0
|
--- diff --git a/drivers/scsi/pm8001/pm8001_ctl.c b/drivers/scsi/pm8001/pm8001_ctl.c --- a/drivers/scsi/pm8001/pm8001_ctl.c +++ b/drivers/scsi/pm8001/pm8001_ctl.c +/** + * ctl_iop1_count_show - controller iop1 count check + * @cdev: pointer to embedded class device + * @buf: the buffer returned + * + * a sysfs 'read-only' shost attribute. + */ + +static ssize_t ctl_iop1_count_show(struct device *cdev, + struct device_attribute *attr, char *buf) +{ + struct scsi_host *shost = class_to_shost(cdev); + struct sas_ha_struct *sha = shost_to_sas_ha(shost); + struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; + unsigned int iop1cnt; + + iop1cnt = pm8001_mr32(pm8001_ha->general_stat_tbl_addr, 20); + return sysfs_emit(buf, "0x%08x ", iop1cnt); + +} +static device_attr_ro(ctl_iop1_count); + + &dev_attr_ctl_iop1_count,
|
Storage
|
b0c306e6216749378ce43f2c5ac4f17bb5ba35ff
|
vishakha channapattan
|
drivers
|
scsi
|
pm8001
|
scsi: qedf: enable devlink support
|
the devlink instance's lifetime was tied to the qed_dev object, which caused devlink to be recreated on each recovery. register and unregister devlink only outside recovery mode so the instance survives recoveries.
|
this release includes the landlock security module, which aims to make easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tbl flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
enable devlink support
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['scsi ', 'qedf']
|
['h', 'c']
| 2
| 14
| 0
|
--- diff --git a/drivers/scsi/qedf/qedf.h b/drivers/scsi/qedf/qedf.h --- a/drivers/scsi/qedf/qedf.h +++ b/drivers/scsi/qedf/qedf.h + struct devlink *devlink; diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c --- a/drivers/scsi/qedf/qedf_main.c +++ b/drivers/scsi/qedf/qedf_main.c + if (mode != qedf_mode_recovery) { + qedf->devlink = qed_ops->common->devlink_register(qedf->cdev); + if (is_err(qedf->devlink)) { + qedf_err(&qedf->dbg_ctx, "cannot register devlink "); + qedf->devlink = null; + } + } + + if (mode != qedf_mode_recovery && qedf->devlink) { + qed_ops->common->devlink_unregister(qedf->devlink); + qedf->devlink = null; + } +
|
Storage
|
4aab946f789ed7c2e44481f395ab2eab0b63824a
|
javed hasan
|
drivers
|
scsi
|
qedf
|
scsi: qla2xxx: add marginal path handling support
|
add support for eh_should_retry_cmd callback in qla2xxx host template.
|
this release includes the landlock security module, which aims to make easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tbl flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add marginal path handling support
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['scsi ', 'qla2xxx']
|
['c']
| 1
| 1
| 0
|
--- diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c --- a/drivers/scsi/qla2xxx/qla_os.c +++ b/drivers/scsi/qla2xxx/qla_os.c + .eh_should_retry_cmd = fc_eh_should_retry_cmd,
|
Storage
|
000e68faefe6240ea2e4c98b606c594b20974fb7
|
bikash hazarika himanshu madhani himanshu madhani oracle com
|
drivers
|
scsi
|
qla2xxx
|
scsi: scsi_debug: mq_poll support
|
add support for the mq_poll interface to scsi_debug. this feature requires shared host tag support in both the kernel and the driver.
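the diff trims the user-supplied poll_queues module parameter so that polling is only used when more than one hardware queue exists and at least one queue remains for non-polled i/o. the trimming rules can be sketched as a small free-standing helper (the function name is illustrative, not from the driver):

```c
#include <assert.h>

/* mirror of the scsi_debug poll_queues clamping: polling needs more than
 * one hw queue, and at least one queue must stay non-polled. */
static int clamp_poll_queues(int nr_hw_queues, int submit_queues, int poll_queues)
{
    if (nr_hw_queues == 1 || poll_queues < 1)
        return 0;               /* no polling possible or requested */
    if (poll_queues >= submit_queues)
        return 1;               /* keep at least one queue for non-polled i/o */
    return poll_queues;         /* request is already within bounds */
}
```

with a valid value, the driver then maps submit_queues - poll_queues queues to hctx_type_default and the remainder to hctx_type_poll.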
|
this release includes the landlock security module, which aims to make easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tbl flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
mq_poll support
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['scsi ', 'scsi_debug']
|
['c']
| 1
| 130
| 0
|
--- diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c --- a/drivers/scsi/scsi_debug.c +++ b/drivers/scsi/scsi_debug.c +static int poll_queues; /* iouring iopoll interface.*/ + + /* do not complete io from default completion path. + * let it to be on queue. + * completion should happen from mq_poll interface. + */ + if ((sqp - sdebug_q_arr) >= (submit_queues - poll_queues)) + return 0; + +module_param_named(poll_queues, poll_queues, int, s_irugo); +module_parm_desc(poll_queues, "support for iouring iopoll queues (1 to max(submit_queues - 1)"); +static int sdebug_map_queues(struct scsi_host *shost) +{ + int i, qoff; + + if (shost->nr_hw_queues == 1) + return 0; + + for (i = 0, qoff = 0; i < hctx_max_types; i++) { + struct blk_mq_queue_map *map = &shost->tag_set.map[i]; + + map->nr_queues = 0; + + if (i == hctx_type_default) + map->nr_queues = submit_queues - poll_queues; + else if (i == hctx_type_poll) + map->nr_queues = poll_queues; + + if (!map->nr_queues) { + bug_on(i == hctx_type_default); + continue; + } + + map->queue_offset = qoff; + blk_mq_map_queues(map); + + qoff += map->nr_queues; + } + + return 0; + +} + +static int sdebug_blk_mq_poll(struct scsi_host *shost, unsigned int queue_num) +{ + int qc_idx; + int retiring = 0; + unsigned long iflags; + struct sdebug_queue *sqp; + struct sdebug_queued_cmd *sqcp; + struct scsi_cmnd *scp; + struct sdebug_dev_info *devip; + int num_entries = 0; + + sqp = sdebug_q_arr + queue_num; + + do { + spin_lock_irqsave(&sqp->qc_lock, iflags); + qc_idx = find_first_bit(sqp->in_use_bm, sdebug_max_queue); + if (unlikely((qc_idx < 0) || (qc_idx >= sdebug_max_queue))) + goto out; + + sqcp = &sqp->qc_arr[qc_idx]; + scp = sqcp->a_cmnd; + if (unlikely(scp == null)) { + pr_err("scp is null, queue_num=%d, qc_idx=%d from %s ", + queue_num, qc_idx, __func__); + goto out; + } + devip = (struct sdebug_dev_info *)scp->device->hostdata; + if (likely(devip)) + atomic_dec(&devip->num_in_q); + else + pr_err("devip=null from %s ", 
__func__); + if (unlikely(atomic_read(&retired_max_queue) > 0)) + retiring = 1; + + sqcp->a_cmnd = null; + if (unlikely(!test_and_clear_bit(qc_idx, sqp->in_use_bm))) { + pr_err("unexpected completion sqp %p queue_num=%d qc_idx=%d from %s ", + sqp, queue_num, qc_idx, __func__); + goto out; + } + + if (unlikely(retiring)) { /* user has reduced max_queue */ + int k, retval; + + retval = atomic_read(&retired_max_queue); + if (qc_idx >= retval) { + pr_err("index %d too large ", retval); + goto out; + } + k = find_last_bit(sqp->in_use_bm, retval); + if ((k < sdebug_max_queue) || (k == retval)) + atomic_set(&retired_max_queue, 0); + else + atomic_set(&retired_max_queue, k + 1); + } + spin_unlock_irqrestore(&sqp->qc_lock, iflags); + scp->scsi_done(scp); /* callback to mid level */ + num_entries++; + } while (1); + +out: + spin_unlock_irqrestore(&sqp->qc_lock, iflags); + return num_entries; +} + + + .map_queues = sdebug_map_queues, + .mq_poll = sdebug_blk_mq_poll, + /* poll queues are possible for nr_hw_queues > 1 */ + if (hpnt->nr_hw_queues == 1 || (poll_queues < 1)) { + pr_warn("%s: trim poll_queues to 0. poll_q/nr_hw = (%d/%d) ", + my_name, poll_queues, hpnt->nr_hw_queues); + poll_queues = 0; + } + + /* + * poll queues don't need interrupts, but we need at least one i/o queue + * left over for non-polled i/o. + * if condition not met, trim poll_queues to 1 (just for simplicity). + */ + if (poll_queues >= submit_queues) { + pr_warn("%s: trim poll_queues to 1 ", my_name); + poll_queues = 1; + } + if (poll_queues) + hpnt->nr_maps = 3; +
|
Storage
|
c4b57d89bad8282c9f461e6b3308df160c50ff8e
|
kashyap desai douglas gilbert dgilbert interlog com hannes reinecke hare suse de douglas gilbert dgilbert interlog com
|
drivers
|
scsi
| |
scsi: smartpqi: add new pci ids
|
add support for newer hardware.
|
this release includes the landlock security module, which aims to make easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tbl flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add new pci ids
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['scsi ', 'smartpqi']
|
['c']
| 1
| 156
| 0
|
--- diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c --- a/drivers/scsi/smartpqi/smartpqi_init.c +++ b/drivers/scsi/smartpqi/smartpqi_init.c + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + 0x193d, 0x8460) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + 0x1bd4, 0x0051) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + 0x1bd4, 0x0052) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + 0x1bd4, 0x0053) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + 0x1bd4, 0x0054) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x1400) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x1402) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x1410) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x1411) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x1412) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x1420) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x1430) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x1440) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x1441) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x1450) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x1452) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x1460) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x1461) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x1462) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x1470) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + 
pci_vendor_id_adaptec2, 0x1471) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x1472) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x1480) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x1490) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x1491) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x14a0) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x14a1) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x14b0) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x14b1) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x14c0) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x14c1) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x14d0) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x14e0) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_adaptec2, 0x14f0) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + pci_vendor_id_hp, 0x1002) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + 0x1590, 0x0294) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + 0x1590, 0x02db) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + 0x1590, 0x02dc) + }, + { + pci_device_sub(pci_vendor_id_adaptec2, 0x028f, + 0x1590, 0x032e) + },
|
Storage
|
75fbeacca3ad30835e903002dba98dd909b4dfff
|
kevin barnett martin wilck mwilck suse com scott benesh scott benesh microchip com scott teel scott teel microchip com
|
drivers
|
scsi
|
smartpqi
|
scsi: smartpqi: add stream detection
|
enhance performance by adding stream detection for raid 5/raid 6 sequential write requests, reducing stripe lock contention through full-stripe write operations.
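the detection logic in the diff keeps per-lun slots that remember where the next sequential write would land, and evicts the least recently used slot on a miss. the core of that tracker can be modelled as a minimal sketch; the slot structure mirrors pqi_stream_data, but the function, the timestamp source, and the simplified return convention are illustrative.

```c
#include <assert.h>
#include <string.h>

#define NUM_STREAMS 8   /* matches num_streams_per_lun in the diff */

struct stream_slot {
    unsigned long long next_lba;  /* where a continuing stream would start */
    unsigned int last_accessed;   /* stands in for jiffies; 0 = unused */
};

/* returns 1 if this write continues a tracked sequential stream,
 * 0 if it looks random (and claims the lru slot to start tracking it). */
static int note_write(struct stream_slot s[NUM_STREAMS],
                      unsigned long long lba, unsigned int blocks,
                      unsigned int now)
{
    unsigned int oldest = ~0u;
    int lru = 0, i;

    for (i = 0; i < NUM_STREAMS; i++) {
        /* adjacent request, or request within the previous one */
        if (s[i].next_lba && lba >= s[i].next_lba &&
            lba <= s[i].next_lba + blocks) {
            s[i].next_lba = lba + blocks;
            s[i].last_accessed = now;
            return 1;
        }
        if (s[i].last_accessed == 0) {      /* unused slot wins immediately */
            lru = i;
            break;
        }
        if (s[i].last_accessed <= oldest) { /* otherwise remember the oldest */
            oldest = s[i].last_accessed;
            lru = i;
        }
    }
    s[lru].next_lba = lba + blocks;         /* claim the lru slot */
    s[lru].last_accessed = now;
    return 0;
}
```

in the driver, a detected stream on raid 5/6 causes the write to skip the raid bypass path, letting the firmware coalesce full-stripe writes.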
|
this release includes the landlock security module, which aims to make easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tbl flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add stream detection
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['scsi ', 'smartpqi']
|
['h', 'c']
| 2
| 90
| 6
|
--- diff --git a/drivers/scsi/smartpqi/smartpqi.h b/drivers/scsi/smartpqi/smartpqi.h --- a/drivers/scsi/smartpqi/smartpqi.h +++ b/drivers/scsi/smartpqi/smartpqi.h +#define num_streams_per_lun 8 + +struct pqi_stream_data { + u64 next_lba; + u32 last_accessed; +}; + + struct pqi_stream_data stream_data[num_streams_per_lun]; + u8 enable_stream_detection : 1; diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c --- a/drivers/scsi/smartpqi/smartpqi_init.c +++ b/drivers/scsi/smartpqi/smartpqi_init.c -static int pqi_scsi_queue_command(struct scsi_host *shost, +static bool pqi_is_parity_write_stream(struct pqi_ctrl_info *ctrl_info, +{ + u32 oldest_jiffies; + u8 lru_index; + int i; + int rc; + struct pqi_scsi_dev *device; + struct pqi_stream_data *pqi_stream_data; + struct pqi_scsi_dev_raid_map_data rmd; + + if (!ctrl_info->enable_stream_detection) + return false; + + rc = pqi_get_aio_lba_and_block_count(scmd, &rmd); + if (rc) + return false; + + /* check writes only. */ + if (!rmd.is_write) + return false; + + device = scmd->device->hostdata; + + /* check for raid 5/6 streams. */ + if (device->raid_level != sa_raid_5 && device->raid_level != sa_raid_6) + return false; + + /* + * if controller does not support aio raid{5,6} writes, need to send + * requests down non-aio path. + */ + if ((device->raid_level == sa_raid_5 && !ctrl_info->enable_r5_writes) || + (device->raid_level == sa_raid_6 && !ctrl_info->enable_r6_writes)) + return true; + + lru_index = 0; + oldest_jiffies = int_max; + for (i = 0; i < num_streams_per_lun; i++) { + pqi_stream_data = &device->stream_data[i]; + /* + * check for adjacent request or request is within + * the previous request. 
+ */ + if ((pqi_stream_data->next_lba && + rmd.first_block >= pqi_stream_data->next_lba) && + rmd.first_block <= pqi_stream_data->next_lba + + rmd.block_cnt) { + pqi_stream_data->next_lba = rmd.first_block + + rmd.block_cnt; + pqi_stream_data->last_accessed = jiffies; + return true; + } + + /* unused entry */ + if (pqi_stream_data->last_accessed == 0) { + lru_index = i; + break; + } + + /* find entry with oldest last accessed time. */ + if (pqi_stream_data->last_accessed <= oldest_jiffies) { + oldest_jiffies = pqi_stream_data->last_accessed; + lru_index = i; + } + } + + /* set lru entry. */ + pqi_stream_data = &device->stream_data[lru_index]; + pqi_stream_data->last_accessed = jiffies; + pqi_stream_data->next_lba = rmd.first_block + rmd.block_cnt; + + return false; +} + +static int pqi_scsi_queue_command(struct scsi_host *shost, struct scsi_cmnd *scmd) - rc = pqi_raid_bypass_submit_scsi_cmd(ctrl_info, device, - scmd, queue_group); - if (rc == 0 || rc == scsi_mlqueue_host_busy) { - raid_bypassed = true; - atomic_inc(&device->raid_bypass_cnt); + if (!pqi_is_parity_write_stream(ctrl_info, scmd)) { + rc = pqi_raid_bypass_submit_scsi_cmd(ctrl_info, device, scmd, queue_group); + if (rc == 0 || rc == scsi_mlqueue_host_busy) { + raid_bypassed = true; + atomic_inc(&device->raid_bypass_cnt); + }
|
Storage
|
c7ffedb3a774a835450a518566639254534e72c4
|
don brace
|
drivers
|
scsi
|
smartpqi
|
scsi: smartpqi: add support for bmic sense feature cmd and feature bits
|
determine the set of supported features from the bmic sense feature command instead of the config table. enable features such as raid 1/5/6 write support, sata wwid, and encryption.
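the aio subpage returned by the sense feature command reports transfer limits in kib units, with 0 encoding "no limit". the decoding helper in the diff (pqi_aio_limit_to_bytes) can be sketched as a free-standing fragment; the simplified signature here takes the raw 16-bit value directly rather than an __le16 pointer.

```c
#include <assert.h>

/* decode a bmic sense feature aio limit: value is in kib, 0 means
 * unlimited (all-ones in bytes). mirrors pqi_aio_limit_to_bytes. */
static unsigned int aio_limit_to_bytes(unsigned short limit_kib)
{
    if (limit_kib == 0)
        return ~0u;                         /* 0 encodes "no limit" */
    return (unsigned int)limit_kib * 1024;  /* kib -> bytes */
}
```

the driver applies these decoded limits per drive-type mix, e.g. capping encrypted bypass transfers for nvme-only volumes separately from sas/sata ones.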
|
this release includes the landlock security module, which aims to make easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tbl flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for bmic sense feature cmd and feature bits
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['scsi ', 'smartpqi']
|
['h', 'c']
| 2
| 388
| 19
|
--- diff --git a/drivers/scsi/smartpqi/smartpqi.h b/drivers/scsi/smartpqi/smartpqi.h --- a/drivers/scsi/smartpqi/smartpqi.h +++ b/drivers/scsi/smartpqi/smartpqi.h +#define pqi_default_max_write_raid_5_6 (8 * 1024u) +#define pqi_default_max_transfer_encrypted_sas_sata (~0u) +#define pqi_default_max_transfer_encrypted_nvme (32 * 1024u) + +/* the 2 fields below are only valid if the max_known_feature bit is set. */ +/* __le16 firmware_max_known_feature; */ +/* __le16 host_max_known_feature; */ +#define pqi_firmware_feature_max_known_feature 2 +#define pqi_firmware_feature_raid_0_read_bypass 3 +#define pqi_firmware_feature_raid_1_read_bypass 4 +#define pqi_firmware_feature_raid_5_read_bypass 5 +#define pqi_firmware_feature_raid_6_read_bypass 6 +#define pqi_firmware_feature_raid_0_write_bypass 7 +#define pqi_firmware_feature_raid_1_write_bypass 8 +#define pqi_firmware_feature_raid_5_write_bypass 9 +#define pqi_firmware_feature_raid_6_write_bypass 10 +#define pqi_firmware_feature_unique_sata_wwn 12 +#define pqi_firmware_feature_raid_bypass_on_encrypted_nvme 15 +#define pqi_firmware_feature_maximum 15 + u32 max_transfer_encrypted; + u8 lv_drive_type_mix_valid : 1; + + u8 ciss_report_log_flags; + u32 max_transfer_encrypted_sas_sata; + u32 max_transfer_encrypted_nvme; + u32 max_write_raid_5_6; + u32 max_write_raid_1_10_2drive; + u32 max_write_raid_1_10_3drive; +#define bmic_sense_feature 0x61 +#define lv_get_drive_type_mix(lunid) ((lunid)[6]) + +#define lv_drive_type_mix_unknown 0 +#define lv_drive_type_mix_no_restriction 1 +#define lv_drive_type_mix_sas_hdd_only 2 +#define lv_drive_type_mix_sata_hdd_only 3 +#define lv_drive_type_mix_sas_or_sata_ssd_only 4 +#define lv_drive_type_mix_sas_ssd_only 5 +#define lv_drive_type_mix_sata_ssd_only 6 +#define lv_drive_type_mix_sas_only 7 +#define lv_drive_type_mix_sata_only 8 +#define lv_drive_type_mix_nvme_only 9 + +#define bmic_sense_feature_io_page 0x8 +#define bmic_sense_feature_io_page_aio_subpage 0x2 + +struct 
bmic_sense_feature_buffer_header { + u8 page_code; + u8 subpage_code; + __le16 buffer_length; +}; + +struct bmic_sense_feature_page_header { + u8 page_code; + u8 subpage_code; + __le16 page_length; +}; + +struct bmic_sense_feature_io_page_aio_subpage { + struct bmic_sense_feature_page_header header; + u8 firmware_read_support; + u8 driver_read_support; + u8 firmware_write_support; + u8 driver_write_support; + __le16 max_transfer_encrypted_sas_sata; + __le16 max_transfer_encrypted_nvme; + __le16 max_write_raid_5_6; + __le16 max_write_raid_1_10_2drive; + __le16 max_write_raid_1_10_3drive; +}; + diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c --- a/drivers/scsi/smartpqi/smartpqi_init.c +++ b/drivers/scsi/smartpqi/smartpqi_init.c - cdb[1] = ciss_report_log_flag_unique_lun_id; + cdb[1] = ctrl_info->ciss_report_log_flags; + case bmic_sense_feature: +static inline u32 pqi_aio_limit_to_bytes(__le16 *limit) +{ + u32 bytes; + + bytes = get_unaligned_le16(limit); + if (bytes == 0) + bytes = ~0; + else + bytes *= 1024; + + return bytes; +} + +#pragma pack(1) + +struct bmic_sense_feature_buffer { + struct bmic_sense_feature_buffer_header header; + struct bmic_sense_feature_io_page_aio_subpage aio_subpage; +}; + +#pragma pack() + +#define minimum_aio_subpage_buffer_length \ + offsetofend(struct bmic_sense_feature_buffer, \ + aio_subpage.max_write_raid_1_10_3drive) + +#define minimum_aio_subpage_length \ + (offsetofend(struct bmic_sense_feature_io_page_aio_subpage, \ + max_write_raid_1_10_3drive) - \ + sizeof_field(struct bmic_sense_feature_io_page_aio_subpage, header)) + +static int pqi_get_advanced_raid_bypass_config(struct pqi_ctrl_info *ctrl_info) +{ + int rc; + enum dma_data_direction dir; + struct pqi_raid_path_request request; + struct bmic_sense_feature_buffer *buffer; + + buffer = kmalloc(sizeof(*buffer), gfp_kernel); + if (!buffer) + return -enomem; + + rc = pqi_build_raid_path_request(ctrl_info, &request, + bmic_sense_feature, 
raid_ctlr_lunid, buffer, + sizeof(*buffer), 0, &dir); + if (rc) + goto error; + + request.cdb[2] = bmic_sense_feature_io_page; + request.cdb[3] = bmic_sense_feature_io_page_aio_subpage; + + rc = pqi_submit_raid_request_synchronous(ctrl_info, &request.header, + 0, null, no_timeout); + + pqi_pci_unmap(ctrl_info->pci_dev, request.sg_descriptors, 1, dir); + + if (rc) + goto error; + + if (buffer->header.page_code != bmic_sense_feature_io_page || + buffer->header.subpage_code != + bmic_sense_feature_io_page_aio_subpage || + get_unaligned_le16(&buffer->header.buffer_length) < + minimum_aio_subpage_buffer_length || + buffer->aio_subpage.header.page_code != + bmic_sense_feature_io_page || + buffer->aio_subpage.header.subpage_code != + bmic_sense_feature_io_page_aio_subpage || + get_unaligned_le16(&buffer->aio_subpage.header.page_length) < + minimum_aio_subpage_length) { + goto error; + } + + ctrl_info->max_transfer_encrypted_sas_sata = + pqi_aio_limit_to_bytes( + &buffer->aio_subpage.max_transfer_encrypted_sas_sata); + + ctrl_info->max_transfer_encrypted_nvme = + pqi_aio_limit_to_bytes( + &buffer->aio_subpage.max_transfer_encrypted_nvme); + + ctrl_info->max_write_raid_5_6 = + pqi_aio_limit_to_bytes( + &buffer->aio_subpage.max_write_raid_5_6); + + ctrl_info->max_write_raid_1_10_2drive = + pqi_aio_limit_to_bytes( + &buffer->aio_subpage.max_write_raid_1_10_2drive); + + ctrl_info->max_write_raid_1_10_3drive = + pqi_aio_limit_to_bytes( + &buffer->aio_subpage.max_write_raid_1_10_3drive); + +error: + kfree(buffer); + + return rc; +} + +static void pqi_set_max_transfer_encrypted(struct pqi_ctrl_info *ctrl_info, + struct pqi_scsi_dev *device) +{ + if (!ctrl_info->lv_drive_type_mix_valid) { + device->max_transfer_encrypted = ~0; + return; + } + + switch (lv_get_drive_type_mix(device->scsi3addr)) { + case lv_drive_type_mix_sas_hdd_only: + case lv_drive_type_mix_sata_hdd_only: + case lv_drive_type_mix_sas_or_sata_ssd_only: + case lv_drive_type_mix_sas_ssd_only: + case 
lv_drive_type_mix_sata_ssd_only: + case lv_drive_type_mix_sas_only: + case lv_drive_type_mix_sata_only: + device->max_transfer_encrypted = + ctrl_info->max_transfer_encrypted_sas_sata; + break; + case lv_drive_type_mix_nvme_only: + device->max_transfer_encrypted = + ctrl_info->max_transfer_encrypted_nvme; + break; + case lv_drive_type_mix_unknown: + case lv_drive_type_mix_no_restriction: + default: + device->max_transfer_encrypted = + min(ctrl_info->max_transfer_encrypted_sas_sata, + ctrl_info->max_transfer_encrypted_nvme); + break; + } +} + - pqi_get_raid_map(ctrl_info, device) == 0) + pqi_get_raid_map(ctrl_info, device) == 0) { + if (get_unaligned_le16(&device->raid_map->flags) & + raid_map_encryption_enabled) + pqi_set_max_transfer_encrypted(ctrl_info, device); + } + if (num_logicals && + (logdev_list->header.flags & ciss_report_log_flag_drive_type_mix)) + ctrl_info->lv_drive_type_mix_valid = true; + + if (rmd->is_write && (!ctrl_info->enable_r1_writes || + rmd->data_length > ctrl_info->max_write_raid_1_10_2drive)) + is_supported = false; + break; - if (rmd->is_write && !ctrl_info->enable_r1_writes) + if (rmd->is_write && (!ctrl_info->enable_r1_writes || + rmd->data_length > ctrl_info->max_write_raid_1_10_3drive)) - if (rmd->is_write && !ctrl_info->enable_r5_writes) + if (rmd->is_write && (!ctrl_info->enable_r5_writes || + rmd->data_length > ctrl_info->max_write_raid_5_6)) - if (rmd->is_write && !ctrl_info->enable_r6_writes) + if (rmd->is_write && (!ctrl_info->enable_r6_writes || + rmd->data_length > ctrl_info->max_write_raid_5_6)) + break; - raid_map_encryption_enabled) { + raid_map_encryption_enabled) { + if (rmd.data_length > device->max_transfer_encrypted) + return pqi_raid_bypass_ineligible; - case sa_raid_0: - return pqi_aio_submit_io(ctrl_info, scmd, rmd.aio_handle, - rmd.cdb, rmd.cdb_length, queue_group, - encryption_info_ptr, true); - default: - return pqi_aio_submit_io(ctrl_info, scmd, rmd.aio_handle, - rmd.cdb, rmd.cdb_length, queue_group, - 
encryption_info_ptr, true); - } else { - return pqi_aio_submit_io(ctrl_info, scmd, rmd.aio_handle, - rmd.cdb, rmd.cdb_length, queue_group, - encryption_info_ptr, true); + return pqi_aio_submit_io(ctrl_info, scmd, rmd.aio_handle, + rmd.cdb, rmd.cdb_length, queue_group, + encryption_info_ptr, true); + void __iomem *host_max_known_feature_iomem_addr; + if (pqi_is_firmware_feature_supported(firmware_features, + pqi_firmware_feature_max_known_feature)) { + host_max_known_feature_iomem_addr = + features_requested_iomem_addr + + (le16_to_cpu(firmware_features->num_elements) * 2) + + sizeof(__le16); + writew(pqi_firmware_feature_maximum, + host_max_known_feature_iomem_addr); + } + + case pqi_firmware_feature_raid_1_write_bypass: + ctrl_info->enable_r1_writes = firmware_feature->enabled; + break; + case pqi_firmware_feature_raid_5_write_bypass: + ctrl_info->enable_r5_writes = firmware_feature->enabled; + break; + case pqi_firmware_feature_raid_6_write_bypass: + ctrl_info->enable_r6_writes = firmware_feature->enabled; + break; + { + .feature_name = "maximum known feature", + .feature_bit = pqi_firmware_feature_max_known_feature, + .feature_status = pqi_firmware_feature_status, + }, + { + .feature_name = "raid 0 read bypass", + .feature_bit = pqi_firmware_feature_raid_0_read_bypass, + .feature_status = pqi_firmware_feature_status, + }, + { + .feature_name = "raid 1 read bypass", + .feature_bit = pqi_firmware_feature_raid_1_read_bypass, + .feature_status = pqi_firmware_feature_status, + }, + { + .feature_name = "raid 5 read bypass", + .feature_bit = pqi_firmware_feature_raid_5_read_bypass, + .feature_status = pqi_firmware_feature_status, + }, + { + .feature_name = "raid 6 read bypass", + .feature_bit = pqi_firmware_feature_raid_6_read_bypass, + .feature_status = pqi_firmware_feature_status, + }, + { + .feature_name = "raid 0 write bypass", + .feature_bit = pqi_firmware_feature_raid_0_write_bypass, + .feature_status = pqi_firmware_feature_status, + }, + { + .feature_name = 
"raid 1 write bypass", + .feature_bit = pqi_firmware_feature_raid_1_write_bypass, + .feature_status = pqi_ctrl_update_feature_flags, + }, + { + .feature_name = "raid 5 write bypass", + .feature_bit = pqi_firmware_feature_raid_5_write_bypass, + .feature_status = pqi_ctrl_update_feature_flags, + }, + { + .feature_name = "raid 6 write bypass", + .feature_bit = pqi_firmware_feature_raid_6_write_bypass, + .feature_status = pqi_ctrl_update_feature_flags, + }, + { + .feature_name = "raid bypass on encrypted logical volumes on nvme", + .feature_bit = pqi_firmware_feature_raid_bypass_on_encrypted_nvme, + .feature_status = pqi_firmware_feature_status, + }, +/* + * reset all controller settings that can be initialized during the processing + * of the pqi configuration table. + */ + + bool firmware_feature_section_present; + struct pqi_config_table_section_info feature_section_info; + firmware_feature_section_present = false; - pqi_process_firmware_features_section(§ion_info); + firmware_feature_section_present = true; + feature_section_info = section_info; + /* + * we process the firmware feature section after all other sections + * have been processed so that the feature bit callbacks can take + * into account the settings configured by other sections. + */ + if (firmware_feature_section_present) + pqi_process_firmware_features_section(&feature_section_info); + + if (ctrl_info->enable_r5_writes || ctrl_info->enable_r6_writes) { + rc = pqi_get_advanced_raid_bypass_config(ctrl_info); + if (rc) { /* supported features not returned correctly. 
*/ + dev_err(&ctrl_info->pci_dev->dev, + "error obtaining advanced raid bypass configuration "); + return rc; + } + ctrl_info->ciss_report_log_flags |= + ciss_report_log_flag_drive_type_mix; + } + + if (ctrl_info->enable_r5_writes || ctrl_info->enable_r6_writes) { + rc = pqi_get_advanced_raid_bypass_config(ctrl_info); + if (rc) { + dev_err(&ctrl_info->pci_dev->dev, + "error obtaining advanced raid bypass configuration "); + return rc; + } + ctrl_info->ciss_report_log_flags |= + ciss_report_log_flag_drive_type_mix; + } + + ctrl_info->ciss_report_log_flags = ciss_report_log_flag_unique_lun_id; + ctrl_info->max_transfer_encrypted_sas_sata = + pqi_default_max_transfer_encrypted_sas_sata; + ctrl_info->max_transfer_encrypted_nvme = + pqi_default_max_transfer_encrypted_nvme; + ctrl_info->max_write_raid_5_6 = pqi_default_max_write_raid_5_6; + ctrl_info->max_write_raid_1_10_2drive = ~0; + ctrl_info->max_write_raid_1_10_3drive = ~0; + + build_bug_on(sizeof(struct bmic_sense_feature_buffer_header) != 4); + build_bug_on(offsetof(struct bmic_sense_feature_buffer_header, + page_code) != 0); + build_bug_on(offsetof(struct bmic_sense_feature_buffer_header, + subpage_code) != 1); + build_bug_on(offsetof(struct bmic_sense_feature_buffer_header, + buffer_length) != 2); + + build_bug_on(sizeof(struct bmic_sense_feature_page_header) != 4); + build_bug_on(offsetof(struct bmic_sense_feature_page_header, + page_code) != 0); + build_bug_on(offsetof(struct bmic_sense_feature_page_header, + subpage_code) != 1); + build_bug_on(offsetof(struct bmic_sense_feature_page_header, + page_length) != 2); + + build_bug_on(sizeof(struct bmic_sense_feature_io_page_aio_subpage) + != 18); + build_bug_on(offsetof(struct bmic_sense_feature_io_page_aio_subpage, + header) != 0); + build_bug_on(offsetof(struct bmic_sense_feature_io_page_aio_subpage, + firmware_read_support) != 4); + build_bug_on(offsetof(struct bmic_sense_feature_io_page_aio_subpage, + driver_read_support) != 5); + build_bug_on(offsetof(struct 
bmic_sense_feature_io_page_aio_subpage, + firmware_write_support) != 6); + build_bug_on(offsetof(struct bmic_sense_feature_io_page_aio_subpage, + driver_write_support) != 7); + build_bug_on(offsetof(struct bmic_sense_feature_io_page_aio_subpage, + max_transfer_encrypted_sas_sata) != 8); + build_bug_on(offsetof(struct bmic_sense_feature_io_page_aio_subpage, + max_transfer_encrypted_nvme) != 10); + build_bug_on(offsetof(struct bmic_sense_feature_io_page_aio_subpage, + max_write_raid_5_6) != 12); + build_bug_on(offsetof(struct bmic_sense_feature_io_page_aio_subpage, + max_write_raid_1_10_2drive) != 14); + build_bug_on(offsetof(struct bmic_sense_feature_io_page_aio_subpage, + max_write_raid_1_10_3drive) != 16); +
|
Storage
|
f6cc2a774aa7f5469f381b52804bb244d4f8f4d7
|
kevin barnett scott teel scott teel microchip com mike mcgowen mike mcgowen microchip com scott benesh scott benesh microchip com
|
drivers
|
scsi
|
smartpqi
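the pqi_aio_limit_to_bytes() helper in the diff above encodes a small firmware convention: the sense-feature page reports raid-bypass transfer limits as 16-bit counts of 1 kib blocks, and zero means "no limit". a minimal standalone sketch of that conversion (the function name and plain-integer parameter are illustrative; the real driver reads an unaligned little-endian field with get_unaligned_le16()):

```c
#include <stdint.h>

/* sketch of the driver's limit conversion: limits arrive in 1 kib
 * units, with 0 encoding "unlimited" (the driver stores ~0). */
static uint32_t aio_limit_to_bytes(uint16_t limit_kib)
{
	if (limit_kib == 0)
		return UINT32_MAX;	/* 0 means no limit */
	return (uint32_t)limit_kib * 1024;
}
```

the driver then compares rmd->data_length against these byte limits when deciding whether a write is eligible for raid bypass.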
|
scsi: smartpqi: add support for raid1 writes
|
add raid1 write iu and implement raid1 write support. change brand names adm/adg to triple/raid-6.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi-shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for raid1 writes
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['scsi ', 'smartpqi']
|
['h', 'c']
| 2
| 183
| 82
|
--- diff --git a/drivers/scsi/smartpqi/smartpqi.h b/drivers/scsi/smartpqi/smartpqi.h --- a/drivers/scsi/smartpqi/smartpqi.h +++ b/drivers/scsi/smartpqi/smartpqi.h +#define pqi_raid1_nvme_xfer_limit (32 * 1024) /* 32 kib */ +struct pqi_aio_r1_path_request { + struct pqi_iu_header header; + __le16 request_id; + __le16 volume_id; /* id of the raid volume */ + __le32 it_nexus_1; /* it nexus of the 1st drive in the raid volume */ + __le32 it_nexus_2; /* it nexus of the 2nd drive in the raid volume */ + __le32 it_nexus_3; /* it nexus of the 3rd drive in the raid volume */ + __le32 data_length; /* total bytes to read/write */ + u8 data_direction : 2; + u8 partial : 1; + u8 memory_type : 1; + u8 fence : 1; + u8 encryption_enable : 1; + u8 reserved : 2; + u8 task_attribute : 3; + u8 command_priority : 4; + u8 reserved2 : 1; + __le16 data_encryption_key_index; + u8 cdb[16]; + __le16 error_index; + u8 num_sg_descriptors; + u8 cdb_length; + u8 num_drives; /* number of drives in the raid volume (2 or 3) */ + u8 reserved3[3]; + __le32 encrypt_tweak_lower; + __le32 encrypt_tweak_upper; + struct pqi_sg_descriptor sg_descriptors[pqi_max_embedded_sg_descriptors]; +}; + +#define pqi_request_iu_aio_path_raid1_io 0x1a - u32 current_group; - int offload_to_mirror; - int offload_to_mirror; /* send next raid bypass request */ - /* to mirror drive. 
*/ + u32 next_bypass_group; + u8 enable_r1_writes : 1; diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c --- a/drivers/scsi/smartpqi/smartpqi_init.c +++ b/drivers/scsi/smartpqi/smartpqi_init.c +static int pqi_aio_submit_r1_write_io(struct pqi_ctrl_info *ctrl_info, + struct scsi_cmnd *scmd, struct pqi_queue_group *queue_group, + struct pqi_encryption_info *encryption_info, struct pqi_scsi_dev *device, + struct pqi_scsi_dev_raid_map_data *rmd); - "raid-adg", - "raid-1(adm)", + "raid-6", + "raid-1(triple)", -#define sa_raid_adm 6 /* also used for raid 1+0 adm */ -#define sa_raid_max sa_raid_adm +#define sa_raid_triple 6 /* also used for raid 1+0 triple */ +#define sa_raid_max sa_raid_triple - } else if (device->raid_level == sa_raid_adm) { + } else if (device->raid_level == sa_raid_triple) { - err_msg = "invalid raid-1(adm) map"; + err_msg = "invalid raid-1(triple) map"; - existing_device->offload_to_mirror = 0; + existing_device->next_bypass_group = 0; - if (rmd->is_write) + case sa_raid_triple: + if (rmd->is_write && !ctrl_info->enable_r1_writes) - case sa_raid_adm: - if (rmd->is_write) - is_supported = false; - break; -static int pqi_calc_aio_raid_adm(struct pqi_scsi_dev_raid_map_data *rmd, - struct pqi_scsi_dev *device) -{ - /* raid adm */ - /* - * handles n-way mirrors (r1-adm) and r10 with # of drives - * divisible by 3. - */ - rmd->offload_to_mirror = device->offload_to_mirror; - - if (rmd->offload_to_mirror == 0) { - /* use physical disk in the first mirrored group. */ - rmd->map_index %= rmd->data_disks_per_row; - } else { - do { - /* - * determine mirror group that map_index - * indicates. - */ - rmd->current_group = - rmd->map_index / rmd->data_disks_per_row; - - if (rmd->offload_to_mirror != - rmd->current_group) { - if (rmd->current_group < - rmd->layout_map_count - 1) { - /* - * select raid index from - * next group. 
- */ - rmd->map_index += rmd->data_disks_per_row; - rmd->current_group++; - } else { - /* - * select raid index from first - * group. - */ - rmd->map_index %= rmd->data_disks_per_row; - rmd->current_group = 0; - } - } - } while (rmd->offload_to_mirror != rmd->current_group); - } - - /* set mirror group to use next time. */ - rmd->offload_to_mirror = - (rmd->offload_to_mirror >= rmd->layout_map_count - 1) ? - 0 : rmd->offload_to_mirror + 1; - device->offload_to_mirror = rmd->offload_to_mirror; - /* - * avoid direct use of device->offload_to_mirror within this - * function since multiple threads might simultaneously - * increment it beyond the range of device->layout_map_count -1. - */ - - return 0; -} - +static void pqi_calc_aio_r1_nexus(struct raid_map *raid_map, + struct pqi_scsi_dev_raid_map_data *rmd) +{ + u32 index; + u32 group; + + group = rmd->map_index / rmd->data_disks_per_row; + + index = rmd->map_index - (group * rmd->data_disks_per_row); + rmd->it_nexus[0] = raid_map->disk_data[index].aio_handle; + index += rmd->data_disks_per_row; + rmd->it_nexus[1] = raid_map->disk_data[index].aio_handle; + if (rmd->layout_map_count > 2) { + index += rmd->data_disks_per_row; + rmd->it_nexus[2] = raid_map->disk_data[index].aio_handle; + } + + rmd->num_it_nexus_entries = rmd->layout_map_count; +} + - struct raid_map *raid_map; + struct raid_map *raid_map; + u32 group; + u32 next_bypass_group; - /* raid 1 */ - if (device->raid_level == sa_raid_1) { - if (device->offload_to_mirror) - rmd.map_index += rmd.data_disks_per_row; - device->offload_to_mirror = !device->offload_to_mirror; - } else if (device->raid_level == sa_raid_adm) { - rc = pqi_calc_aio_raid_adm(&rmd, device); + if (device->raid_level == sa_raid_1 || + device->raid_level == sa_raid_triple) { + if (rmd.is_write) { + pqi_calc_aio_r1_nexus(raid_map, &rmd); + } else { + group = device->next_bypass_group; + next_bypass_group = group + 1; + if (next_bypass_group >= rmd.layout_map_count) + next_bypass_group = 0; + 
device->next_bypass_group = next_bypass_group; + rmd.map_index += group * rmd.data_disks_per_row; + } + case sa_raid_1: + case sa_raid_triple: + return pqi_aio_submit_r1_write_io(ctrl_info, scmd, queue_group, + encryption_info_ptr, device, &rmd); +static int pqi_build_aio_r1_sg_list(struct pqi_ctrl_info *ctrl_info, + struct pqi_aio_r1_path_request *request, struct scsi_cmnd *scmd, + struct pqi_io_request *io_request) +{ + u16 iu_length; + int sg_count; + bool chained; + unsigned int num_sg_in_iu; + struct scatterlist *sg; + struct pqi_sg_descriptor *sg_descriptor; + + sg_count = scsi_dma_map(scmd); + if (sg_count < 0) + return sg_count; + + iu_length = offsetof(struct pqi_aio_r1_path_request, sg_descriptors) - + pqi_request_header_length; + num_sg_in_iu = 0; + + if (sg_count == 0) + goto out; + + sg = scsi_sglist(scmd); + sg_descriptor = request->sg_descriptors; + + num_sg_in_iu = pqi_build_sg_list(sg_descriptor, sg, sg_count, io_request, + ctrl_info->max_sg_per_iu, &chained); + + request->partial = chained; + iu_length += num_sg_in_iu * sizeof(*sg_descriptor); + +out: + put_unaligned_le16(iu_length, &request->header.iu_length); + request->num_sg_descriptors = num_sg_in_iu; + + return 0; +} + +static int pqi_aio_submit_r1_write_io(struct pqi_ctrl_info *ctrl_info, + struct scsi_cmnd *scmd, struct pqi_queue_group *queue_group, + struct pqi_encryption_info *encryption_info, struct pqi_scsi_dev *device, + struct pqi_scsi_dev_raid_map_data *rmd) + +{ + int rc; + struct pqi_io_request *io_request; + struct pqi_aio_r1_path_request *r1_request; + + io_request = pqi_alloc_io_request(ctrl_info); + io_request->io_complete_callback = pqi_aio_io_complete; + io_request->scmd = scmd; + io_request->raid_bypass = true; + + r1_request = io_request->iu; + memset(r1_request, 0, offsetof(struct pqi_aio_r1_path_request, sg_descriptors)); + + r1_request->header.iu_type = pqi_request_iu_aio_path_raid1_io; + + put_unaligned_le16(*(u16 *)device->scsi3addr & 0x3fff, &r1_request->volume_id); 
+ r1_request->num_drives = rmd->num_it_nexus_entries; + put_unaligned_le32(rmd->it_nexus[0], &r1_request->it_nexus_1); + put_unaligned_le32(rmd->it_nexus[1], &r1_request->it_nexus_2); + if (rmd->num_it_nexus_entries == 3) + put_unaligned_le32(rmd->it_nexus[2], &r1_request->it_nexus_3); + + put_unaligned_le32(scsi_bufflen(scmd), &r1_request->data_length); + r1_request->task_attribute = sop_task_attribute_simple; + put_unaligned_le16(io_request->index, &r1_request->request_id); + r1_request->error_index = r1_request->request_id; + if (rmd->cdb_length > sizeof(r1_request->cdb)) + rmd->cdb_length = sizeof(r1_request->cdb); + r1_request->cdb_length = rmd->cdb_length; + memcpy(r1_request->cdb, rmd->cdb, rmd->cdb_length); + + /* the direction is always write. */ + r1_request->data_direction = sop_read_flag; + + if (encryption_info) { + r1_request->encryption_enable = true; + put_unaligned_le16(encryption_info->data_encryption_key_index, + &r1_request->data_encryption_key_index); + put_unaligned_le32(encryption_info->encrypt_tweak_lower, + &r1_request->encrypt_tweak_lower); + put_unaligned_le32(encryption_info->encrypt_tweak_upper, + &r1_request->encrypt_tweak_upper); + } + + rc = pqi_build_aio_r1_sg_list(ctrl_info, r1_request, scmd, io_request); + if (rc) { + pqi_free_io_request(io_request); + return scsi_mlqueue_host_busy; + } + + pqi_start_io(ctrl_info, queue_group, aio_path, io_request); + + return 0; +} +
|
Storage
|
7a012c23c7a7d9cdc7b6db0e8837f8a413dbe436
|
don brace scott benesh scott benesh microchip com scott teel scott teel microchip com kevin barnett kevin barnett microchip com
|
drivers
|
scsi
|
smartpqi
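the raid1 write commit above also replaces the old offload_to_mirror bookkeeping with a simpler per-device next_bypass_group counter that round-robins read bypass requests across the mirror groups of the raid map. a minimal sketch of that selection logic (function and parameter names are illustrative, not the driver's):

```c
#include <stdint.h>

/* pick the disk_data[] index for the next raid-1 read bypass and
 * advance the round-robin group counter, mirroring the
 * next_bypass_group logic added in the commit above. */
static uint32_t pick_bypass_map_index(uint32_t map_index,
				      uint32_t data_disks_per_row,
				      uint32_t layout_map_count,
				      uint32_t *next_bypass_group)
{
	uint32_t group = *next_bypass_group;
	uint32_t next = group + 1;

	if (next >= layout_map_count)
		next = 0;
	*next_bypass_group = next;

	return map_index + group * data_disks_per_row;
}
```

with a 3-way mirror (layout_map_count == 3) successive reads rotate through all three copies, which is why the commit can delete the more convoluted pqi_calc_aio_raid_adm() loop.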
|
scsi: smartpqi: add support for raid5 and raid6 writes
|
add a new iu definition and implement support for raid5 and raid6 writes.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi-shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for raid5 and raid6 writes
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['scsi ', 'smartpqi']
|
['h', 'c']
| 2
| 268
| 6
|
--- diff --git a/drivers/scsi/smartpqi/smartpqi.h b/drivers/scsi/smartpqi/smartpqi.h --- a/drivers/scsi/smartpqi/smartpqi.h +++ b/drivers/scsi/smartpqi/smartpqi.h +#define pqi_max_embedded_r56_sg_descriptors 3 +struct pqi_aio_r56_path_request { + struct pqi_iu_header header; + __le16 request_id; + __le16 volume_id; /* id of the raid volume */ + __le32 data_it_nexus; /* it nexus for the data drive */ + __le32 p_parity_it_nexus; /* it nexus for the p parity drive */ + __le32 q_parity_it_nexus; /* it nexus for the q parity drive */ + __le32 data_length; /* total bytes to read/write */ + u8 data_direction : 2; + u8 partial : 1; + u8 mem_type : 1; /* 0 = pcie, 1 = ddr */ + u8 fence : 1; + u8 encryption_enable : 1; + u8 reserved : 2; + u8 task_attribute : 3; + u8 command_priority : 4; + u8 reserved1 : 1; + __le16 data_encryption_key_index; + u8 cdb[16]; + __le16 error_index; + u8 num_sg_descriptors; + u8 cdb_length; + u8 xor_multiplier; + u8 reserved2[3]; + __le32 encrypt_tweak_lower; + __le32 encrypt_tweak_upper; + __le64 row; /* row = logical lba/blocks per row */ + u8 reserved3[8]; + struct pqi_sg_descriptor sg_descriptors[pqi_max_embedded_r56_sg_descriptors]; +}; + +#define pqi_request_iu_aio_path_raid5_io 0x18 +#define pqi_request_iu_aio_path_raid6_io 0x19 + unsigned int max_sg_per_r56_iu; + u8 enable_r5_writes : 1; + u8 enable_r6_writes : 1; diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c --- a/drivers/scsi/smartpqi/smartpqi_init.c +++ b/drivers/scsi/smartpqi/smartpqi_init.c +static int pqi_aio_submit_r56_write_io(struct pqi_ctrl_info *ctrl_info, + struct scsi_cmnd *scmd, struct pqi_queue_group *queue_group, + struct pqi_encryption_info *encryption_info, struct pqi_scsi_dev *device, + struct pqi_scsi_dev_raid_map_data *rmd); -static bool pqi_aio_raid_level_supported(struct pqi_scsi_dev_raid_map_data *rmd) +static bool pqi_aio_raid_level_supported(struct pqi_ctrl_info *ctrl_info, + struct pqi_scsi_dev_raid_map_data *rmd) - 
fallthrough; + if (rmd->is_write && !ctrl_info->enable_r5_writes) + is_supported = false; + break; - if (rmd->is_write) + if (rmd->is_write && !ctrl_info->enable_r6_writes) + if (rmd->is_write) { + u32 index; + + /* + * p_parity_it_nexus and q_parity_it_nexus are pointers to the + * parity entries inside the device's raid_map. + * + * a device's raid map is bounded by: number of raid disks squared. + * + * the devices raid map size is checked during device + * initialization. + */ + index = div_round_up(rmd->map_index + 1, rmd->total_disks_per_row); + index *= rmd->total_disks_per_row; + index -= get_unaligned_le16(&raid_map->metadata_disks_per_row); + + rmd->p_parity_it_nexus = raid_map->disk_data[index].aio_handle; + if (rmd->raid_level == sa_raid_6) { + rmd->q_parity_it_nexus = raid_map->disk_data[index + 1].aio_handle; + rmd->xor_mult = raid_map->disk_data[rmd->map_index].xor_mult[1]; + } + if (rmd->blocks_per_row == 0) + return pqi_raid_bypass_ineligible; +#if bits_per_long == 32 + tmpdiv = rmd->first_block; + do_div(tmpdiv, rmd->blocks_per_row); + rmd->row = tmpdiv; +#else + rmd->row = rmd->first_block / rmd->blocks_per_row; +#endif + } + - if (!pqi_aio_raid_level_supported(&rmd)) + if (!pqi_aio_raid_level_supported(ctrl_info, &rmd)) - device->raid_level == sa_raid_6) && rmd.layout_map_count > 1) { + device->raid_level == sa_raid_6) && + (rmd.layout_map_count > 1 || rmd.is_write)) { - return pqi_aio_submit_io(ctrl_info, scmd, rmd.aio_handle, + if (rmd.is_write) { + switch (device->raid_level) { + case sa_raid_0: + return pqi_aio_submit_io(ctrl_info, scmd, rmd.aio_handle, + case sa_raid_5: + case sa_raid_6: + return pqi_aio_submit_r56_write_io(ctrl_info, scmd, queue_group, + encryption_info_ptr, device, &rmd); + default: + return pqi_aio_submit_io(ctrl_info, scmd, rmd.aio_handle, + rmd.cdb, rmd.cdb_length, queue_group, + encryption_info_ptr, true); + } + } else { + return pqi_aio_submit_io(ctrl_info, scmd, rmd.aio_handle, + rmd.cdb, rmd.cdb_length, 
queue_group, + encryption_info_ptr, true); + } + + + ctrl_info->max_sg_per_r56_iu = + ((ctrl_info->max_inbound_iu_length - + pqi_operational_iq_element_length) / + sizeof(struct pqi_sg_descriptor)) + + pqi_max_embedded_r56_sg_descriptors; +static int pqi_build_aio_r56_sg_list(struct pqi_ctrl_info *ctrl_info, + struct pqi_aio_r56_path_request *request, struct scsi_cmnd *scmd, + struct pqi_io_request *io_request) +{ + u16 iu_length; + int sg_count; + bool chained; + unsigned int num_sg_in_iu; + struct scatterlist *sg; + struct pqi_sg_descriptor *sg_descriptor; + + sg_count = scsi_dma_map(scmd); + if (sg_count < 0) + return sg_count; + + iu_length = offsetof(struct pqi_aio_r56_path_request, sg_descriptors) - + pqi_request_header_length; + num_sg_in_iu = 0; + + if (sg_count != 0) { + sg = scsi_sglist(scmd); + sg_descriptor = request->sg_descriptors; + + num_sg_in_iu = pqi_build_sg_list(sg_descriptor, sg, sg_count, io_request, + ctrl_info->max_sg_per_r56_iu, &chained); + + request->partial = chained; + iu_length += num_sg_in_iu * sizeof(*sg_descriptor); + } + + put_unaligned_le16(iu_length, &request->header.iu_length); + request->num_sg_descriptors = num_sg_in_iu; + + return 0; +} + +static int pqi_aio_submit_r56_write_io(struct pqi_ctrl_info *ctrl_info, + struct scsi_cmnd *scmd, struct pqi_queue_group *queue_group, + struct pqi_encryption_info *encryption_info, struct pqi_scsi_dev *device, + struct pqi_scsi_dev_raid_map_data *rmd) +{ + int rc; + struct pqi_io_request *io_request; + struct pqi_aio_r56_path_request *r56_request; + + io_request = pqi_alloc_io_request(ctrl_info); + io_request->io_complete_callback = pqi_aio_io_complete; + io_request->scmd = scmd; + io_request->raid_bypass = true; + + r56_request = io_request->iu; + memset(r56_request, 0, offsetof(struct pqi_aio_r56_path_request, sg_descriptors)); + + if (device->raid_level == sa_raid_5 || device->raid_level == sa_raid_51) + r56_request->header.iu_type = pqi_request_iu_aio_path_raid5_io; + else + 
r56_request->header.iu_type = pqi_request_iu_aio_path_raid6_io; + + put_unaligned_le16(*(u16 *)device->scsi3addr & 0x3fff, &r56_request->volume_id); + put_unaligned_le32(rmd->aio_handle, &r56_request->data_it_nexus); + put_unaligned_le32(rmd->p_parity_it_nexus, &r56_request->p_parity_it_nexus); + if (rmd->raid_level == sa_raid_6) { + put_unaligned_le32(rmd->q_parity_it_nexus, &r56_request->q_parity_it_nexus); + r56_request->xor_multiplier = rmd->xor_mult; + } + put_unaligned_le32(scsi_bufflen(scmd), &r56_request->data_length); + r56_request->task_attribute = sop_task_attribute_simple; + put_unaligned_le64(rmd->row, &r56_request->row); + + put_unaligned_le16(io_request->index, &r56_request->request_id); + r56_request->error_index = r56_request->request_id; + + if (rmd->cdb_length > sizeof(r56_request->cdb)) + rmd->cdb_length = sizeof(r56_request->cdb); + r56_request->cdb_length = rmd->cdb_length; + memcpy(r56_request->cdb, rmd->cdb, rmd->cdb_length); + + /* the direction is always write. 
*/ + r56_request->data_direction = sop_read_flag; + + if (encryption_info) { + r56_request->encryption_enable = true; + put_unaligned_le16(encryption_info->data_encryption_key_index, + &r56_request->data_encryption_key_index); + put_unaligned_le32(encryption_info->encrypt_tweak_lower, + &r56_request->encrypt_tweak_lower); + put_unaligned_le32(encryption_info->encrypt_tweak_upper, + &r56_request->encrypt_tweak_upper); + } + + rc = pqi_build_aio_r56_sg_list(ctrl_info, r56_request, scmd, io_request); + if (rc) { + pqi_free_io_request(io_request); + return scsi_mlqueue_host_busy; + } + + pqi_start_io(ctrl_info, queue_group, aio_path, io_request); + + return 0; +} + +static ssize_t pqi_host_enable_r5_writes_show(struct device *dev, + struct device_attribute *attr, char *buffer) +{ + struct scsi_host *shost = class_to_shost(dev); + struct pqi_ctrl_info *ctrl_info = shost_to_hba(shost); + + return scnprintf(buffer, 10, "%x ", ctrl_info->enable_r5_writes); +} + +static ssize_t pqi_host_enable_r5_writes_store(struct device *dev, + struct device_attribute *attr, const char *buffer, size_t count) +{ + struct scsi_host *shost = class_to_shost(dev); + struct pqi_ctrl_info *ctrl_info = shost_to_hba(shost); + u8 set_r5_writes = 0; + + if (kstrtou8(buffer, 0, &set_r5_writes)) + return -einval; + + if (set_r5_writes > 0) + set_r5_writes = 1; + + ctrl_info->enable_r5_writes = set_r5_writes; + + return count; +} + +static ssize_t pqi_host_enable_r6_writes_show(struct device *dev, + struct device_attribute *attr, char *buffer) +{ + struct scsi_host *shost = class_to_shost(dev); + struct pqi_ctrl_info *ctrl_info = shost_to_hba(shost); + + return scnprintf(buffer, 10, "%x ", ctrl_info->enable_r6_writes); +} + +static ssize_t pqi_host_enable_r6_writes_store(struct device *dev, + struct device_attribute *attr, const char *buffer, size_t count) +{ + struct scsi_host *shost = class_to_shost(dev); + struct pqi_ctrl_info *ctrl_info = shost_to_hba(shost); + u8 set_r6_writes = 0; + + if 
(kstrtou8(buffer, 0, &set_r6_writes)) + return -einval; + + if (set_r6_writes > 0) + set_r6_writes = 1; + + ctrl_info->enable_r6_writes = set_r6_writes; + + return count; +} + +static device_attr(enable_r5_writes, 0644, + pqi_host_enable_r5_writes_show, pqi_host_enable_r5_writes_store); +static device_attr(enable_r6_writes, 0644, + pqi_host_enable_r6_writes_show, pqi_host_enable_r6_writes_store); + &dev_attr_enable_r5_writes, + &dev_attr_enable_r6_writes,
|
Storage
|
6702d2c40f31b200d90614d1b0a841f14ba22ee0
|
don brace scott benesh scott benesh microchip com mike mcgowen mike mcgowen microchip com scott teel scott teel microchip com kevin barnett kevin barnett microchip com
|
drivers
|
scsi
|
smartpqi
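the raid5/6 write path in the commit above locates the p (and q) parity entries by rounding map_index up to the end of its row in the flattened disk_data[] array, where the metadata (parity) disks sit at the end of each row. a standalone sketch of that index arithmetic, with div_round_up expanded inline (names are illustrative):

```c
#include <stdint.h>

/* index of the p-parity entry for the row containing map_index; when
 * q parity is present (raid-6) it sits at the next index, as in the
 * raid_map->disk_data[index + 1] access in the diff above. */
static uint32_t p_parity_index(uint32_t map_index,
			       uint32_t total_disks_per_row,
			       uint32_t metadata_disks_per_row)
{
	/* DIV_ROUND_UP(map_index + 1, total_disks_per_row) */
	uint32_t row = (map_index + total_disks_per_row) / total_disks_per_row;

	return row * total_disks_per_row - metadata_disks_per_row;
}
```

for a 3+1 raid-5 layout (total_disks_per_row == 4, one metadata disk), data indices 0..2 all map to parity index 3, and the next row's data indices 4..6 map to 7, matching the bound the commit comment notes is checked at device initialization.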
|
scsi: smartpqi: add support for long firmware version
|
add support for new "long" firmware version which requires minor driver changes to expose.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi-shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for long firmware version
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['scsi ', 'smartpqi']
|
['h', 'c']
| 2
| 39
| 12
|
--- diff --git a/drivers/scsi/smartpqi/smartpqi.h b/drivers/scsi/smartpqi/smartpqi.h --- a/drivers/scsi/smartpqi/smartpqi.h +++ b/drivers/scsi/smartpqi/smartpqi.h - char firmware_version[11]; + char firmware_version[32]; - u8 firmware_version[4]; + u8 firmware_version_short[4]; - u8 reserved3[68]; + u8 reserved3[62]; + __le32 extra_controller_flags; + u8 reserved4[2]; - u8 reserved4[32]; + u8 spare_part_number[32]; + u8 firmware_version_long[32]; +/* constants for extra_controller_flags field of bmic_identify_controller */ +#define bmic_identify_extra_flags_long_fw_version_supported 0x20000000 + diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c --- a/drivers/scsi/smartpqi/smartpqi_init.c +++ b/drivers/scsi/smartpqi/smartpqi_init.c - memcpy(ctrl_info->firmware_version, identify->firmware_version, - sizeof(identify->firmware_version)); - ctrl_info->firmware_version[sizeof(identify->firmware_version)] = ''; - snprintf(ctrl_info->firmware_version + - strlen(ctrl_info->firmware_version), - sizeof(ctrl_info->firmware_version), - "-%u", get_unaligned_le16(&identify->firmware_build_number)); + if (get_unaligned_le32(&identify->extra_controller_flags) & + bmic_identify_extra_flags_long_fw_version_supported) { + memcpy(ctrl_info->firmware_version, + identify->firmware_version_long, + sizeof(identify->firmware_version_long)); + } else { + memcpy(ctrl_info->firmware_version, + identify->firmware_version_short, + sizeof(identify->firmware_version_short)); + ctrl_info->firmware_version + [sizeof(identify->firmware_version_short)] = ''; + snprintf(ctrl_info->firmware_version + + strlen(ctrl_info->firmware_version), + sizeof(ctrl_info->firmware_version) - + sizeof(identify->firmware_version_short), + "-%u", + get_unaligned_le16(&identify->firmware_build_number)); + } - firmware_version) != 5); + firmware_version_short) != 5); + build_bug_on(offsetof(struct bmic_identify_controller, + vendor_id) != 200); + build_bug_on(offsetof(struct 
bmic_identify_controller, + product_id) != 208); + build_bug_on(offsetof(struct bmic_identify_controller, + extra_controller_flags) != 286); + build_bug_on(offsetof(struct bmic_identify_controller, + spare_part_number) != 293); + build_bug_on(offsetof(struct bmic_identify_controller, + firmware_version_long) != 325);
|
Storage
|
598bef8d79421117b49642ef2b7cb65a73e186c1
|
kevin barnett scott benesh scott benesh microchip com mike mcgowen mike mcgowen microchip com scott teel scott teel microchip com
|
drivers
|
scsi
|
smartpqi
|
scsi: smartpqi: add support for new product ids
|
add support for newer hardware by reading a new product identifier register. this identifier can then be used to check the hardware generation.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang's control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for new product ids
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['scsi ', 'smartpqi']
|
['h', 'c']
| 4
| 23
| 1
|
--- diff --git a/drivers/scsi/smartpqi/smartpqi.h b/drivers/scsi/smartpqi/smartpqi.h --- a/drivers/scsi/smartpqi/smartpqi.h +++ b/drivers/scsi/smartpqi/smartpqi.h - u8 reserved5[0xbc - (0xb0 + sizeof(__le32))]; + __le32 sis_product_identifier; /* b4h */ + u8 reserved5[0xbc - (0xb4 + sizeof(__le32))]; + +#define pqi_ctrl_product_id_gen1 0 +#define pqi_ctrl_product_id_gen2 7 +#define pqi_ctrl_product_revision_a 0 +#define pqi_ctrl_product_revision_b 1 + + u8 product_id; + u8 product_revision; diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c --- a/drivers/scsi/smartpqi/smartpqi_init.c +++ b/drivers/scsi/smartpqi/smartpqi_init.c + u32 product_id; + product_id = sis_get_product_id(ctrl_info); + ctrl_info->product_id = (u8)product_id; + ctrl_info->product_revision = (u8)(product_id >> 8); + + build_bug_on(offsetof(struct pqi_ctrl_registers, + sis_product_identifier) != 0xb4); diff --git a/drivers/scsi/smartpqi/smartpqi_sis.c b/drivers/scsi/smartpqi/smartpqi_sis.c --- a/drivers/scsi/smartpqi/smartpqi_sis.c +++ b/drivers/scsi/smartpqi/smartpqi_sis.c +u32 sis_get_product_id(struct pqi_ctrl_info *ctrl_info) +{ + return readl(&ctrl_info->registers->sis_product_identifier); +} + diff --git a/drivers/scsi/smartpqi/smartpqi_sis.h b/drivers/scsi/smartpqi/smartpqi_sis.h --- a/drivers/scsi/smartpqi/smartpqi_sis.h +++ b/drivers/scsi/smartpqi/smartpqi_sis.h +u32 sis_get_product_id(struct pqi_ctrl_info *ctrl_info);
|
Storage
|
2708a25643abaf24b7edb553afd09a1eb5d4081f
|
kevin barnett scott benesh scott benesh microchip com mike mcgowen mike mcgowen microchip com scott teel scott teel microchip com martin wilck mwilck suse com
|
drivers
|
scsi
|
smartpqi
|
scsi: smartpqi: add support for wwid
|
a wwid field has been added to the report physical luns response in newer controller firmware. the presence of this field is indicated by a feature bit. add detection of this new feature and store the wwid when it is reported.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang's control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for wwid
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['scsi ', 'smartpqi']
|
['h', 'c']
| 2
| 27
| 2
|
--- diff --git a/drivers/scsi/smartpqi/smartpqi.h b/drivers/scsi/smartpqi/smartpqi.h --- a/drivers/scsi/smartpqi/smartpqi.h +++ b/drivers/scsi/smartpqi/smartpqi.h -#define pqi_firmware_feature_maximum 15 +#define pqi_firmware_feature_unique_wwid_in_report_phys_lun 16 +#define pqi_firmware_feature_maximum 16 + u8 page_83_identifier[16]; + u8 unique_wwid_in_report_phys_lun_supported : 1; diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c --- a/drivers/scsi/smartpqi/smartpqi_init.c +++ b/drivers/scsi/smartpqi/smartpqi_init.c + memcpy(&device->page_83_identifier, &id_phys->page_83_identifier, + sizeof(device->page_83_identifier)); + +static inline void pqi_set_physical_device_wwid(struct pqi_ctrl_info *ctrl_info, + struct pqi_scsi_dev *device, struct report_phys_lun_extended_entry *phys_lun_ext_entry) +{ + if (ctrl_info->unique_wwid_in_report_phys_lun_supported || + pqi_is_device_with_sas_address(device)) + device->wwid = phys_lun_ext_entry->wwid; + else + device->wwid = cpu_to_be64(get_unaligned_be64(&device->page_83_identifier)); +} + - device->wwid = phys_lun_ext_entry->wwid; + pqi_set_physical_device_wwid(ctrl_info, device, phys_lun_ext_entry); + case pqi_firmware_feature_unique_wwid_in_report_phys_lun: + ctrl_info->unique_wwid_in_report_phys_lun_supported = + firmware_feature->enabled; + break; + { + .feature_name = "unique wwid in report physical lun", + .feature_bit = pqi_firmware_feature_unique_wwid_in_report_phys_lun, + .feature_status = pqi_ctrl_update_feature_flags, + },
|
Storage
|
7a84a821f194bb1e509219c80efcbff2b4d47e45
|
kevin barnett scott benesh scott benesh microchip com mike mcgowen mike mcgowen microchip com scott teel scott teel microchip com
|
drivers
|
scsi
|
smartpqi
|
scsi: storvsc: parameterize number hardware queues
|
add the ability to set the number of hardware queues with a new module parameter, storvsc_max_hw_queues. the default value remains the number of cpus. this functionality is useful in some environments (e.g. microsoft azure) where decreasing the number of hardware queues has been shown to improve performance.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang's control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
parameterize number hardware queues
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['scsi ', 'storvsc']
|
['c']
| 1
| 16
| 2
|
--- diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c --- a/drivers/scsi/storvsc_drv.c +++ b/drivers/scsi/storvsc_drv.c +static unsigned int storvsc_max_hw_queues; +module_param(storvsc_max_hw_queues, uint, 0644); +module_parm_desc(storvsc_max_hw_queues, "maximum number of hardware queues"); + + int num_present_cpus = num_present_cpus(); - if (!dev_is_ide) - host->nr_hw_queues = num_present_cpus(); + if (!dev_is_ide) { + if (storvsc_max_hw_queues > num_present_cpus) { + storvsc_max_hw_queues = 0; + storvsc_log(device, storvsc_logging_warn, + "resetting invalid storvsc_max_hw_queues value to default. "); + } + if (storvsc_max_hw_queues) + host->nr_hw_queues = storvsc_max_hw_queues; + else + host->nr_hw_queues = num_present_cpus; + }
|
Storage
|
a81a38cc6ddaf128c7ca9e3fffff21c243f33c97
|
melanie plageman microsoft michael kelley mikelley microsoft com
|
drivers
|
scsi
| |
scsi: target: tcmu: support data_block_size = n * page_size
|
change tcmu to support data_block_size being a multiple of page_size. there are two reasons why one would like to have a bigger data_block_size:
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang's control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
support data_block_size = n * page_size
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['scsi ', 'target', 'tcmu']
|
['c']
| 1
| 116
| 89
|
--- diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c --- a/drivers/target/target_core_user.c +++ b/drivers/target/target_core_user.c - * for data area, the block size is page_size and - * the total size is 256k * page_size. + * for data area, the default block size is page_size and + * the default total size is 256k * page_size. -#define data_block_size page_size -#define data_block_bits_def (256 * 1024) +#define data_block_size (data_pages_per_blk * page_size) +#define data_area_pages_def (256 * 1024) -#define tcmu_mbs_to_pages(_mbs) (_mbs << (20 - page_shift)) +#define tcmu_mbs_to_pages(_mbs) ((size_t)_mbs << (20 - page_shift)) - size_t data_size; + int data_area_mb; - int prev_dbi, int *iov_cnt) + int prev_dbi, int length, int *iov_cnt) + xa_state(xas, &udev->data_pages, 0); - int dbi; + int i, cnt, dbi; + int page_cnt = div_round_up(length, page_size); - page = xa_load(&udev->data_pages, dbi); - if (!page) { - if (atomic_add_return(1, &global_page_count) > - tcmu_global_max_pages) - schedule_delayed_work(&tcmu_unmap_work, 0); + /* count the number of already allocated pages */ + xas_set(&xas, dbi * data_pages_per_blk); + for (cnt = 0; xas_next(&xas) && cnt < page_cnt;) + cnt++; + for (i = cnt; i < page_cnt; i++) { - goto err_alloc; + break; - if (xa_store(&udev->data_pages, dbi, page, gfp_noio)) - goto err_insert; + if (xa_store(&udev->data_pages, dbi * data_pages_per_blk + i, + page, gfp_noio)) { + __free_page(page); + break; + } + if (atomic_add_return(i - cnt, &global_page_count) > + tcmu_global_max_pages) + schedule_delayed_work(&tcmu_unmap_work, 0); - if (dbi > udev->dbi_max) + if (i && dbi > udev->dbi_max) - return dbi; -err_insert: - __free_page(page); -err_alloc: - atomic_dec(&global_page_count); - return -1; + return i == page_cnt ? 
dbi : -1; - struct tcmu_cmd *tcmu_cmd, int dbi_cnt) + struct tcmu_cmd *tcmu_cmd, int length) - int i, iov_cnt = 0; + int blk_len, iov_cnt = 0; - for (i = 0; i < dbi_cnt; i++) { - dbi = tcmu_get_empty_block(udev, tcmu_cmd, dbi, &iov_cnt); + for (; length > 0; length -= data_block_size) { + blk_len = min_t(int, length, data_block_size); + dbi = tcmu_get_empty_block(udev, tcmu_cmd, dbi, blk_len, &iov_cnt); + xa_state(xas, &udev->data_pages, 0); - size_t block_remaining, cp_len; + size_t page_remaining, cp_len; + int page_cnt, page_inx; - page = tcmu_get_block_page(udev, dbi); - if (direction == tcmu_data_area_to_sg) - flush_dcache_page(page); - data_page_start = kmap_atomic(page); - block_remaining = data_block_size; - - while (block_remaining && data_len) { - if (!sg_miter_next(&sg_iter)) { - /* set length to 0 to abort outer loop */ - data_len = 0; - pr_debug("tcmu_move_data: aborting data copy due to exhausted sg_list "); - break; + + page_cnt = div_round_up(data_len, page_size); + if (page_cnt > data_pages_per_blk) + page_cnt = data_pages_per_blk; + + xas_set(&xas, dbi * data_pages_per_blk); + for (page_inx = 0; page_inx < page_cnt && data_len; page_inx++) { + page = xas_next(&xas); + + if (direction == tcmu_data_area_to_sg) + flush_dcache_page(page); + data_page_start = kmap_atomic(page); + page_remaining = page_size; + + while (page_remaining && data_len) { + if (!sg_miter_next(&sg_iter)) { + /* set length to 0 to abort outer loop */ + data_len = 0; + pr_debug("%s: aborting data copy due to exhausted sg_list ", + __func__); + break; + } + cp_len = min3(sg_iter.length, page_remaining, + data_len); + + data_addr = data_page_start + + page_size - page_remaining; + if (direction == tcmu_sg_to_data_area) + memcpy(data_addr, sg_iter.addr, cp_len); + else + memcpy(sg_iter.addr, data_addr, cp_len); + + data_len -= cp_len; + page_remaining -= cp_len; + sg_iter.consumed = cp_len; - cp_len = min3(sg_iter.length, block_remaining, data_len); + sg_miter_stop(&sg_iter); - 
data_addr = data_page_start + - data_block_size - block_remaining; + kunmap_atomic(data_page_start); - memcpy(data_addr, sg_iter.addr, cp_len); - else - memcpy(sg_iter.addr, data_addr, cp_len); - - data_len -= cp_len; - block_remaining -= cp_len; - sg_iter.consumed = cp_len; + flush_dcache_page(page); - sg_miter_stop(&sg_iter); - - kunmap_atomic(data_page_start); - if (direction == tcmu_sg_to_data_area) - flush_dcache_page(page); - iov_cnt = tcmu_get_empty_blocks(udev, cmd, - cmd->dbi_cnt - cmd->dbi_bidi_cnt); + iov_cnt = tcmu_get_empty_blocks(udev, cmd, cmd->se_cmd->data_length); - ret = tcmu_get_empty_blocks(udev, cmd, cmd->dbi_bidi_cnt); + ret = tcmu_get_empty_blocks(udev, cmd, cmd->data_len_bidi); - if (data_length > udev->data_size) { + if (data_length > udev->max_blocks * data_block_size) { - data_length, udev->data_size); + data_length, udev->max_blocks * data_block_size); - udev->max_blocks = data_block_bits_def; + udev->max_blocks = data_area_pages_def / data_pages_per_blk; + udev->data_area_mb = tcmu_pages_to_mbs(data_area_pages_def); -static void tcmu_blocks_release(struct xarray *blocks, unsigned long first, +static u32 tcmu_blocks_release(struct xarray *blocks, unsigned long first, - xa_state(xas, blocks, first); + xa_state(xas, blocks, first * data_pages_per_blk); + u32 pages_freed = 0; - xas_for_each(&xas, page, last) { + xas_for_each(&xas, page, (last + 1) * data_pages_per_blk - 1) { - atomic_dec(&global_page_count); + pages_freed++; + + atomic_sub(pages_freed, &global_page_count); + + return pages_freed; + size_t data_size; - udev->data_size = udev->max_blocks * data_block_size; - udev->mmap_pages = (udev->data_size + mb_cmdr_size) >> page_shift; + data_size = tcmu_mbs_to_pages(udev->data_area_mb) << page_shift; + udev->mmap_pages = (data_size + mb_cmdr_size) >> page_shift; - warn_on(udev->data_size % page_size); - warn_on(udev->data_size % data_block_size); + warn_on(data_size % page_size); - info->mem[0].size = udev->data_size + mb_cmdr_size; + 
info->mem[0].size = data_size + mb_cmdr_size; - int val, ret, blks; + int val, ret; - - blks = tcmu_mbs_to_pages(val) / data_pages_per_blk; - if (blks <= 0) { + if (val <= 0) { + if (val > tcmu_pages_to_mbs(tcmu_global_max_pages)) { + pr_err("%d is too large. adjusting max_data_area_mb to global limit of %u ", + val, tcmu_pages_to_mbs(tcmu_global_max_pages)); + val = tcmu_pages_to_mbs(tcmu_global_max_pages); + } + if (tcmu_mbs_to_pages(val) < data_pages_per_blk) { + pr_err("invalid max_data_area %d (%zu pages): smaller than data_pages_per_blk (%d pages). ", + val, tcmu_mbs_to_pages(val), data_pages_per_blk); + return -einval; + } - udev->max_blocks = blks; - if (udev->max_blocks * data_pages_per_blk > tcmu_global_max_pages) { - pr_err("%d is too large. adjusting max_data_area_mb to global limit of %u ", - val, tcmu_pages_to_mbs(tcmu_global_max_pages)); - udev->max_blocks = tcmu_global_max_pages / data_pages_per_blk; - } + udev->data_area_mb = val; + udev->max_blocks = tcmu_mbs_to_pages(val) / data_pages_per_blk; - bl += sprintf(b + bl, "maxdataareamb: %u ", - tcmu_pages_to_mbs(udev->max_blocks * data_pages_per_blk)); + bl += sprintf(b + bl, "maxdataareamb: %u ", udev->data_area_mb); - return snprintf(page, page_size, "%u ", - tcmu_pages_to_mbs(udev->max_blocks * data_pages_per_blk)); + return snprintf(page, page_size, "%u ", udev->data_area_mb); - u32 start, end, block, total_freed = 0; + u32 pages_freed, total_pages_freed = 0; + u32 start, end, block, total_blocks_freed = 0; - tcmu_blocks_release(&udev->data_pages, start, end - 1); + pages_freed = tcmu_blocks_release(&udev->data_pages, start, end - 1); - total_freed += end - start; - pr_debug("freed %u blocks (total %u) from %s. ", end - start, - total_freed, udev->name); + total_pages_freed += pages_freed; + total_blocks_freed += end - start; + pr_debug("freed %u pages (total %u) from %u blocks (total %u) from %s. ", + pages_freed, total_pages_freed, end - start, + total_blocks_freed, udev->name);
|
Storage
|
f5ce815f34bc97b92f5605eced806f1d32e1d602
|
bodo stroesser
|
drivers
|
target
| |
scsi: ufs: ufs-debugfs: add user-defined exception event rate limiting
|
an enabled user-specified exception event that does not clear quickly will repeatedly cause the handler to run. that could unduly disturb the driver behaviour being tested or debugged. to prevent that, add the debugfs file exception_event_rate_limit_ms. when an exception event happens, it is disabled, and after a period of time (default 20 ms) it is enabled again.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang's control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add user-defined exception event rate limiting
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['scsi ', 'ufs', 'ufs-debugfs']
|
['h', 'c']
| 4
| 53
| 2
|
--- diff --git a/drivers/scsi/ufs/ufs-debugfs.c b/drivers/scsi/ufs/ufs-debugfs.c --- a/drivers/scsi/ufs/ufs-debugfs.c +++ b/drivers/scsi/ufs/ufs-debugfs.c +void ufs_debugfs_exception_event(struct ufs_hba *hba, u16 status) +{ + bool chgd = false; + u16 ee_ctrl_mask; + int err = 0; + + if (!hba->debugfs_ee_rate_limit_ms || !status) + return; + + mutex_lock(&hba->ee_ctrl_mutex); + ee_ctrl_mask = hba->ee_drv_mask | (hba->ee_usr_mask & ~status); + chgd = ee_ctrl_mask != hba->ee_ctrl_mask; + if (chgd) { + err = __ufshcd_write_ee_control(hba, ee_ctrl_mask); + if (err) + dev_err(hba->dev, "%s: failed to write ee control %d ", + __func__, err); + } + mutex_unlock(&hba->ee_ctrl_mutex); + + if (chgd && !err) { + unsigned long delay = msecs_to_jiffies(hba->debugfs_ee_rate_limit_ms); + + queue_delayed_work(system_freezable_wq, &hba->debugfs_ee_work, delay); + } +} + +static void ufs_debugfs_restart_ee(struct work_struct *work) +{ + struct ufs_hba *hba = container_of(work, struct ufs_hba, debugfs_ee_work.work); + + if (!hba->ee_usr_mask || pm_runtime_suspended(hba->dev) || + ufs_debugfs_get_user_access(hba)) + return; + ufshcd_write_ee_control(hba); + ufs_debugfs_put_user_access(hba); +} + + /* set default exception event rate limit period to 20ms */ + hba->debugfs_ee_rate_limit_ms = 20; + init_delayed_work(&hba->debugfs_ee_work, ufs_debugfs_restart_ee); + debugfs_create_u32("exception_event_rate_limit_ms", 0600, hba->debugfs_root, + &hba->debugfs_ee_rate_limit_ms); + cancel_delayed_work_sync(&hba->debugfs_ee_work); diff --git a/drivers/scsi/ufs/ufs-debugfs.h b/drivers/scsi/ufs/ufs-debugfs.h --- a/drivers/scsi/ufs/ufs-debugfs.h +++ b/drivers/scsi/ufs/ufs-debugfs.h +void ufs_debugfs_exception_event(struct ufs_hba *hba, u16 status); +static inline void ufs_debugfs_exception_event(struct ufs_hba *hba, u16 status) {} diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c --- a/drivers/scsi/ufs/ufshcd.c +++ b/drivers/scsi/ufs/ufshcd.c -static int 
__ufshcd_write_ee_control(struct ufs_hba *hba, u32 ee_ctrl_mask) +int __ufshcd_write_ee_control(struct ufs_hba *hba, u32 ee_ctrl_mask) -static int ufshcd_write_ee_control(struct ufs_hba *hba) +int ufshcd_write_ee_control(struct ufs_hba *hba) + ufs_debugfs_exception_event(hba, status); diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h --- a/drivers/scsi/ufs/ufshcd.h +++ b/drivers/scsi/ufs/ufshcd.h + struct delayed_work debugfs_ee_work; + u32 debugfs_ee_rate_limit_ms; +int __ufshcd_write_ee_control(struct ufs_hba *hba, u32 ee_ctrl_mask); +int ufshcd_write_ee_control(struct ufs_hba *hba);
|
Storage
|
7deedfdaeccfec5a9c41dbb83f1725cf11e3ff39
|
adrian hunter
|
drivers
|
scsi
|
ufs
|
scsi: ufs: ufs-debugfs: add user-defined exception_event_mask
|
allow users to enable specific exception events via debugfs.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang's control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add user-defined exception_event_mask
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['scsi ', 'ufs', 'ufs-debugfs']
|
['h', 'c']
| 3
| 120
| 34
|
--- diff --git a/drivers/scsi/ufs/ufs-debugfs.c b/drivers/scsi/ufs/ufs-debugfs.c --- a/drivers/scsi/ufs/ufs-debugfs.c +++ b/drivers/scsi/ufs/ufs-debugfs.c +static int ee_usr_mask_get(void *data, u64 *val) +{ + struct ufs_hba *hba = data; + + *val = hba->ee_usr_mask; + return 0; +} + +static int ufs_debugfs_get_user_access(struct ufs_hba *hba) +__acquires(&hba->host_sem) +{ + down(&hba->host_sem); + if (!ufshcd_is_user_access_allowed(hba)) { + up(&hba->host_sem); + return -ebusy; + } + pm_runtime_get_sync(hba->dev); + return 0; +} + +static void ufs_debugfs_put_user_access(struct ufs_hba *hba) +__releases(&hba->host_sem) +{ + pm_runtime_put_sync(hba->dev); + up(&hba->host_sem); +} + +static int ee_usr_mask_set(void *data, u64 val) +{ + struct ufs_hba *hba = data; + int err; + + if (val & ~(u64)mask_ee_status) + return -einval; + err = ufs_debugfs_get_user_access(hba); + if (err) + return err; + err = ufshcd_update_ee_usr_mask(hba, val, mask_ee_status); + ufs_debugfs_put_user_access(hba); + return err; +} + +define_debugfs_attribute(ee_usr_mask_fops, ee_usr_mask_get, ee_usr_mask_set, "%#llx "); + + debugfs_create_file("exception_event_mask", 0600, hba->debugfs_root, + hba, &ee_usr_mask_fops); diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c --- a/drivers/scsi/ufs/ufshcd.c +++ b/drivers/scsi/ufs/ufshcd.c +static int __ufshcd_write_ee_control(struct ufs_hba *hba, u32 ee_ctrl_mask) +{ + return ufshcd_query_attr_retry(hba, upiu_query_opcode_write_attr, + query_attr_idn_ee_control, 0, 0, + &ee_ctrl_mask); +} + +static int ufshcd_write_ee_control(struct ufs_hba *hba) +{ + int err; + + mutex_lock(&hba->ee_ctrl_mutex); + err = __ufshcd_write_ee_control(hba, hba->ee_ctrl_mask); + mutex_unlock(&hba->ee_ctrl_mutex); + if (err) + dev_err(hba->dev, "%s: failed to write ee control %d ", + __func__, err); + return err; +} + +int ufshcd_update_ee_control(struct ufs_hba *hba, u16 *mask, u16 *other_mask, + u16 set, u16 clr) +{ + u16 new_mask, ee_ctrl_mask; + int err 
= 0; + + mutex_lock(&hba->ee_ctrl_mutex); + new_mask = (*mask & ~clr) | set; + ee_ctrl_mask = new_mask | *other_mask; + if (ee_ctrl_mask != hba->ee_ctrl_mask) + err = __ufshcd_write_ee_control(hba, ee_ctrl_mask); + /* still need to update 'mask' even if 'ee_ctrl_mask' was unchanged */ + if (!err) { + hba->ee_ctrl_mask = ee_ctrl_mask; + *mask = new_mask; + } + mutex_unlock(&hba->ee_ctrl_mutex); + return err; +} + -static int ufshcd_disable_ee(struct ufs_hba *hba, u16 mask) +static inline int ufshcd_disable_ee(struct ufs_hba *hba, u16 mask) - int err = 0; - u32 val; - - if (!(hba->ee_ctrl_mask & mask)) - goto out; - - val = hba->ee_ctrl_mask & ~mask; - val &= mask_ee_status; - err = ufshcd_query_attr_retry(hba, upiu_query_opcode_write_attr, - query_attr_idn_ee_control, 0, 0, &val); - if (!err) - hba->ee_ctrl_mask &= ~mask; -out: - return err; + return ufshcd_update_ee_drv_mask(hba, 0, mask); -static int ufshcd_enable_ee(struct ufs_hba *hba, u16 mask) +static inline int ufshcd_enable_ee(struct ufs_hba *hba, u16 mask) - int err = 0; - u32 val; - - if (hba->ee_ctrl_mask & mask) - goto out; - - val = hba->ee_ctrl_mask | mask; - val &= mask_ee_status; - err = ufshcd_query_attr_retry(hba, upiu_query_opcode_write_attr, - query_attr_idn_ee_control, 0, 0, &val); - if (!err) - hba->ee_ctrl_mask |= mask; -out: - return err; + return ufshcd_update_ee_drv_mask(hba, mask, 0); - status &= hba->ee_ctrl_mask; - - if (status & mask_ee_urgent_bkops) + if (status & hba->ee_drv_mask & mask_ee_urgent_bkops) + if (hba->ee_usr_mask) + ufshcd_write_ee_control(hba); + if (hba->ee_usr_mask) + ufshcd_write_ee_control(hba); + + /* initialize mutex for exception event control */ + mutex_init(&hba->ee_ctrl_mutex); + diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h --- a/drivers/scsi/ufs/ufshcd.h +++ b/drivers/scsi/ufs/ufshcd.h - u16 ee_ctrl_mask; + u16 ee_ctrl_mask; /* exception event mask */ + u16 ee_drv_mask; /* exception event mask for driver */ + u16 ee_usr_mask; /* exception 
event mask for user (via debugfs) */ + struct mutex ee_ctrl_mutex; +int ufshcd_update_ee_control(struct ufs_hba *hba, u16 *mask, u16 *other_mask, + u16 set, u16 clr); + +static inline int ufshcd_update_ee_drv_mask(struct ufs_hba *hba, + u16 set, u16 clr) +{ + return ufshcd_update_ee_control(hba, &hba->ee_drv_mask, + &hba->ee_usr_mask, set, clr); +} + +static inline int ufshcd_update_ee_usr_mask(struct ufs_hba *hba, + u16 set, u16 clr) +{ + return ufshcd_update_ee_control(hba, &hba->ee_usr_mask, + &hba->ee_drv_mask, set, clr); +} +
|
Storage
|
cd4694756188dcca0f631e60da26053be1ffdc91
|
adrian hunter
|
drivers
|
scsi
|
ufs
|
scsi: ufs: ufs-pci: add support for intel lkf
|
add pci id and callbacks to support intel lkf.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for intel lkf
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['scsi', 'ufs', 'ufs-pci']
|
['c']
| 1
| 169
| 0
|
--- diff --git a/drivers/scsi/ufs/ufshcd-pci.c b/drivers/scsi/ufs/ufshcd-pci.c --- a/drivers/scsi/ufs/ufshcd-pci.c +++ b/drivers/scsi/ufs/ufshcd-pci.c +#include <linux/uuid.h> +#include <linux/acpi.h> +#include <linux/gpio/consumer.h> + +struct ufs_host { + void (*late_init)(struct ufs_hba *hba); +}; + +enum { + intel_dsm_fns = 0, + intel_dsm_reset = 1, +}; + struct ufs_host ufs_host; + u32 dsm_fns; + struct gpio_desc *reset_gpio; +static const guid_t intel_dsm_guid = + guid_init(0x1a4832a0, 0x7d03, 0x43ca, + 0xb0, 0x20, 0xf6, 0xdc, 0xd1, 0x2a, 0x19, 0x50); + +static int __intel_dsm(struct intel_host *intel_host, struct device *dev, + unsigned int fn, u32 *result) +{ + union acpi_object *obj; + int err = 0; + size_t len; + + obj = acpi_evaluate_dsm(acpi_handle(dev), &intel_dsm_guid, 0, fn, null); + if (!obj) + return -eopnotsupp; + + if (obj->type != acpi_type_buffer || obj->buffer.length < 1) { + err = -einval; + goto out; + } + + len = min_t(size_t, obj->buffer.length, 4); + + *result = 0; + memcpy(result, obj->buffer.pointer, len); +out: + acpi_free(obj); + + return err; +} + +static int intel_dsm(struct intel_host *intel_host, struct device *dev, + unsigned int fn, u32 *result) +{ + if (fn > 31 || !(intel_host->dsm_fns & (1 << fn))) + return -eopnotsupp; + + return __intel_dsm(intel_host, dev, fn, result); +} + +static void intel_dsm_init(struct intel_host *intel_host, struct device *dev) +{ + int err; + + err = __intel_dsm(intel_host, dev, intel_dsm_fns, &intel_host->dsm_fns); + dev_dbg(dev, "dsm fns %#x, error %d ", intel_host->dsm_fns, err); +} + +static int ufs_intel_hce_enable_notify(struct ufs_hba *hba, + enum ufs_notify_change_status status) +{ + /* cannot enable ice until after hc enable */ + if (status == post_change && hba->caps & ufshcd_cap_crypto) { + u32 hce = ufshcd_readl(hba, reg_controller_enable); + + hce |= crypto_general_enable; + ufshcd_writel(hba, hce, reg_controller_enable); + } + + return 0; +} + +static int ufs_intel_device_reset(struct 
ufs_hba *hba) +{ + struct intel_host *host = ufshcd_get_variant(hba); + + if (host->dsm_fns & intel_dsm_reset) { + u32 result = 0; + int err; + + err = intel_dsm(host, hba->dev, intel_dsm_reset, &result); + if (!err && !result) + err = -eio; + if (err) + dev_err(hba->dev, "%s: dsm error %d result %u ", + __func__, err, result); + return err; + } + + if (!host->reset_gpio) + return -eopnotsupp; + + gpiod_set_value_cansleep(host->reset_gpio, 1); + usleep_range(10, 15); + + gpiod_set_value_cansleep(host->reset_gpio, 0); + usleep_range(10, 15); + + return 0; +} + +static struct gpio_desc *ufs_intel_get_reset_gpio(struct device *dev) +{ + /* gpio in _dsd has active low setting */ + return devm_gpiod_get_optional(dev, "reset", gpiod_out_low); +} + + intel_dsm_init(host, hba->dev); + if (host->dsm_fns & intel_dsm_reset) { + if (hba->vops->device_reset) + hba->caps |= ufshcd_cap_deepsleep; + } else { + if (hba->vops->device_reset) + host->reset_gpio = ufs_intel_get_reset_gpio(hba->dev); + if (is_err(host->reset_gpio)) { + dev_err(hba->dev, "%s: failed to get reset gpio, error %ld ", + __func__, ptr_err(host->reset_gpio)); + host->reset_gpio = null; + } + if (host->reset_gpio) { + gpiod_set_value_cansleep(host->reset_gpio, 0); + hba->caps |= ufshcd_cap_deepsleep; + } + } +static void ufs_intel_lkf_late_init(struct ufs_hba *hba) +{ + /* lkf always needs a full reset, so set pm accordingly */ + if (hba->caps & ufshcd_cap_deepsleep) { + hba->spm_lvl = ufs_pm_lvl_6; + hba->rpm_lvl = ufs_pm_lvl_6; + } else { + hba->spm_lvl = ufs_pm_lvl_5; + hba->rpm_lvl = ufs_pm_lvl_5; + } +} + +static int ufs_intel_lkf_init(struct ufs_hba *hba) +{ + struct ufs_host *ufs_host; + int err; + + hba->quirks |= ufshcd_quirk_broken_auto_hibern8; + hba->caps |= ufshcd_cap_crypto; + err = ufs_intel_common_init(hba); + ufs_host = ufshcd_get_variant(hba); + ufs_host->late_init = ufs_intel_lkf_late_init; + return err; +} + +static struct ufs_hba_variant_ops ufs_intel_lkf_hba_vops = { + .name = "intel-pci", 
+ .init = ufs_intel_lkf_init, + .exit = ufs_intel_common_exit, + .hce_enable_notify = ufs_intel_hce_enable_notify, + .link_startup_notify = ufs_intel_link_startup_notify, + .resume = ufs_intel_resume, + .device_reset = ufs_intel_device_reset, +}; + + struct ufs_host *ufs_host; + ufs_host = ufshcd_get_variant(hba); + if (ufs_host && ufs_host->late_init) + ufs_host->late_init(hba); + + { pci_vdevice(intel, 0x98fa), (kernel_ulong_t)&ufs_intel_lkf_hba_vops },
|
Storage
|
b2c57925df1ffc9c930629a39c1680035f735ffb
|
adrian hunter
|
drivers
|
scsi
|
ufs
|
buslogic: remove isa support
|
the isa support in buslogic has been broken for a long time, as the whole i/o path expects a struct device for dma mapping that is derived from the pci device, which would simply crash for isa adapters.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
remove isa support
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['buslogic']
|
['kconfig', 'h', 'c', 'rst']
| 4
| 7
| 215
|
--- diff --git a/documentation/scsi/buslogic.rst b/documentation/scsi/buslogic.rst --- a/documentation/scsi/buslogic.rst +++ b/documentation/scsi/buslogic.rst -bt-545c isa fast scsi-2 -bt-540cf isa fast scsi-2 -bt-545s isa fast scsi-2 -bt-542d isa differential fast scsi-2 -bt-542b isa scsi-2 (542b revision h) -bt-542b isa scsi-2 (542b revisions a - g) -io:<integer> - - the "io:" option specifies an isa i/o address to be probed for a non-pci - multimaster host adapter. if neither "io:" nor "noprobeisa" options are - specified, then the standard list of buslogic multimaster isa i/o addresses - will be probed (0x330, 0x334, 0x230, 0x234, 0x130, and 0x134). multiple - "io:" options may be specified to precisely determine the i/o addresses to - be probed, but the probe order will always follow the standard list. - -noprobeisa - - the "noprobeisa" option disables probing of the standard buslogic isa i/o - addresses and therefore only pci multimaster and flashpoint host adapters - will be detected. - - capabilities of the detected target devices. for host adapters that - require isa bounce buffers, the queue depth is automatically set by default - to buslogic_taggedqueuedepthbb or buslogic_untaggedqueuedepthbb to avoid - excessive preallocation of dma bounce buffer memory. target devices that + capabilities of the detected target devices. target devices that diff --git a/drivers/scsi/buslogic.c b/drivers/scsi/buslogic.c --- a/drivers/scsi/buslogic.c +++ b/drivers/scsi/buslogic.c -/* - blogic_add_probeaddr_isa appends a single isa i/o address to the list - of i/o address and bus probe information to be checked for potential buslogic - host adapters. 
-*/ - -static void __init blogic_add_probeaddr_isa(unsigned long io_addr) -{ - struct blogic_probeinfo *probeinfo; - if (blogic_probeinfo_count >= blogic_max_adapters) - return; - probeinfo = &blogic_probeinfo_list[blogic_probeinfo_count++]; - probeinfo->adapter_type = blogic_multimaster; - probeinfo->adapter_bus_type = blogic_isa_bus; - probeinfo->io_addr = io_addr; - probeinfo->pci_device = null; -} - - -/* - blogic_init_probeinfo_isa initializes the list of i/o address and - bus probe information to be checked for potential buslogic scsi host adapters - only from the list of standard buslogic multimaster isa i/o addresses. -*/ - -static void __init blogic_init_probeinfo_isa(struct blogic_adapter *adapter) -{ - /* - if buslogic driver options specifications requested that isa - bus probes be inhibited, do not proceed further. - */ - if (blogic_probe_options.noprobe_isa) - return; - /* - append the list of standard buslogic multimaster isa i/o addresses. - */ - if (!blogic_probe_options.limited_isa || blogic_probe_options.probe330) - blogic_add_probeaddr_isa(0x330); - if (!blogic_probe_options.limited_isa || blogic_probe_options.probe334) - blogic_add_probeaddr_isa(0x334); - if (!blogic_probe_options.limited_isa || blogic_probe_options.probe230) - blogic_add_probeaddr_isa(0x230); - if (!blogic_probe_options.limited_isa || blogic_probe_options.probe234) - blogic_add_probeaddr_isa(0x234); - if (!blogic_probe_options.limited_isa || blogic_probe_options.probe130) - blogic_add_probeaddr_isa(0x130); - if (!blogic_probe_options.limited_isa || blogic_probe_options.probe134) - blogic_add_probeaddr_isa(0x134); -} - - -#ifdef config_pci - - - bool addr_seen[6]; - for (i = 0; i < 6; i++) - addr_seen[i] = false; - &adapter_info, sizeof(adapter_info)) == - sizeof(adapter_info)) { - if (adapter_info.isa_port < 6) - addr_seen[adapter_info.isa_port] = true; - } else + &adapter_info, sizeof(adapter_info)) != + sizeof(adapter_info)) - /* - if no pci multimaster host adapter is 
assigned the primary - i/o address, then the primary i/o address must be probed - explicitly before any pci host adapters are probed. - */ - if (!blogic_probe_options.noprobe_isa) - if (pr_probeinfo->io_addr == 0 && - (!blogic_probe_options.limited_isa || - blogic_probe_options.probe330)) { - pr_probeinfo->adapter_type = blogic_multimaster; - pr_probeinfo->adapter_bus_type = blogic_isa_bus; - pr_probeinfo->io_addr = 0x330; - } - /* - append the list of standard buslogic multimaster isa i/o addresses, - omitting the primary i/o address which has already been handled. - */ - if (!blogic_probe_options.noprobe_isa) { - if (!addr_seen[1] && - (!blogic_probe_options.limited_isa || - blogic_probe_options.probe334)) - blogic_add_probeaddr_isa(0x334); - if (!addr_seen[2] && - (!blogic_probe_options.limited_isa || - blogic_probe_options.probe230)) - blogic_add_probeaddr_isa(0x230); - if (!addr_seen[3] && - (!blogic_probe_options.limited_isa || - blogic_probe_options.probe234)) - blogic_add_probeaddr_isa(0x234); - if (!addr_seen[4] && - (!blogic_probe_options.limited_isa || - blogic_probe_options.probe130)) - blogic_add_probeaddr_isa(0x130); - if (!addr_seen[5] && - (!blogic_probe_options.limited_isa || - blogic_probe_options.probe134)) - blogic_add_probeaddr_isa(0x134); - } - } else { - blogic_init_probeinfo_isa(adapter); -#else -#define blogic_init_probeinfo_list(adapter) \ - blogic_init_probeinfo_isa(adapter) -#endif /* config_pci */ - - - if (adapter->adapter_bus_type == blogic_isa_bus) { - if (config.dma_ch5) - adapter->dma_ch = 5; - else if (config.dma_ch6) - adapter->dma_ch = 6; - else if (config.dma_ch7) - adapter->dma_ch = 7; - } - adapter->adapter_qdepth = (adapter->adapter_bus_type != - blogic_isa_bus ? 100 : 50); + adapter->adapter_qdepth = 100; - /* - isa host adapters require bounce buffers if there is more than - 16mb memory. 
- */ - if (adapter->adapter_bus_type == blogic_isa_bus && - (void *) high_memory > (void *) max_dma_address) - adapter->need_bouncebuf = true; - blogic_info(" dma channel: ", adapter); - if (adapter->dma_ch > 0) - blogic_info("%d, ", adapter, adapter->dma_ch); - else - blogic_info("none, ", adapter); + blogic_info(" dma channel: none, ", adapter); - /* - acquire exclusive access to the dma channel. - */ - if (adapter->dma_ch > 0) { - if (request_dma(adapter->dma_ch, adapter->full_model) < 0) { - blogic_err("unable to acquire dma channel %d - detaching ", adapter, adapter->dma_ch); - return false; - } - set_dma_mode(adapter->dma_ch, dma_mode_cascade); - enable_dma(adapter->dma_ch); - adapter->dma_chan_acquired = true; - } - /* - release exclusive access to the dma channel. - */ - if (adapter->dma_chan_acquired) - free_dma(adapter->dma_ch); - /* probing options. */ - if (blogic_parse(&options, "io:")) { - unsigned long io_addr = simple_strtoul(options, - &options, 0); - blogic_probe_options.limited_isa = true; - switch (io_addr) { - case 0x330: - blogic_probe_options.probe330 = true; - break; - case 0x334: - blogic_probe_options.probe334 = true; - break; - case 0x230: - blogic_probe_options.probe230 = true; - break; - case 0x234: - blogic_probe_options.probe234 = true; - break; - case 0x130: - blogic_probe_options.probe130 = true; - break; - case 0x134: - blogic_probe_options.probe134 = true; - break; - default: - blogic_err("buslogic: invalid driver options (invalid i/o address 0x%lx) ", null, io_addr); - return 0; - } - } else if (blogic_parse(&options, "noprobeisa")) - blogic_probe_options.noprobe_isa = true; - else if (blogic_parse(&options, "noprobepci")) + if (blogic_parse(&options, "noprobepci")) diff --git a/drivers/scsi/buslogic.h b/drivers/scsi/buslogic.h --- a/drivers/scsi/buslogic.h +++ b/drivers/scsi/buslogic.h - bool noprobe_isa:1; /* bit 1 */ - bool limited_isa:1; /* bit 6 */ - bool probe330:1; /* bit 7 */ - bool probe334:1; /* bit 8 */ - bool 
probe230:1; /* bit 9 */ - bool probe234:1; /* bit 10 */ - bool probe130:1; /* bit 11 */ - bool probe134:1; /* bit 12 */ - unsigned char dma_ch; - bool dma_chan_acquired:1; diff --git a/drivers/scsi/kconfig b/drivers/scsi/kconfig --- a/drivers/scsi/kconfig +++ b/drivers/scsi/kconfig - depends on (pci || isa) && scsi && isa_dma_api && virt_to_bus + depends on pci && scsi && virt_to_bus
|
Storage
|
8cad3b66bff4ee7c7d52b9a663cb6a2c5f66a7f7
|
christoph hellwig, martin k petersen <martin.petersen@oracle.com>, khalid aziz <khalid@gonehiking.org>, hannes reinecke <hare@suse.de>
|
drivers
|
scsi
| |
advansys: remove isa support
|
this is the last piece in the kernel requiring the block layer isa bounce buffering, and it does not actually look used. so remove it to see if anyone screams, in which case we'll need to find a solution to fix it back up.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
remove isa support
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['advansys']
|
['c']
| 1
| 32
| 289
|
--- diff --git a/drivers/scsi/advansys.c b/drivers/scsi/advansys.c --- a/drivers/scsi/advansys.c +++ b/drivers/scsi/advansys.c -#define asc_is_isa (0x0001) -#define asc_is_isapnp (0x0081) -#define asc_chip_min_ver_isa (0x11) -#define asc_chip_min_ver_isa_pnp (0x21) -#define asc_chip_max_ver_isa (0x27) -#define asc_chip_ver_isa_bit (0x30) -#define asc_chip_ver_isapnp_bit (0x20) -#define asc_max_isa_dma_count (0x00ffffffl) -#define asc_flag_isa_over_16mb 0x40 - uchar isa_dma_speed; - uchar isa_dma_channel; - * these macros keep the chip scsi id and isa dma speed - * bitfields in board order. c bitfields aren't portable - * between big and little-endian platforms so they are - * not used. + * these macros keep the chip scsi id bitfields in board order. c bitfields + * aren't portable between big and little-endian platforms so they are not used. - printk(" chip_scsi_id %d, isa_dma_speed %d, isa_dma_channel %d, " - "chip_version %d, ", h->chip_scsi_id, h->isa_dma_speed, - h->isa_dma_channel, h->chip_version); + printk(" chip_scsi_id %d, chip_version %d, ", + h->chip_scsi_id, h->chip_version); - printk(" cmd_per_lun %d, sg_tablesize %d, unchecked_isa_dma %d ", - s->cmd_per_lun, s->sg_tablesize, s->unchecked_isa_dma); + printk(" cmd_per_lun %d, sg_tablesize %d ", + s->cmd_per_lun, s->sg_tablesize); - if (asc_dvc_varp->bus_type & asc_is_isa) { - if ((asc_dvc_varp->bus_type & asc_is_isapnp) == - asc_is_isapnp) { - busname = "isa pnp"; + + if (asc_dvc_varp->bus_type & asc_is_vl) { + busname = "vl"; + } else if (asc_dvc_varp->bus_type & asc_is_eisa) { + busname = "eisa"; + } else if (asc_dvc_varp->bus_type & asc_is_pci) { + if ((asc_dvc_varp->bus_type & asc_is_pci_ultra) + == asc_is_pci_ultra) { + busname = "pci ultra"; - busname = "isa"; + busname = "pci"; - sprintf(info, - "advansys scsi %s: %s: io 0x%lx-0x%lx, irq 0x%x, dma 0x%x", - asc_version, busname, - (ulong)shost->io_port, - (ulong)shost->io_port + asc_ioadr_gap - 1, - boardp->irq, shost->dma_channel); - if 
(asc_dvc_varp->bus_type & asc_is_vl) { - busname = "vl"; - } else if (asc_dvc_varp->bus_type & asc_is_eisa) { - busname = "eisa"; - } else if (asc_dvc_varp->bus_type & asc_is_pci) { - if ((asc_dvc_varp->bus_type & asc_is_pci_ultra) - == asc_is_pci_ultra) { - busname = "pci ultra"; - } else { - busname = "pci"; - } - } else { - busname = "?"; - shost_printk(kern_err, shost, "unknown bus " - "type %d ", asc_dvc_varp->bus_type); - } - sprintf(info, - "advansys scsi %s: %s: io 0x%lx-0x%lx, irq 0x%x", - asc_version, busname, (ulong)shost->io_port, - (ulong)shost->io_port + asc_ioadr_gap - 1, - boardp->irq); + busname = "?"; + shost_printk(kern_err, shost, "unknown bus " + "type %d ", asc_dvc_varp->bus_type); + sprintf(info, + "advansys scsi %s: %s: io 0x%lx-0x%lx, irq 0x%x", + asc_version, busname, (ulong)shost->io_port, + (ulong)shost->io_port + asc_ioadr_gap - 1, + boardp->irq); -#ifdef config_isa - asc_dvc_var *asc_dvc_varp; - int isa_dma_speed[] = { 10, 8, 7, 6, 5, 4, 3, 2 }; - asc_dvc_varp = &boardp->dvc_var.asc_dvc_var; -#endif /* config_isa */ - -#ifdef config_isa - if (asc_dvc_varp->bus_type & asc_is_isa) { - seq_printf(m, - " host isa dma speed: %d mb/s ", - isa_dma_speed[asc_eep_get_dma_spd(ep)]); - } -#endif /* config_isa */ - seq_printf(m, - " unchecked_isa_dma %d ", - shost->unchecked_isa_dma); - - - /* - * isa pnp uses the top bit as the 32k bios flag - */ - if (bus_type == asc_is_isapnp) - cfg_lsw &= 0x7fff; -#ifdef config_isa -static void ascenableisadma(uchar dma_channel) -{ - if (dma_channel < 4) { - outp(0x000b, (ushort)(0xc0 | dma_channel)); - outp(0x000a, dma_channel); - } else if (dma_channel < 8) { - outp(0x00d6, (ushort)(0xc0 | (dma_channel - 4))); - outp(0x00d4, (ushort)(dma_channel - 4)); - } -} -#endif /* config_isa */ - - if (bus_type & asc_is_isa) - return asc_max_isa_dma_count; - else if (bus_type & (asc_is_eisa | asc_is_vl)) + if (bus_type & (asc_is_eisa | asc_is_vl)) -#ifdef config_isa -static ushort ascgetisadmachannel(portaddr iop_base) 
-{ - ushort channel; - - channel = ascgetchipcfglsw(iop_base) & 0x0003; - if (channel == 0x03) - return (0); - else if (channel == 0x00) - return (7); - return (channel + 4); -} - -static ushort ascsetisadmachannel(portaddr iop_base, ushort dma_channel) -{ - ushort cfg_lsw; - uchar value; - - if ((dma_channel >= 5) && (dma_channel <= 7)) { - if (dma_channel == 7) - value = 0x00; - else - value = dma_channel - 4; - cfg_lsw = ascgetchipcfglsw(iop_base) & 0xfffc; - cfg_lsw |= value; - ascsetchipcfglsw(iop_base, cfg_lsw); - return (ascgetisadmachannel(iop_base)); - } - return 0; -} - -static uchar ascgetisadmaspeed(portaddr iop_base) -{ - uchar speed_value; - - ascsetbank(iop_base, 1); - speed_value = ascreadchipdmaspeed(iop_base); - speed_value &= 0x07; - ascsetbank(iop_base, 0); - return speed_value; -} - -static uchar ascsetisadmaspeed(portaddr iop_base, uchar speed_value) -{ - speed_value &= 0x07; - ascsetbank(iop_base, 1); - ascwritechipdmaspeed(iop_base, speed_value); - ascsetbank(iop_base, 0); - return ascgetisadmaspeed(iop_base); -} -#endif /* config_isa */ - - (asc_is_isa | asc_is_pci | asc_is_eisa | asc_is_vl)) == 0) { + (asc_is_pci | asc_is_eisa | asc_is_vl)) == 0) { - asc_dvc->cfg->isa_dma_speed = asc_def_isa_dma_speed; -#ifdef config_isa - if ((asc_dvc->bus_type & asc_is_isa) != 0) { - if (chip_version >= asc_chip_min_ver_isa_pnp) { - ascsetchipifc(iop_base, ifc_init_default); - asc_dvc->bus_type = asc_is_isapnp; - } - asc_dvc->cfg->isa_dma_channel = - (uchar)ascgetisadmachannel(iop_base); - } -#endif /* config_isa */ - asc_dvc->cfg->isa_dma_speed = asc_eep_get_dma_spd(eep_config); - if (asc_dvc->bus_type == asc_is_isapnp) { - if (ascgetchipversion(iop_base, asc_dvc->bus_type) - == asc_chip_ver_asyn_bug) { - asc_dvc->bug_fix_cntl |= asc_bug_fix_asyn_use_syn; - } - } -#ifdef config_isa - if (asc_dvc->bus_type & asc_is_isa) { - ascsetisadmachannel(iop_base, asc_dvc->cfg->isa_dma_channel); - ascsetisadmaspeed(iop_base, asc_dvc->cfg->isa_dma_speed); - } 
-#endif /* config_isa */ - /* - * because the driver may control an isa adapter 'unchecked_isa_dma' - * must be set. the flag will be cleared in advansys_board_found - * for non-isa adapters. - */ - .unchecked_isa_dma = true, - case asc_is_isa: - shost->unchecked_isa_dma = true; - share_irq = 0; - break; - shost->unchecked_isa_dma = false; - shost->unchecked_isa_dma = false; - shost->unchecked_isa_dma = false; - shost->unchecked_isa_dma = false; - shost->unchecked_isa_dma = false; - asc_eep_set_dma_spd(ep, asc_dvc_varp->cfg->isa_dma_speed); + asc_eep_set_dma_spd(ep, asc_def_isa_dma_speed); -#ifdef config_isa - if (asc_narrow_board(boardp)) { - /* register dma channel for isa bus. */ - if (asc_dvc_varp->bus_type & asc_is_isa) { - shost->dma_channel = asc_dvc_varp->cfg->isa_dma_channel; - ret = request_dma(shost->dma_channel, drv_name); - if (ret) { - shost_printk(kern_err, shost, "request_dma() " - "%d failed %d ", - shost->dma_channel, ret); - goto err_unmap; - } - ascenableisadma(shost->dma_channel); - } - } -#endif /* config_isa */ - goto err_free_dma; + goto err_unmap; - err_free_dma: -#ifdef config_isa - if (shost->dma_channel != no_isa_dma) - free_dma(shost->dma_channel); -#endif -#ifdef config_isa - if (shost->dma_channel != no_isa_dma) { - asc_dbg(1, "free_dma() "); - free_dma(shost->dma_channel); - } -#endif + -/* - * the isa irq number is found in bits 2 and 3 of the cfglsw. 
it decodes as: - * 00: 10 - * 01: 11 - * 10: 12 - * 11: 15 - */ -static unsigned int advansys_isa_irq_no(portaddr iop_base) -{ - unsigned short cfg_lsw = ascgetchipcfglsw(iop_base); - unsigned int chip_irq = ((cfg_lsw >> 2) & 0x03) + 10; - if (chip_irq == 13) - chip_irq = 15; - return chip_irq; -} - -static int advansys_isa_probe(struct device *dev, unsigned int id) -{ - int err = -enodev; - portaddr iop_base = _asc_def_iop_base[id]; - struct scsi_host *shost; - struct asc_board *board; - - if (!request_region(iop_base, asc_ioadr_gap, drv_name)) { - asc_dbg(1, "i/o port 0x%x busy ", iop_base); - return -enodev; - } - asc_dbg(1, "probing i/o port 0x%x ", iop_base); - if (!ascfindsignature(iop_base)) - goto release_region; - if (!(ascgetchipversion(iop_base, asc_is_isa) & asc_chip_ver_isa_bit)) - goto release_region; - - err = -enomem; - shost = scsi_host_alloc(&advansys_template, sizeof(*board)); - if (!shost) - goto release_region; - - board = shost_priv(shost); - board->irq = advansys_isa_irq_no(iop_base); - board->dev = dev; - board->shost = shost; - - err = advansys_board_found(shost, iop_base, asc_is_isa); - if (err) - goto free_host; - - dev_set_drvdata(dev, shost); - return 0; - - free_host: - scsi_host_put(shost); - release_region: - release_region(iop_base, asc_ioadr_gap); - return err; -} - -static void advansys_isa_remove(struct device *dev, unsigned int id) +static void advansys_vlb_remove(struct device *dev, unsigned int id) -static struct isa_driver advansys_isa_driver = { - .probe = advansys_isa_probe, - .remove = advansys_isa_remove, - .driver = { - .owner = this_module, - .name = drv_name, - }, -}; - - .remove = advansys_isa_remove, + .remove = advansys_vlb_remove, - error = isa_register_driver(&advansys_isa_driver, - asc_ioadr_table_max_ix); - if (error) - goto fail; - - goto unregister_isa; + goto fail; - unregister_isa: - isa_unregister_driver(&advansys_isa_driver); - isa_unregister_driver(&advansys_isa_driver);
|
Storage
|
9b4c8eaa68d0ce85be4ae06cbbd158c53f66fe4f
|
christoph hellwig, martin k petersen <martin.petersen@oracle.com>, matthew wilcox (oracle) <willy@infradead.org>, hannes reinecke <hare@suse.de>
|
drivers
|
scsi
| |
staging: comedi: move out of staging directory
|
the comedi code came into the kernel back in 2008, but traces its lifetime to much much earlier. it's been polished and buffed and there's really nothing preventing it from being part of the "real" portion of the kernel.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
finally move out of staging directory
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['comedi']
|
['c', 'h', 'kconfig', 'todo', 'py', 'gitignore', 'maintainers', 'readme', 'makefile']
| 220
| 9
| 9
|
--- diff --git a/maintainers b/maintainers --- a/maintainers +++ b/maintainers +comedi drivers +m: ian abbott <abbotti@mev.co.uk> +m: h hartley sweeten <hsweeten@visionengravers.com> +s: odd fixes +f: drivers/comedi/ + -staging - comedi -m: ian abbott <abbotti@mev.co.uk> -m: h hartley sweeten <hsweeten@visionengravers.com> -s: odd fixes -f: drivers/staging/comedi/ - diff --git a/drivers/kconfig b/drivers/kconfig --- a/drivers/kconfig +++ b/drivers/kconfig +source "drivers/comedi/kconfig" + diff --git a/drivers/makefile b/drivers/makefile --- a/drivers/makefile +++ b/drivers/makefile +obj-$(config_comedi) += comedi/ diff --git a/drivers/staging/comedi/kconfig b/drivers/comedi/kconfig diff --git a/drivers/staging/comedi/makefile b/drivers/comedi/makefile diff --git a/drivers/staging/comedi/todo b/drivers/comedi/todo diff --git a/drivers/staging/comedi/comedi.h b/drivers/comedi/comedi.h diff --git a/drivers/staging/comedi/comedi_buf.c b/drivers/comedi/comedi_buf.c diff --git a/drivers/staging/comedi/comedi_fops.c b/drivers/comedi/comedi_fops.c diff --git a/drivers/staging/comedi/comedi_internal.h b/drivers/comedi/comedi_internal.h diff --git a/drivers/staging/comedi/comedi_pci.c b/drivers/comedi/comedi_pci.c diff --git a/drivers/staging/comedi/comedi_pci.h b/drivers/comedi/comedi_pci.h diff --git a/drivers/staging/comedi/comedi_pcmcia.c b/drivers/comedi/comedi_pcmcia.c diff --git a/drivers/staging/comedi/comedi_pcmcia.h b/drivers/comedi/comedi_pcmcia.h diff --git a/drivers/staging/comedi/comedi_usb.c b/drivers/comedi/comedi_usb.c diff --git a/drivers/staging/comedi/comedi_usb.h b/drivers/comedi/comedi_usb.h diff --git a/drivers/staging/comedi/comedidev.h b/drivers/comedi/comedidev.h diff --git a/drivers/staging/comedi/comedilib.h b/drivers/comedi/comedilib.h diff --git a/drivers/staging/comedi/drivers.c b/drivers/comedi/drivers.c diff --git a/drivers/staging/comedi/drivers/8255.c b/drivers/comedi/drivers/8255.c diff --git a/drivers/staging/comedi/drivers/8255.h 
b/drivers/comedi/drivers/8255.h diff --git a/drivers/staging/comedi/drivers/8255_pci.c b/drivers/comedi/drivers/8255_pci.c diff --git a/drivers/staging/comedi/drivers/makefile b/drivers/comedi/drivers/makefile diff --git a/drivers/staging/comedi/drivers/addi_apci_1032.c b/drivers/comedi/drivers/addi_apci_1032.c diff --git a/drivers/staging/comedi/drivers/addi_apci_1500.c b/drivers/comedi/drivers/addi_apci_1500.c diff --git a/drivers/staging/comedi/drivers/addi_apci_1516.c b/drivers/comedi/drivers/addi_apci_1516.c diff --git a/drivers/staging/comedi/drivers/addi_apci_1564.c b/drivers/comedi/drivers/addi_apci_1564.c diff --git a/drivers/staging/comedi/drivers/addi_apci_16xx.c b/drivers/comedi/drivers/addi_apci_16xx.c diff --git a/drivers/staging/comedi/drivers/addi_apci_2032.c b/drivers/comedi/drivers/addi_apci_2032.c diff --git a/drivers/staging/comedi/drivers/addi_apci_2200.c b/drivers/comedi/drivers/addi_apci_2200.c diff --git a/drivers/staging/comedi/drivers/addi_apci_3120.c b/drivers/comedi/drivers/addi_apci_3120.c diff --git a/drivers/staging/comedi/drivers/addi_apci_3501.c b/drivers/comedi/drivers/addi_apci_3501.c diff --git a/drivers/staging/comedi/drivers/addi_apci_3xxx.c b/drivers/comedi/drivers/addi_apci_3xxx.c diff --git a/drivers/staging/comedi/drivers/addi_tcw.h b/drivers/comedi/drivers/addi_tcw.h diff --git a/drivers/staging/comedi/drivers/addi_watchdog.c b/drivers/comedi/drivers/addi_watchdog.c diff --git a/drivers/staging/comedi/drivers/addi_watchdog.h b/drivers/comedi/drivers/addi_watchdog.h diff --git a/drivers/staging/comedi/drivers/adl_pci6208.c b/drivers/comedi/drivers/adl_pci6208.c diff --git a/drivers/staging/comedi/drivers/adl_pci7x3x.c b/drivers/comedi/drivers/adl_pci7x3x.c diff --git a/drivers/staging/comedi/drivers/adl_pci8164.c b/drivers/comedi/drivers/adl_pci8164.c diff --git a/drivers/staging/comedi/drivers/adl_pci9111.c b/drivers/comedi/drivers/adl_pci9111.c diff --git a/drivers/staging/comedi/drivers/adl_pci9118.c 
b/drivers/comedi/drivers/adl_pci9118.c diff --git a/drivers/staging/comedi/drivers/adq12b.c b/drivers/comedi/drivers/adq12b.c diff --git a/drivers/staging/comedi/drivers/adv_pci1710.c b/drivers/comedi/drivers/adv_pci1710.c diff --git a/drivers/staging/comedi/drivers/adv_pci1720.c b/drivers/comedi/drivers/adv_pci1720.c diff --git a/drivers/staging/comedi/drivers/adv_pci1723.c b/drivers/comedi/drivers/adv_pci1723.c diff --git a/drivers/staging/comedi/drivers/adv_pci1724.c b/drivers/comedi/drivers/adv_pci1724.c diff --git a/drivers/staging/comedi/drivers/adv_pci1760.c b/drivers/comedi/drivers/adv_pci1760.c diff --git a/drivers/staging/comedi/drivers/adv_pci_dio.c b/drivers/comedi/drivers/adv_pci_dio.c diff --git a/drivers/staging/comedi/drivers/aio_aio12_8.c b/drivers/comedi/drivers/aio_aio12_8.c diff --git a/drivers/staging/comedi/drivers/aio_iiro_16.c b/drivers/comedi/drivers/aio_iiro_16.c diff --git a/drivers/staging/comedi/drivers/amcc_s5933.h b/drivers/comedi/drivers/amcc_s5933.h diff --git a/drivers/staging/comedi/drivers/amplc_dio200.c b/drivers/comedi/drivers/amplc_dio200.c diff --git a/drivers/staging/comedi/drivers/amplc_dio200.h b/drivers/comedi/drivers/amplc_dio200.h diff --git a/drivers/staging/comedi/drivers/amplc_dio200_common.c b/drivers/comedi/drivers/amplc_dio200_common.c diff --git a/drivers/staging/comedi/drivers/amplc_dio200_pci.c b/drivers/comedi/drivers/amplc_dio200_pci.c diff --git a/drivers/staging/comedi/drivers/amplc_pc236.c b/drivers/comedi/drivers/amplc_pc236.c diff --git a/drivers/staging/comedi/drivers/amplc_pc236.h b/drivers/comedi/drivers/amplc_pc236.h diff --git a/drivers/staging/comedi/drivers/amplc_pc236_common.c b/drivers/comedi/drivers/amplc_pc236_common.c diff --git a/drivers/staging/comedi/drivers/amplc_pc263.c b/drivers/comedi/drivers/amplc_pc263.c diff --git a/drivers/staging/comedi/drivers/amplc_pci224.c b/drivers/comedi/drivers/amplc_pci224.c diff --git a/drivers/staging/comedi/drivers/amplc_pci230.c 
b/drivers/comedi/drivers/amplc_pci230.c diff --git a/drivers/staging/comedi/drivers/amplc_pci236.c b/drivers/comedi/drivers/amplc_pci236.c diff --git a/drivers/staging/comedi/drivers/amplc_pci263.c b/drivers/comedi/drivers/amplc_pci263.c diff --git a/drivers/staging/comedi/drivers/c6xdigio.c b/drivers/comedi/drivers/c6xdigio.c diff --git a/drivers/staging/comedi/drivers/cb_das16_cs.c b/drivers/comedi/drivers/cb_das16_cs.c diff --git a/drivers/staging/comedi/drivers/cb_pcidas.c b/drivers/comedi/drivers/cb_pcidas.c diff --git a/drivers/staging/comedi/drivers/cb_pcidas64.c b/drivers/comedi/drivers/cb_pcidas64.c diff --git a/drivers/staging/comedi/drivers/cb_pcidda.c b/drivers/comedi/drivers/cb_pcidda.c diff --git a/drivers/staging/comedi/drivers/cb_pcimdas.c b/drivers/comedi/drivers/cb_pcimdas.c diff --git a/drivers/staging/comedi/drivers/cb_pcimdda.c b/drivers/comedi/drivers/cb_pcimdda.c diff --git a/drivers/staging/comedi/drivers/comedi_8254.c b/drivers/comedi/drivers/comedi_8254.c diff --git a/drivers/staging/comedi/drivers/comedi_8254.h b/drivers/comedi/drivers/comedi_8254.h diff --git a/drivers/staging/comedi/drivers/comedi_8255.c b/drivers/comedi/drivers/comedi_8255.c diff --git a/drivers/staging/comedi/drivers/comedi_bond.c b/drivers/comedi/drivers/comedi_bond.c diff --git a/drivers/staging/comedi/drivers/comedi_isadma.c b/drivers/comedi/drivers/comedi_isadma.c diff --git a/drivers/staging/comedi/drivers/comedi_isadma.h b/drivers/comedi/drivers/comedi_isadma.h diff --git a/drivers/staging/comedi/drivers/comedi_parport.c b/drivers/comedi/drivers/comedi_parport.c diff --git a/drivers/staging/comedi/drivers/comedi_test.c b/drivers/comedi/drivers/comedi_test.c diff --git a/drivers/staging/comedi/drivers/contec_pci_dio.c b/drivers/comedi/drivers/contec_pci_dio.c diff --git a/drivers/staging/comedi/drivers/dac02.c b/drivers/comedi/drivers/dac02.c diff --git a/drivers/staging/comedi/drivers/daqboard2000.c b/drivers/comedi/drivers/daqboard2000.c diff --git 
a/drivers/staging/comedi/drivers/das08.c b/drivers/comedi/drivers/das08.c diff --git a/drivers/staging/comedi/drivers/das08.h b/drivers/comedi/drivers/das08.h diff --git a/drivers/staging/comedi/drivers/das08_cs.c b/drivers/comedi/drivers/das08_cs.c diff --git a/drivers/staging/comedi/drivers/das08_isa.c b/drivers/comedi/drivers/das08_isa.c diff --git a/drivers/staging/comedi/drivers/das08_pci.c b/drivers/comedi/drivers/das08_pci.c diff --git a/drivers/staging/comedi/drivers/das16.c b/drivers/comedi/drivers/das16.c diff --git a/drivers/staging/comedi/drivers/das16m1.c b/drivers/comedi/drivers/das16m1.c diff --git a/drivers/staging/comedi/drivers/das1800.c b/drivers/comedi/drivers/das1800.c diff --git a/drivers/staging/comedi/drivers/das6402.c b/drivers/comedi/drivers/das6402.c diff --git a/drivers/staging/comedi/drivers/das800.c b/drivers/comedi/drivers/das800.c diff --git a/drivers/staging/comedi/drivers/dmm32at.c b/drivers/comedi/drivers/dmm32at.c diff --git a/drivers/staging/comedi/drivers/dt2801.c b/drivers/comedi/drivers/dt2801.c diff --git a/drivers/staging/comedi/drivers/dt2811.c b/drivers/comedi/drivers/dt2811.c diff --git a/drivers/staging/comedi/drivers/dt2814.c b/drivers/comedi/drivers/dt2814.c diff --git a/drivers/staging/comedi/drivers/dt2815.c b/drivers/comedi/drivers/dt2815.c diff --git a/drivers/staging/comedi/drivers/dt2817.c b/drivers/comedi/drivers/dt2817.c diff --git a/drivers/staging/comedi/drivers/dt282x.c b/drivers/comedi/drivers/dt282x.c diff --git a/drivers/staging/comedi/drivers/dt3000.c b/drivers/comedi/drivers/dt3000.c diff --git a/drivers/staging/comedi/drivers/dt9812.c b/drivers/comedi/drivers/dt9812.c diff --git a/drivers/staging/comedi/drivers/dyna_pci10xx.c b/drivers/comedi/drivers/dyna_pci10xx.c diff --git a/drivers/staging/comedi/drivers/fl512.c b/drivers/comedi/drivers/fl512.c diff --git a/drivers/staging/comedi/drivers/gsc_hpdi.c b/drivers/comedi/drivers/gsc_hpdi.c diff --git a/drivers/staging/comedi/drivers/icp_multi.c 
b/drivers/comedi/drivers/icp_multi.c diff --git a/drivers/staging/comedi/drivers/ii_pci20kc.c b/drivers/comedi/drivers/ii_pci20kc.c diff --git a/drivers/staging/comedi/drivers/jr3_pci.c b/drivers/comedi/drivers/jr3_pci.c diff --git a/drivers/staging/comedi/drivers/jr3_pci.h b/drivers/comedi/drivers/jr3_pci.h diff --git a/drivers/staging/comedi/drivers/ke_counter.c b/drivers/comedi/drivers/ke_counter.c diff --git a/drivers/staging/comedi/drivers/me4000.c b/drivers/comedi/drivers/me4000.c diff --git a/drivers/staging/comedi/drivers/me_daq.c b/drivers/comedi/drivers/me_daq.c diff --git a/drivers/staging/comedi/drivers/mf6x4.c b/drivers/comedi/drivers/mf6x4.c diff --git a/drivers/staging/comedi/drivers/mite.c b/drivers/comedi/drivers/mite.c diff --git a/drivers/staging/comedi/drivers/mite.h b/drivers/comedi/drivers/mite.h diff --git a/drivers/staging/comedi/drivers/mpc624.c b/drivers/comedi/drivers/mpc624.c diff --git a/drivers/staging/comedi/drivers/multiq3.c b/drivers/comedi/drivers/multiq3.c diff --git a/drivers/staging/comedi/drivers/ni_6527.c b/drivers/comedi/drivers/ni_6527.c diff --git a/drivers/staging/comedi/drivers/ni_65xx.c b/drivers/comedi/drivers/ni_65xx.c diff --git a/drivers/staging/comedi/drivers/ni_660x.c b/drivers/comedi/drivers/ni_660x.c diff --git a/drivers/staging/comedi/drivers/ni_670x.c b/drivers/comedi/drivers/ni_670x.c diff --git a/drivers/staging/comedi/drivers/ni_at_a2150.c b/drivers/comedi/drivers/ni_at_a2150.c diff --git a/drivers/staging/comedi/drivers/ni_at_ao.c b/drivers/comedi/drivers/ni_at_ao.c diff --git a/drivers/staging/comedi/drivers/ni_atmio.c b/drivers/comedi/drivers/ni_atmio.c diff --git a/drivers/staging/comedi/drivers/ni_atmio16d.c b/drivers/comedi/drivers/ni_atmio16d.c diff --git a/drivers/staging/comedi/drivers/ni_daq_700.c b/drivers/comedi/drivers/ni_daq_700.c diff --git a/drivers/staging/comedi/drivers/ni_daq_dio24.c b/drivers/comedi/drivers/ni_daq_dio24.c diff --git a/drivers/staging/comedi/drivers/ni_labpc.c 
b/drivers/comedi/drivers/ni_labpc.c diff --git a/drivers/staging/comedi/drivers/ni_labpc.h b/drivers/comedi/drivers/ni_labpc.h diff --git a/drivers/staging/comedi/drivers/ni_labpc_common.c b/drivers/comedi/drivers/ni_labpc_common.c diff --git a/drivers/staging/comedi/drivers/ni_labpc_cs.c b/drivers/comedi/drivers/ni_labpc_cs.c diff --git a/drivers/staging/comedi/drivers/ni_labpc_isadma.c b/drivers/comedi/drivers/ni_labpc_isadma.c diff --git a/drivers/staging/comedi/drivers/ni_labpc_isadma.h b/drivers/comedi/drivers/ni_labpc_isadma.h diff --git a/drivers/staging/comedi/drivers/ni_labpc_pci.c b/drivers/comedi/drivers/ni_labpc_pci.c diff --git a/drivers/staging/comedi/drivers/ni_labpc_regs.h b/drivers/comedi/drivers/ni_labpc_regs.h diff --git a/drivers/staging/comedi/drivers/ni_mio_common.c b/drivers/comedi/drivers/ni_mio_common.c diff --git a/drivers/staging/comedi/drivers/ni_mio_cs.c b/drivers/comedi/drivers/ni_mio_cs.c diff --git a/drivers/staging/comedi/drivers/ni_pcidio.c b/drivers/comedi/drivers/ni_pcidio.c diff --git a/drivers/staging/comedi/drivers/ni_pcimio.c b/drivers/comedi/drivers/ni_pcimio.c diff --git a/drivers/staging/comedi/drivers/ni_routes.c b/drivers/comedi/drivers/ni_routes.c diff --git a/drivers/staging/comedi/drivers/ni_routes.h b/drivers/comedi/drivers/ni_routes.h diff --git a/drivers/staging/comedi/drivers/ni_routing/readme b/drivers/comedi/drivers/ni_routing/readme diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_device_routes.c b/drivers/comedi/drivers/ni_routing/ni_device_routes.c diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_device_routes.h b/drivers/comedi/drivers/ni_routing/ni_device_routes.h diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_device_routes/all.h b/drivers/comedi/drivers/ni_routing/ni_device_routes/all.h diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_device_routes/pci-6070e.c b/drivers/comedi/drivers/ni_routing/ni_device_routes/pci-6070e.c diff --git 
a/drivers/staging/comedi/drivers/ni_routing/ni_device_routes/pci-6220.c b/drivers/comedi/drivers/ni_routing/ni_device_routes/pci-6220.c diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_device_routes/pci-6221.c b/drivers/comedi/drivers/ni_routing/ni_device_routes/pci-6221.c diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_device_routes/pci-6229.c b/drivers/comedi/drivers/ni_routing/ni_device_routes/pci-6229.c diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_device_routes/pci-6251.c b/drivers/comedi/drivers/ni_routing/ni_device_routes/pci-6251.c diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_device_routes/pci-6254.c b/drivers/comedi/drivers/ni_routing/ni_device_routes/pci-6254.c diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_device_routes/pci-6259.c b/drivers/comedi/drivers/ni_routing/ni_device_routes/pci-6259.c diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_device_routes/pci-6534.c b/drivers/comedi/drivers/ni_routing/ni_device_routes/pci-6534.c diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_device_routes/pci-6602.c b/drivers/comedi/drivers/ni_routing/ni_device_routes/pci-6602.c diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_device_routes/pci-6713.c b/drivers/comedi/drivers/ni_routing/ni_device_routes/pci-6713.c diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_device_routes/pci-6723.c b/drivers/comedi/drivers/ni_routing/ni_device_routes/pci-6723.c diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_device_routes/pci-6733.c b/drivers/comedi/drivers/ni_routing/ni_device_routes/pci-6733.c diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_device_routes/pxi-6030e.c b/drivers/comedi/drivers/ni_routing/ni_device_routes/pxi-6030e.c diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_device_routes/pxi-6224.c b/drivers/comedi/drivers/ni_routing/ni_device_routes/pxi-6224.c diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_device_routes/pxi-6225.c 
b/drivers/comedi/drivers/ni_routing/ni_device_routes/pxi-6225.c diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_device_routes/pxi-6251.c b/drivers/comedi/drivers/ni_routing/ni_device_routes/pxi-6251.c diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_device_routes/pxi-6733.c b/drivers/comedi/drivers/ni_routing/ni_device_routes/pxi-6733.c diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_device_routes/pxie-6251.c b/drivers/comedi/drivers/ni_routing/ni_device_routes/pxie-6251.c diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_device_routes/pxie-6535.c b/drivers/comedi/drivers/ni_routing/ni_device_routes/pxie-6535.c diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_device_routes/pxie-6738.c b/drivers/comedi/drivers/ni_routing/ni_device_routes/pxie-6738.c diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_route_values.c b/drivers/comedi/drivers/ni_routing/ni_route_values.c diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_route_values.h b/drivers/comedi/drivers/ni_routing/ni_route_values.h diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_route_values/all.h b/drivers/comedi/drivers/ni_routing/ni_route_values/all.h diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_route_values/ni_660x.c b/drivers/comedi/drivers/ni_routing/ni_route_values/ni_660x.c diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_route_values/ni_eseries.c b/drivers/comedi/drivers/ni_routing/ni_route_values/ni_eseries.c diff --git a/drivers/staging/comedi/drivers/ni_routing/ni_route_values/ni_mseries.c b/drivers/comedi/drivers/ni_routing/ni_route_values/ni_mseries.c diff --git a/drivers/staging/comedi/drivers/ni_routing/tools/.gitignore b/drivers/comedi/drivers/ni_routing/tools/.gitignore diff --git a/drivers/staging/comedi/drivers/ni_routing/tools/makefile b/drivers/comedi/drivers/ni_routing/tools/makefile diff --git a/drivers/staging/comedi/drivers/ni_routing/tools/convert_c_to_py.c 
b/drivers/comedi/drivers/ni_routing/tools/convert_c_to_py.c diff --git a/drivers/staging/comedi/drivers/ni_routing/tools/convert_csv_to_c.py b/drivers/comedi/drivers/ni_routing/tools/convert_csv_to_c.py diff --git a/drivers/staging/comedi/drivers/ni_routing/tools/convert_py_to_csv.py b/drivers/comedi/drivers/ni_routing/tools/convert_py_to_csv.py diff --git a/drivers/staging/comedi/drivers/ni_routing/tools/csv_collection.py b/drivers/comedi/drivers/ni_routing/tools/csv_collection.py diff --git a/drivers/staging/comedi/drivers/ni_routing/tools/make_blank_csv.py b/drivers/comedi/drivers/ni_routing/tools/make_blank_csv.py diff --git a/drivers/staging/comedi/drivers/ni_routing/tools/ni_names.py b/drivers/comedi/drivers/ni_routing/tools/ni_names.py diff --git a/drivers/staging/comedi/drivers/ni_stc.h b/drivers/comedi/drivers/ni_stc.h diff --git a/drivers/staging/comedi/drivers/ni_tio.c b/drivers/comedi/drivers/ni_tio.c diff --git a/drivers/staging/comedi/drivers/ni_tio.h b/drivers/comedi/drivers/ni_tio.h diff --git a/drivers/staging/comedi/drivers/ni_tio_internal.h b/drivers/comedi/drivers/ni_tio_internal.h diff --git a/drivers/staging/comedi/drivers/ni_tiocmd.c b/drivers/comedi/drivers/ni_tiocmd.c diff --git a/drivers/staging/comedi/drivers/ni_usb6501.c b/drivers/comedi/drivers/ni_usb6501.c diff --git a/drivers/staging/comedi/drivers/pcl711.c b/drivers/comedi/drivers/pcl711.c diff --git a/drivers/staging/comedi/drivers/pcl724.c b/drivers/comedi/drivers/pcl724.c diff --git a/drivers/staging/comedi/drivers/pcl726.c b/drivers/comedi/drivers/pcl726.c diff --git a/drivers/staging/comedi/drivers/pcl730.c b/drivers/comedi/drivers/pcl730.c diff --git a/drivers/staging/comedi/drivers/pcl812.c b/drivers/comedi/drivers/pcl812.c diff --git a/drivers/staging/comedi/drivers/pcl816.c b/drivers/comedi/drivers/pcl816.c diff --git a/drivers/staging/comedi/drivers/pcl818.c b/drivers/comedi/drivers/pcl818.c diff --git a/drivers/staging/comedi/drivers/pcm3724.c 
b/drivers/comedi/drivers/pcm3724.c diff --git a/drivers/staging/comedi/drivers/pcmad.c b/drivers/comedi/drivers/pcmad.c diff --git a/drivers/staging/comedi/drivers/pcmda12.c b/drivers/comedi/drivers/pcmda12.c diff --git a/drivers/staging/comedi/drivers/pcmmio.c b/drivers/comedi/drivers/pcmmio.c diff --git a/drivers/staging/comedi/drivers/pcmuio.c b/drivers/comedi/drivers/pcmuio.c diff --git a/drivers/staging/comedi/drivers/plx9052.h b/drivers/comedi/drivers/plx9052.h diff --git a/drivers/staging/comedi/drivers/plx9080.h b/drivers/comedi/drivers/plx9080.h diff --git a/drivers/staging/comedi/drivers/quatech_daqp_cs.c b/drivers/comedi/drivers/quatech_daqp_cs.c diff --git a/drivers/staging/comedi/drivers/rtd520.c b/drivers/comedi/drivers/rtd520.c diff --git a/drivers/staging/comedi/drivers/rti800.c b/drivers/comedi/drivers/rti800.c diff --git a/drivers/staging/comedi/drivers/rti802.c b/drivers/comedi/drivers/rti802.c diff --git a/drivers/staging/comedi/drivers/s526.c b/drivers/comedi/drivers/s526.c diff --git a/drivers/staging/comedi/drivers/s626.c b/drivers/comedi/drivers/s626.c diff --git a/drivers/staging/comedi/drivers/s626.h b/drivers/comedi/drivers/s626.h diff --git a/drivers/staging/comedi/drivers/ssv_dnp.c b/drivers/comedi/drivers/ssv_dnp.c diff --git a/drivers/staging/comedi/drivers/tests/makefile b/drivers/comedi/drivers/tests/makefile diff --git a/drivers/staging/comedi/drivers/tests/comedi_example_test.c b/drivers/comedi/drivers/tests/comedi_example_test.c diff --git a/drivers/staging/comedi/drivers/tests/ni_routes_test.c b/drivers/comedi/drivers/tests/ni_routes_test.c diff --git a/drivers/staging/comedi/drivers/tests/unittest.h b/drivers/comedi/drivers/tests/unittest.h diff --git a/drivers/staging/comedi/drivers/usbdux.c b/drivers/comedi/drivers/usbdux.c diff --git a/drivers/staging/comedi/drivers/usbduxfast.c b/drivers/comedi/drivers/usbduxfast.c diff --git a/drivers/staging/comedi/drivers/usbduxsigma.c b/drivers/comedi/drivers/usbduxsigma.c diff --git 
a/drivers/staging/comedi/drivers/vmk80xx.c b/drivers/comedi/drivers/vmk80xx.c diff --git a/drivers/staging/comedi/drivers/z8536.h b/drivers/comedi/drivers/z8536.h diff --git a/drivers/staging/comedi/kcomedilib/makefile b/drivers/comedi/kcomedilib/makefile diff --git a/drivers/staging/comedi/kcomedilib/kcomedilib_main.c b/drivers/comedi/kcomedilib/kcomedilib_main.c diff --git a/drivers/staging/comedi/proc.c b/drivers/comedi/proc.c diff --git a/drivers/staging/comedi/range.c b/drivers/comedi/range.c diff --git a/drivers/staging/kconfig b/drivers/staging/kconfig --- a/drivers/staging/kconfig +++ b/drivers/staging/kconfig -source "drivers/staging/comedi/kconfig" - diff --git a/drivers/staging/makefile b/drivers/staging/makefile --- a/drivers/staging/makefile +++ b/drivers/staging/makefile -obj-$(config_comedi) += comedi/
|
Drivers in the Staging area
|
8ffdff6a8cfbdc174a3a390b6f825a277b5bb895
|
greg kroah hartman
|
drivers
|
drivers, kcomedilib, ni_device_routes, ni_route_values, ni_routing, tests, tools
|
|
media: imx: imx7_media-csi: add support for additional bayer patterns
|
the csi driver currently supports only the bggr bayer pattern. the hardware supports all patterns (the only pattern-dependent hardware operation is statistics calculation, as de-bayering isn't supported), so enable them in the driver too.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for additional bayer patterns
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['media', 'imx', 'imx7_media-csi']
|
['c']
| 1
| 12
| 0
|
--- diff --git a/drivers/staging/media/imx/imx7-media-csi.c b/drivers/staging/media/imx/imx7-media-csi.c --- a/drivers/staging/media/imx/imx7-media-csi.c +++ b/drivers/staging/media/imx/imx7-media-csi.c + case v4l2_pix_fmt_sgbrg8: + case v4l2_pix_fmt_sgrbg8: + case v4l2_pix_fmt_srggb8: + case v4l2_pix_fmt_sgbrg16: + case v4l2_pix_fmt_sgrbg16: + case v4l2_pix_fmt_srggb16: + case v4l2_pix_fmt_sgbrg8: + case v4l2_pix_fmt_sgrbg8: + case v4l2_pix_fmt_srggb8: + case v4l2_pix_fmt_sgbrg16: + case v4l2_pix_fmt_sgrbg16: + case v4l2_pix_fmt_srggb16:
|
Drivers in the Staging area
|
42849cf0869fc8df5fa7c9cfdbd7dceb59d0f93a
|
laurent pinchart rui miguel silva rmfrfs gmail com
|
drivers
|
staging
|
imx, media
|
staging: dpaa2-switch: add .ndo_start_xmit() callback
|
implement the .ndo_start_xmit() callback for the switch port interfaces. for each of the switch ports, gather the corresponding queue destination id (qdid) necessary for tx enqueueing.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add .ndo_start_xmit() callback
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['dpaa2-switch']
|
['h', 'c']
| 5
| 245
| 16
|
--- diff --git a/drivers/staging/fsl-dpaa2/ethsw/dpsw-cmd.h b/drivers/staging/fsl-dpaa2/ethsw/dpsw-cmd.h --- a/drivers/staging/fsl-dpaa2/ethsw/dpsw-cmd.h +++ b/drivers/staging/fsl-dpaa2/ethsw/dpsw-cmd.h +#define dpsw_cmdid_if_get_attr dpsw_cmd_id(0x042) + +#define dpsw_admit_untagged_shift 0 +#define dpsw_admit_untagged_size 4 +#define dpsw_enabled_shift 5 +#define dpsw_enabled_size 1 +#define dpsw_accept_all_vlan_shift 6 +#define dpsw_accept_all_vlan_size 1 + +struct dpsw_rsp_if_get_attr { + /* cmd word 0 */ + /* from lsb: admit_untagged:4 enabled:1 accept_all_vlan:1 */ + u8 conf; + u8 pad1; + u8 num_tcs; + u8 pad2; + __le16 qdid; + /* cmd word 1 */ + __le32 options; + __le32 pad3; + /* cmd word 2 */ + __le32 rate; +}; + diff --git a/drivers/staging/fsl-dpaa2/ethsw/dpsw.c b/drivers/staging/fsl-dpaa2/ethsw/dpsw.c --- a/drivers/staging/fsl-dpaa2/ethsw/dpsw.c +++ b/drivers/staging/fsl-dpaa2/ethsw/dpsw.c +/** + * dpsw_if_get_attributes() - function obtains attributes of interface + * @mc_io: pointer to mc portal's i/o object + * @cmd_flags: command flags; one or more of 'mc_cmd_flag_' + * @token: token of dpsw object + * @if_id: interface identifier + * @attr: returned interface attributes + * + * return: completion status. '0' on success; error code otherwise. 
+ */ +int dpsw_if_get_attributes(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token, + u16 if_id, struct dpsw_if_attr *attr) +{ + struct dpsw_rsp_if_get_attr *rsp_params; + struct fsl_mc_command cmd = { 0 }; + struct dpsw_cmd_if *cmd_params; + int err; + + cmd.header = mc_encode_cmd_header(dpsw_cmdid_if_get_attr, cmd_flags, + token); + cmd_params = (struct dpsw_cmd_if *)cmd.params; + cmd_params->if_id = cpu_to_le16(if_id); + + err = mc_send_command(mc_io, &cmd); + if (err) + return err; + + rsp_params = (struct dpsw_rsp_if_get_attr *)cmd.params; + attr->num_tcs = rsp_params->num_tcs; + attr->rate = le32_to_cpu(rsp_params->rate); + attr->options = le32_to_cpu(rsp_params->options); + attr->qdid = le16_to_cpu(rsp_params->qdid); + attr->enabled = dpsw_get_field(rsp_params->conf, enabled); + attr->accept_all_vlan = dpsw_get_field(rsp_params->conf, + accept_all_vlan); + attr->admit_untagged = dpsw_get_field(rsp_params->conf, + admit_untagged); + + return 0; +} + diff --git a/drivers/staging/fsl-dpaa2/ethsw/dpsw.h b/drivers/staging/fsl-dpaa2/ethsw/dpsw.h --- a/drivers/staging/fsl-dpaa2/ethsw/dpsw.h +++ b/drivers/staging/fsl-dpaa2/ethsw/dpsw.h +/** + * struct dpsw_if_attr - structure representing dpsw interface attributes + * @num_tcs: number of traffic classes + * @rate: transmit rate in bits per second + * @options: interface configuration options (bitmap) + * @enabled: indicates if interface is enabled + * @accept_all_vlan: the device discards/accepts incoming frames + * for vlans that do not include this interface + * @admit_untagged: when set to 'dpsw_admit_only_vlan_tagged', the device + * discards untagged frames or priority-tagged frames received on + * this interface; + * when set to 'dpsw_admit_all', untagged frames or priority- + * tagged frames received on this interface are accepted + * @qdid: control frames transmit qdid + */ +struct dpsw_if_attr { + u8 num_tcs; + u32 rate; + u32 options; + int enabled; + int accept_all_vlan; + enum dpsw_accepted_frames 
admit_untagged; + u16 qdid; +}; + +int dpsw_if_get_attributes(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token, + u16 if_id, struct dpsw_if_attr *attr); + diff --git a/drivers/staging/fsl-dpaa2/ethsw/ethsw.c b/drivers/staging/fsl-dpaa2/ethsw/ethsw.c --- a/drivers/staging/fsl-dpaa2/ethsw/ethsw.c +++ b/drivers/staging/fsl-dpaa2/ethsw/ethsw.c - if (state.up) + if (state.up) { - else + netif_tx_start_all_queues(netdev); + } else { + netif_tx_stop_all_queues(netdev); + } - /* no need to allow tx as control interface is disabled */ - netif_tx_stop_all_queues(netdev); - -static netdev_tx_t dpaa2_switch_port_dropframe(struct sk_buff *skb, - struct net_device *netdev) -{ - /* we don't support i/o for now, drop the frame */ - dev_kfree_skb_any(skb); - - return netdev_tx_ok; -} - +static int dpaa2_switch_build_single_fd(struct ethsw_core *ethsw, + struct sk_buff *skb, + struct dpaa2_fd *fd) +{ + struct device *dev = ethsw->dev; + struct sk_buff **skbh; + dma_addr_t addr; + u8 *buff_start; + void *hwa; + + buff_start = ptr_align(skb->data - dpaa2_switch_tx_data_offset - + dpaa2_switch_tx_buf_align, + dpaa2_switch_tx_buf_align); + + /* clear fas to have consistent values for tx confirmation. 
it is + * located in the first 8 bytes of the buffer's hardware annotation + * area + */ + hwa = buff_start + dpaa2_switch_swa_size; + memset(hwa, 0, 8); + + /* store a backpointer to the skb at the beginning of the buffer + * (in the private data area) such that we can release it + * on tx confirm + */ + skbh = (struct sk_buff **)buff_start; + *skbh = skb; + + addr = dma_map_single(dev, buff_start, + skb_tail_pointer(skb) - buff_start, + dma_to_device); + if (unlikely(dma_mapping_error(dev, addr))) + return -enomem; + + /* setup the fd fields */ + memset(fd, 0, sizeof(*fd)); + + dpaa2_fd_set_addr(fd, addr); + dpaa2_fd_set_offset(fd, (u16)(skb->data - buff_start)); + dpaa2_fd_set_len(fd, skb->len); + dpaa2_fd_set_format(fd, dpaa2_fd_single); + + return 0; +} + +static netdev_tx_t dpaa2_switch_port_tx(struct sk_buff *skb, + struct net_device *net_dev) +{ + struct ethsw_port_priv *port_priv = netdev_priv(net_dev); + struct ethsw_core *ethsw = port_priv->ethsw_data; + int retries = dpaa2_switch_swp_busy_retries; + struct dpaa2_fd fd; + int err; + + if (unlikely(skb_headroom(skb) < dpaa2_switch_needed_headroom)) { + struct sk_buff *ns; + + ns = skb_realloc_headroom(skb, dpaa2_switch_needed_headroom); + if (unlikely(!ns)) { + net_err_ratelimited("%s: error reallocating skb headroom ", net_dev->name); + goto err_free_skb; + } + dev_consume_skb_any(skb); + skb = ns; + } + + /* we'll be holding a back-reference to the skb until tx confirmation */ + skb = skb_unshare(skb, gfp_atomic); + if (unlikely(!skb)) { + /* skb_unshare() has already freed the skb */ + net_err_ratelimited("%s: error copying the socket buffer ", net_dev->name); + goto err_exit; + } + + /* at this stage, we do not support non-linear skbs so just try to + * linearize the skb and if that's not working, just drop the packet. + */ + err = skb_linearize(skb); + if (err) { + net_err_ratelimited("%s: skb_linearize error (%d)! 
", net_dev->name, err); + goto err_free_skb; + } + + err = dpaa2_switch_build_single_fd(ethsw, skb, &fd); + if (unlikely(err)) { + net_err_ratelimited("%s: ethsw_build_*_fd() %d ", net_dev->name, err); + goto err_free_skb; + } + + do { + err = dpaa2_io_service_enqueue_qd(null, + port_priv->tx_qdid, + 8, 0, &fd); + retries--; + } while (err == -ebusy && retries); + + if (unlikely(err < 0)) { + dpaa2_switch_free_fd(ethsw, &fd); + goto err_exit; + } + + return netdev_tx_ok; + +err_free_skb: + dev_kfree_skb(skb); +err_exit: + return netdev_tx_ok; +} + - .ndo_start_xmit = dpaa2_switch_port_dropframe, + .ndo_start_xmit = dpaa2_switch_port_tx, +static void dpaa2_switch_tx_conf(struct dpaa2_switch_fq *fq, + const struct dpaa2_fd *fd) +{ + dpaa2_switch_free_fd(fq->ethsw, fd); +} + - dpaa2_switch_rx(fq, dpaa2_dq_fd(dq)); + if (fq->type == dpsw_queue_rx) + dpaa2_switch_rx(fq, dpaa2_dq_fd(dq)); + else + dpaa2_switch_tx_conf(fq, dpaa2_dq_fd(dq)); + struct ethsw_core *ethsw = port_priv->ethsw_data; + struct dpsw_if_attr dpsw_if_attr; + /* get the tx queue for this specific port */ + err = dpsw_if_get_attributes(ethsw->mc_io, 0, ethsw->dpsw_handle, + port_priv->idx, &dpsw_if_attr); + if (err) { + netdev_err(netdev, "dpsw_if_get_attributes err %d ", err); + return err; + } + port_priv->tx_qdid = dpsw_if_attr.qdid; + + port_netdev->needed_headroom = dpaa2_switch_needed_headroom; + diff --git a/drivers/staging/fsl-dpaa2/ethsw/ethsw.h b/drivers/staging/fsl-dpaa2/ethsw/ethsw.h --- a/drivers/staging/fsl-dpaa2/ethsw/ethsw.h +++ b/drivers/staging/fsl-dpaa2/ethsw/ethsw.h +/* hardware annotation buffer size */ +#define dpaa2_switch_hwa_size 64 +/* software annotation buffer size */ +#define dpaa2_switch_swa_size 64 + +#define dpaa2_switch_tx_buf_align 64 + +#define dpaa2_switch_tx_data_offset \ + (dpaa2_switch_hwa_size + dpaa2_switch_swa_size) + +#define dpaa2_switch_needed_headroom \ + (dpaa2_switch_tx_data_offset + dpaa2_switch_tx_buf_align) + + u16 tx_qdid;
|
Drivers in the Staging area
|
7fd94d86b7f4a7f71223bdc1348b897411f2224b
|
ioana ciornei
|
drivers
|
staging
|
ethsw, fsl-dpaa2
|
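The Tx path in the record above reserves headroom for hardware and software annotations and aligns the buffer start before building the frame descriptor. A minimal userspace sketch of that pointer arithmetic — the constants come from the ethsw.h hunk, while the `tx_buff_start()` helper name is invented for illustration and the alignment macro stands in for the kernel's PTR_ALIGN():

```c
#include <assert.h>
#include <stdint.h>

/* Constants as added to ethsw.h by this commit */
#define DPAA2_SWITCH_HWA_SIZE      64   /* hardware annotation area */
#define DPAA2_SWITCH_SWA_SIZE      64   /* software annotation area */
#define DPAA2_SWITCH_TX_BUF_ALIGN  64
#define DPAA2_SWITCH_TX_DATA_OFFSET \
	(DPAA2_SWITCH_HWA_SIZE + DPAA2_SWITCH_SWA_SIZE)
#define DPAA2_SWITCH_NEEDED_HEADROOM \
	(DPAA2_SWITCH_TX_DATA_OFFSET + DPAA2_SWITCH_TX_BUF_ALIGN)

/* Mirror of the PTR_ALIGN() use in dpaa2_switch_build_single_fd():
 * step back far enough for the annotation areas plus one alignment
 * unit, then round up to the alignment boundary. */
static uintptr_t tx_buff_start(uintptr_t data)
{
	uintptr_t p = data - DPAA2_SWITCH_TX_DATA_OFFSET -
		      DPAA2_SWITCH_TX_BUF_ALIGN;

	return (p + DPAA2_SWITCH_TX_BUF_ALIGN - 1) &
	       ~(uintptr_t)(DPAA2_SWITCH_TX_BUF_ALIGN - 1);
}
```

Whatever skb->data address comes in, the computed buffer start is 64-byte aligned and the gap back to skb->data stays between DPAA2_SWITCH_TX_DATA_OFFSET and DPAA2_SWITCH_NEEDED_HEADROOM — which is why the commit sets `needed_headroom` on the netdev and reallocates skbs that arrive with less.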
staging:iio:cdc:ad7150: add sampling_frequency support
|
the device uses a fixed sampling frequency. let us expose it to userspace.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi-shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add sampling_frequency support
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['iio:cdc:ad7150']
|
['c']
| 1
| 6
| 0
|
--- diff --git a/drivers/staging/iio/cdc/ad7150.c b/drivers/staging/iio/cdc/ad7150.c --- a/drivers/staging/iio/cdc/ad7150.c +++ b/drivers/staging/iio/cdc/ad7150.c + return iio_val_int; + case iio_chan_info_samp_freq: + /* strangely same for both 1 and 2 chan parts */ + *val = 100; + .info_mask_shared_by_all = bit(iio_chan_info_samp_freq),\ + .info_mask_shared_by_all = bit(iio_chan_info_samp_freq),\
|
Drivers in the Staging area
|
d5723c679bb81123f9c038392ba2d4aab928ba32
|
jonathan cameron alexandru ardelean alexandru ardelean analog com
|
drivers
|
staging
|
cdc, iio
|
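The ad7150 hunk above just reports a constant 100 Hz for the sampling-frequency channel info. A hedged userspace sketch of that read_raw case — the enum values, return code, and helper name here are simplified stand-ins, not the real IIO API:

```c
#include <assert.h>

enum chan_info { CHAN_INFO_RAW, CHAN_INFO_SAMP_FREQ };
#define IIO_VAL_INT 1

/* Stand-in for the switch case added to the ad7150 read_raw handler:
 * the sampling frequency is fixed at 100 Hz, and (as the diff comment
 * notes) it is the same for both the 1- and 2-channel parts. */
static int ad7150_read_samp_freq(enum chan_info mask, int *val)
{
	switch (mask) {
	case CHAN_INFO_SAMP_FREQ:
		*val = 100;	/* fixed by the hardware */
		return IIO_VAL_INT;
	default:
		return -1;	/* not modelled in this sketch */
	}
}
```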
drivers: most: add alsa sound driver
|
this patch moves the alsa sound driver out of the staging area and adds it to the stable part of the most driver. the makefiles and kconfigs are modified accordingly so the build is not broken.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi-shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add alsa sound driver
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['most']
|
['kconfig', 'c', 'makefile']
| 7
| 11
| 21
|
--- diff --git a/drivers/most/kconfig b/drivers/most/kconfig --- a/drivers/most/kconfig +++ b/drivers/most/kconfig + +config most_snd + tristate "sound" + depends on snd + select snd_pcm + help + say y here if you want to commumicate via alsa/sound devices. + + to compile this driver as a module, choose m here: the + module will be called most_sound. diff --git a/drivers/most/makefile b/drivers/most/makefile --- a/drivers/most/makefile +++ b/drivers/most/makefile +obj-$(config_most_snd) += most_snd.o diff --git a/drivers/staging/most/sound/sound.c b/drivers/most/most_snd.c diff --git a/drivers/staging/most/kconfig b/drivers/staging/most/kconfig --- a/drivers/staging/most/kconfig +++ b/drivers/staging/most/kconfig -source "drivers/staging/most/sound/kconfig" - diff --git a/drivers/staging/most/makefile b/drivers/staging/most/makefile --- a/drivers/staging/most/makefile +++ b/drivers/staging/most/makefile -obj-$(config_most_sound) += sound/ diff --git a/drivers/staging/most/sound/kconfig b/drivers/staging/most/sound/kconfig --- a/drivers/staging/most/sound/kconfig +++ /dev/null -# spdx-license-identifier: gpl-2.0 -# -# most alsa configuration -# - -config most_sound - tristate "sound" - depends on snd - select snd_pcm - help - say y here if you want to commumicate via alsa/sound devices. - - to compile this driver as a module, choose m here: the - module will be called most_sound. diff --git a/drivers/staging/most/sound/makefile b/drivers/staging/most/sound/makefile --- a/drivers/staging/most/sound/makefile +++ /dev/null -# spdx-license-identifier: gpl-2.0 -obj-$(config_most_sound) += most_sound.o - -most_sound-objs := sound.o
|
Drivers in the Staging area
|
13b41b5783068d01c259940975a2ab393b5acec5
|
christian gromm
|
drivers
|
staging
|
most, sound
|
staging: clocking-wizard: add support for fractional support
|
currently the set_rate granularity is limited to integral divisors. add support for fractional divisors. only the first output (output0) supports fractional division in the hardware.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi-shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for fractional divisors
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['clocking-wizard']
|
['c']
| 1
| 137
| 16
|
--- diff --git a/drivers/staging/clocking-wizard/clk-xlnx-clock-wizard.c b/drivers/staging/clocking-wizard/clk-xlnx-clock-wizard.c --- a/drivers/staging/clocking-wizard/clk-xlnx-clock-wizard.c +++ b/drivers/staging/clocking-wizard/clk-xlnx-clock-wizard.c +#define wzrd_clkfbout_frac_shift 16 +#define wzrd_clkfbout_frac_mask (0x3ff << wzrd_clkfbout_frac_shift) +#define wzrd_clkout_frac_shift 8 +#define wzrd_clkout_frac_mask 0x3ff + wzrd_clk_mul_frac, +static unsigned long clk_wzrd_recalc_ratef(struct clk_hw *hw, + unsigned long parent_rate) +{ + unsigned int val; + u32 div, frac; + struct clk_wzrd_divider *divider = to_clk_wzrd_divider(hw); + void __iomem *div_addr = divider->base + divider->offset; + + val = readl(div_addr); + div = val & div_mask(divider->width); + frac = (val >> wzrd_clkout_frac_shift) & wzrd_clkout_frac_mask; + + return mult_frac(parent_rate, 1000, (div * 1000) + frac); +} + +static int clk_wzrd_dynamic_reconfig_f(struct clk_hw *hw, unsigned long rate, + unsigned long parent_rate) +{ + int err; + u32 value, pre; + unsigned long rate_div, f, clockout0_div; + struct clk_wzrd_divider *divider = to_clk_wzrd_divider(hw); + void __iomem *div_addr = divider->base + divider->offset; + + rate_div = ((parent_rate * 1000) / rate); + clockout0_div = rate_div / 1000; + + pre = div_round_closest((parent_rate * 1000), rate); + f = (u32)(pre - (clockout0_div * 1000)); + f = f & wzrd_clkout_frac_mask; + f = f << wzrd_clkout_divide_width; + + value = (f | (clockout0_div & wzrd_clkout_divide_mask)); + + /* set divisor and clear phase offset */ + writel(value, div_addr); + writel(0x0, div_addr + wzrd_dr_div_to_phase_offset); + + /* check status register */ + err = readl_poll_timeout(divider->base + wzrd_dr_status_reg_offset, value, + value & wzrd_dr_lock_bit_mask, + wzrd_usec_poll, wzrd_timeout_poll); + if (err) + return err; + + /* initiate reconfiguration */ + writel(wzrd_dr_begin_dyna_reconf, + divider->base + wzrd_dr_init_reg_offset); + + /* check status 
register */ + return readl_poll_timeout(divider->base + wzrd_dr_status_reg_offset, value, + value & wzrd_dr_lock_bit_mask, + wzrd_usec_poll, wzrd_timeout_poll); +} + +static long clk_wzrd_round_rate_f(struct clk_hw *hw, unsigned long rate, + unsigned long *prate) +{ + return rate; +} + +static const struct clk_ops clk_wzrd_clk_divider_ops_f = { + .round_rate = clk_wzrd_round_rate_f, + .set_rate = clk_wzrd_dynamic_reconfig_f, + .recalc_rate = clk_wzrd_recalc_ratef, +}; + +static struct clk *clk_wzrd_register_divf(struct device *dev, + const char *name, + const char *parent_name, + unsigned long flags, + void __iomem *base, u16 offset, + u8 shift, u8 width, + u8 clk_divider_flags, + const struct clk_div_table *table, + spinlock_t *lock) +{ + struct clk_wzrd_divider *div; + struct clk_hw *hw; + struct clk_init_data init; + int ret; + + div = devm_kzalloc(dev, sizeof(*div), gfp_kernel); + if (!div) + return err_ptr(-enomem); + + init.name = name; + + init.ops = &clk_wzrd_clk_divider_ops_f; + + init.flags = flags; + init.parent_names = &parent_name; + init.num_parents = 1; + + div->base = base; + div->offset = offset; + div->shift = shift; + div->width = width; + div->flags = clk_divider_flags; + div->lock = lock; + div->hw.init = &init; + div->table = table; + + hw = &div->hw; + ret = devm_clk_hw_register(dev, hw); + if (ret) + return err_ptr(ret); + + return hw->clk; +} + - u32 reg; + u32 reg, reg_f, mult; - /* we don't support fractional div/mul yet */ - reg = readl(clk_wzrd->base + wzrd_clk_cfg_reg(0)) & - wzrd_clkfbout_frac_en; - reg |= readl(clk_wzrd->base + wzrd_clk_cfg_reg(2)) & - wzrd_clkout0_frac_en; - if (reg) - dev_warn(&pdev->dev, "fractional div/mul not supported "); - - /* register multiplier */ - reg = (readl(clk_wzrd->base + wzrd_clk_cfg_reg(0)) & - wzrd_clkfbout_mult_mask) >> wzrd_clkfbout_mult_shift; + reg = readl(clk_wzrd->base + wzrd_clk_cfg_reg(0)); + reg_f = reg & wzrd_clkfbout_frac_mask; + reg_f = reg_f >> wzrd_clkfbout_frac_shift; + + reg = reg 
& wzrd_clkfbout_mult_mask; + reg = reg >> wzrd_clkfbout_mult_shift; + mult = (reg * 1000) + reg_f; - 0, reg, 1); - kfree(clk_name); + 0, mult, 1000); - clk_wzrd->clkout[i] = clk_wzrd_register_divider(&pdev->dev, - clkout_name, + if (!i) + clk_wzrd->clkout[i] = clk_wzrd_register_divf + (&pdev->dev, clkout_name, + clk_name, flags, + clk_wzrd->base, (wzrd_clk_cfg_reg(2) + i * 12), + wzrd_clkout_divide_shift, + wzrd_clkout_divide_width, + clk_divider_one_based | clk_divider_allow_zero, + null, &clkwzrd_lock); + else + clk_wzrd->clkout[i] = clk_wzrd_register_divider + (&pdev->dev, clkout_name,
|
Drivers in the Staging area
|
91d695d71841ab4ed7d26e27ee194aed03328095
|
shubhrajyoti datta
|
drivers
|
staging
|
clocking-wizard
|
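The fractional path in the diff above splits the requested ratio into an integer divider plus a milli-unit (thousandths) fraction packed above it in the register. A userspace sketch of that arithmetic, assuming the field layout shown in the diff; `wzrd_encode` and `wzrd_recalc` are invented names mirroring `clk_wzrd_dynamic_reconfig_f()` and `clk_wzrd_recalc_ratef()`:

```c
#include <assert.h>
#include <stdint.h>

#define WZRD_CLKOUT_FRAC_MASK	0x3ff
#define WZRD_CLKOUT_FRAC_SHIFT	8	/* fraction sits above the 8-bit divider */

static uint32_t div_round_closest(uint64_t n, uint64_t d)
{
	return (uint32_t)((n + d / 2) / d);
}

/* Mirror of clk_wzrd_dynamic_reconfig_f(): integer divider in the low
 * byte, thousandths of a divider in the fractional field above it. */
static uint32_t wzrd_encode(uint64_t parent_rate, uint64_t rate)
{
	uint32_t div = (uint32_t)(parent_rate * 1000 / rate / 1000);
	uint32_t frac = div_round_closest(parent_rate * 1000, rate) - div * 1000;

	return ((frac & WZRD_CLKOUT_FRAC_MASK) << WZRD_CLKOUT_FRAC_SHIFT) |
	       (div & 0xff);
}

/* Mirror of clk_wzrd_recalc_ratef(): rate = parent * 1000 / (div*1000 + frac) */
static uint64_t wzrd_recalc(uint64_t parent_rate, uint32_t reg)
{
	uint32_t div = reg & 0xff;
	uint32_t frac = (reg >> WZRD_CLKOUT_FRAC_SHIFT) & WZRD_CLKOUT_FRAC_MASK;

	return parent_rate * 1000 / (div * 1000 + frac);
}
```

For example, dividing a 100 MHz parent down to 40 MHz needs a ratio of 2.5, which encodes as divider 2 with fraction 500; recalculating from that register value recovers 40 MHz exactly.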
staging: clocking-wizard: add support for dynamic reconfiguration
|
the patch adds support for dynamic reconfiguration of the clock output rate. output clocks are registered as dividers and the set_rate callback is used for dynamic reconfiguration.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi-shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for dynamic reconfiguration
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['clocking-wizard']
|
['c']
| 1
| 173
| 5
|
--- diff --git a/drivers/staging/clocking-wizard/clk-xlnx-clock-wizard.c b/drivers/staging/clocking-wizard/clk-xlnx-clock-wizard.c --- a/drivers/staging/clocking-wizard/clk-xlnx-clock-wizard.c +++ b/drivers/staging/clocking-wizard/clk-xlnx-clock-wizard.c +#include <linux/iopoll.h> +#define wzrd_clkout_divide_width 8 +#define wzrd_dr_max_int_div_value 255 +#define wzrd_dr_status_reg_offset 0x04 +#define wzrd_dr_lock_bit_mask 0x00000001 +#define wzrd_dr_init_reg_offset 0x25c +#define wzrd_dr_div_to_phase_offset 4 +#define wzrd_dr_begin_dyna_reconf 0x03 + +#define wzrd_usec_poll 10 +#define wzrd_timeout_poll 1000 +/* get the mask from width */ +#define div_mask(width) ((1 << (width)) - 1) + +/* extract divider instance from clock hardware instance */ +#define to_clk_wzrd_divider(_hw) container_of(_hw, struct clk_wzrd_divider, hw) + +/** + * struct clk_wzrd_divider - clock divider specific to clk_wzrd + * + * @hw: handle between common and hardware-specific interfaces + * @base: base address of register containing the divider + * @offset: offset address of register containing the divider + * @shift: shift to the divider bit field + * @width: width of the divider bit field + * @flags: clk_wzrd divider flags + * @table: array of value/divider pairs, last entry should have div = 0 + * @lock: register lock + */ +struct clk_wzrd_divider { + struct clk_hw hw; + void __iomem *base; + u16 offset; + u8 shift; + u8 width; + u8 flags; + const struct clk_div_table *table; + spinlock_t *lock; /* divider lock */ +}; + +/* spin lock variable for clk_wzrd */ +static define_spinlock(clkwzrd_lock); + +static unsigned long clk_wzrd_recalc_rate(struct clk_hw *hw, + unsigned long parent_rate) +{ + struct clk_wzrd_divider *divider = to_clk_wzrd_divider(hw); + void __iomem *div_addr = divider->base + divider->offset; + unsigned int val; + + val = readl(div_addr) >> divider->shift; + val &= div_mask(divider->width); + + return divider_recalc_rate(hw, parent_rate, val, divider->table, + 
divider->flags, divider->width); +} + +static int clk_wzrd_dynamic_reconfig(struct clk_hw *hw, unsigned long rate, + unsigned long parent_rate) +{ + int err; + u32 value; + unsigned long flags = 0; + struct clk_wzrd_divider *divider = to_clk_wzrd_divider(hw); + void __iomem *div_addr = divider->base + divider->offset; + + if (divider->lock) + spin_lock_irqsave(divider->lock, flags); + else + __acquire(divider->lock); + + value = div_round_closest(parent_rate, rate); + + /* cap the value to max */ + min_t(u32, value, wzrd_dr_max_int_div_value); + + /* set divisor and clear phase offset */ + writel(value, div_addr); + writel(0x00, div_addr + wzrd_dr_div_to_phase_offset); + + /* check status register */ + err = readl_poll_timeout(divider->base + wzrd_dr_status_reg_offset, + value, value & wzrd_dr_lock_bit_mask, + wzrd_usec_poll, wzrd_timeout_poll); + if (err) + goto err_reconfig; + + /* initiate reconfiguration */ + writel(wzrd_dr_begin_dyna_reconf, + divider->base + wzrd_dr_init_reg_offset); + + /* check status register */ + err = readl_poll_timeout(divider->base + wzrd_dr_status_reg_offset, + value, value & wzrd_dr_lock_bit_mask, + wzrd_usec_poll, wzrd_timeout_poll); +err_reconfig: + if (divider->lock) + spin_unlock_irqrestore(divider->lock, flags); + else + __release(divider->lock); + return err; +} + +static long clk_wzrd_round_rate(struct clk_hw *hw, unsigned long rate, + unsigned long *prate) +{ + u8 div; + + /* + * since we don't change parent rate we just round rate to closest + * achievable + */ + div = div_round_closest(*prate, rate); + + return *prate / div; +} + +static const struct clk_ops clk_wzrd_clk_divider_ops = { + .round_rate = clk_wzrd_round_rate, + .set_rate = clk_wzrd_dynamic_reconfig, + .recalc_rate = clk_wzrd_recalc_rate, +}; + +static struct clk *clk_wzrd_register_divider(struct device *dev, + const char *name, + const char *parent_name, + unsigned long flags, + void __iomem *base, u16 offset, + u8 shift, u8 width, + u8 clk_divider_flags, + 
const struct clk_div_table *table, + spinlock_t *lock) +{ + struct clk_wzrd_divider *div; + struct clk_hw *hw; + struct clk_init_data init; + int ret; + + div = devm_kzalloc(dev, sizeof(*div), gfp_kernel); + if (!div) + return err_ptr(-enomem); + + init.name = name; + init.ops = &clk_wzrd_clk_divider_ops; + init.flags = flags; + init.parent_names = &parent_name; + init.num_parents = 1; + + div->base = base; + div->offset = offset; + div->shift = shift; + div->width = width; + div->flags = clk_divider_flags; + div->lock = lock; + div->hw.init = &init; + div->table = table; + + hw = &div->hw; + ret = devm_clk_hw_register(dev, hw); + if (ret) + hw = err_ptr(ret); + + return hw->clk; +} + - reg = readl(clk_wzrd->base + wzrd_clk_cfg_reg(2) + i * 12); - reg &= wzrd_clkout_divide_mask; - reg >>= wzrd_clkout_divide_shift; - clk_wzrd->clkout[i] = clk_register_fixed_factor - (&pdev->dev, clkout_name, clk_name, 0, 1, reg); + clk_wzrd->clkout[i] = clk_wzrd_register_divider(&pdev->dev, + clkout_name, + clk_name, 0, + clk_wzrd->base, (wzrd_clk_cfg_reg(2) + i * 12), + wzrd_clkout_divide_shift, + wzrd_clkout_divide_width, + clk_divider_one_based | clk_divider_allow_zero, + null, &clkwzrd_lock);
|
Drivers in the Staging area
|
5a853722eb32188647a541802d51d0db423b9baf
|
shubhrajyoti datta
|
drivers
|
staging
|
clocking-wizard
|
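For the integer-only outputs, the dynamic-reconfiguration commit above rounds the requested rate to the closest achievable divider of the fixed parent rate, with the hardware limiting the divider to 8 bits. A small sketch of that round_rate logic under those assumptions — the helper name is invented, and the explicit clamp here reflects the WZRD_DR_MAX_INT_DIV_VALUE cap intended in the set_rate path:

```c
#include <assert.h>
#include <stdint.h>

#define WZRD_DR_MAX_INT_DIV_VALUE 255

/* Sketch of clk_wzrd_round_rate(): the parent rate is not changed, so
 * the closest achievable rate is parent / DIV_ROUND_CLOSEST(parent, rate),
 * with the divider clamped to the register's valid range. */
static uint64_t wzrd_round_rate(uint64_t parent_rate, uint64_t rate)
{
	uint32_t div = (uint32_t)((parent_rate + rate / 2) / rate);

	if (div < 1)
		div = 1;
	if (div > WZRD_DR_MAX_INT_DIV_VALUE)
		div = WZRD_DR_MAX_INT_DIV_VALUE;

	return parent_rate / div;
}
```

So a 33 MHz request against a 100 MHz parent rounds to divider 3 (33.33 MHz), while requests below parent/255 saturate at the maximum divider.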
staging: wimax: delete from the tree.
|
as stated in f54ec58fee83 ("wimax: move out to staging"), the wimax code is dead with no known users. it has stayed in staging for 5 months, with no one willing to take up the codebase for maintenance and support, so let's just remove it entirely for now.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi-shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
delete from the tree
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['wimax']
|
['c', 'h', 'kconfig', 'todo', 'rst', 'makefile']
| 41
| 0
| 15,265
|
--- diff --git a/drivers/staging/kconfig b/drivers/staging/kconfig --- a/drivers/staging/kconfig +++ b/drivers/staging/kconfig -source "drivers/staging/wimax/kconfig" - diff --git a/drivers/staging/makefile b/drivers/staging/makefile --- a/drivers/staging/makefile +++ b/drivers/staging/makefile -obj-$(config_wimax) += wimax/ diff --git a/drivers/staging/wimax/documentation/i2400m.rst b/drivers/staging/wimax/documentation/i2400m.rst --- a/drivers/staging/wimax/documentation/i2400m.rst +++ /dev/null -.. include:: <isonum.txt> - -==================================================== -driver for the intel wireless wimax connection 2400m -==================================================== - -:copyright: |copy| 2008 intel corporation < linux-wimax@intel.com > - - this provides a driver for the intel wireless wimax connection 2400m - and a basic linux kernel wimax stack. - -1. requirements -=============== - - * linux installation with linux kernel 2.6.22 or newer (if building - from a separate tree) - * intel i2400m echo peak or baxter peak; this includes the intel - wireless wimax/wifi link 5x50 series. - * build tools: - - + linux kernel development package for the target kernel; to - build against your currently running kernel, you need to have - the kernel development package corresponding to the running - image installed (usually if your kernel is named - linux-version, the development package is called - linux-dev-version or linux-headers-version). - + gnu c compiler, make - -2. compilation and installation -=============================== - -2.1. compilation of the drivers included in the kernel ------------------------------------------------------- - - configure the kernel; to enable the wimax drivers select drivers > - networking drivers > wimax device support. enable all of them as - modules (easier). - - if usb or sdio are not enabled in the kernel configuration, the options - to build the i2400m usb or sdio drivers will not show. 
enable said - subsystems and go back to the wimax menu to enable the drivers. - - compile and install your kernel as usual. - -2.2. compilation of the drivers distributed as an standalone module -------------------------------------------------------------------- - - to compile:: - - $ cd source/directory - $ make - - once built you can load and unload using the provided load.sh script; - load.sh will load the modules, load.sh u will unload them. - - to install in the default kernel directories (and enable auto loading - when the device is plugged):: - - $ make install - $ depmod -a - - if your kernel development files are located in a non standard - directory or if you want to build for a kernel that is not the - currently running one, set kdir to the right location:: - - $ make kdir=/path/to/kernel/dev/tree - - for more information, please contact linux-wimax@intel.com. - -3. installing the firmware --------------------------- - - the firmware can be obtained from http://linuxwimax.org or might have - been supplied with your hardware. - - it has to be installed in the target system:: - - $ cp firmwarefile.sbcf /lib/firmware/i2400m-fw-bustype-1.3.sbcf - - * note: if your firmware came in an .rpm or .deb file, just install - it as normal, with the rpm (rpm -i firmware.rpm) or dpkg - (dpkg -i firmware.deb) commands. no further action is needed. - * bustype will be usb or sdio, depending on the hardware you have. - each hardware type comes with its own firmware and will not work - with other types. - -4. design -========= - - this package contains two major parts: a wimax kernel stack and a - driver for the intel i2400m. - - the wimax stack is designed to provide for common wimax control - services to current and future wimax devices from any vendor; please - see readme.wimax for details. - - the i2400m kernel driver is broken up in two main parts: the bus - generic driver and the bus-specific drivers. 
the bus generic driver - forms the drivercore and contain no knowledge of the actual method we - use to connect to the device. the bus specific drivers are just the - glue to connect the bus-generic driver and the device. currently only - usb and sdio are supported. see drivers/net/wimax/i2400m/i2400m.h for - more information. - - the bus generic driver is logically broken up in two parts: os-glue and - hardware-glue. the os-glue interfaces with linux. the hardware-glue - interfaces with the device on using an interface provided by the - bus-specific driver. the reason for this breakup is to be able to - easily reuse the hardware-glue to write drivers for other oses; note - the hardware glue part is written as a native linux driver; no - abstraction layers are used, so to port to another os, the linux kernel - api calls should be replaced with the target os's. - -5. usage -======== - - to load the driver, follow the instructions in the install section; - once the driver is loaded, plug in the device (unless it is permanently - plugged in). the driver will enumerate the device, upload the firmware - and output messages in the kernel log (dmesg, /var/log/messages or - /var/log/kern.log) such as:: - - ... - i2400m_usb 5-4:1.0: firmware interface version 8.0.0 - i2400m_usb 5-4:1.0: wimax interface wmx0 (00:1d:e1:01:94:2c) ready - - at this point the device is ready to work. - - current versions require the intel wimax network service in userspace - to make things work. see the network service's readme for instructions - on how to scan, connect and disconnect. - -5.1. 
module parameters ----------------------- - - module parameters can be set at kernel or module load time or by - echoing values:: - - $ echo value > /sys/module/modulename/parameters/parametername - - to make changes permanent, for example, for the i2400m module, you can - also create a file named /etc/modprobe.d/i2400m containing:: - - options i2400m idle_mode_disabled=1 - - to find which parameters are supported by a module, run:: - - $ modinfo path/to/module.ko - - during kernel bootup (if the driver is linked in the kernel), specify - the following to the kernel command line:: - - i2400m.parameter=value - -5.1.1. i2400m: idle_mode_disabled -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - - the i2400m module supports a parameter to disable idle mode. this - parameter, once set, will take effect only when the device is - reinitialized by the driver (eg: following a reset or a reconnect). - -5.2. debug operations: debugfs entries --------------------------------------- - - the driver will register debugfs entries that allow the user to tweak - debug settings. there are three main container directories where - entries are placed, which correspond to the three blocks a i2400m wimax - driver has: - - * /sys/kernel/debug/wimax:devname/ for the generic wimax stack - controls - * /sys/kernel/debug/wimax:devname/i2400m for the i2400m generic - driver controls - * /sys/kernel/debug/wimax:devname/i2400m-usb (or -sdio) for the - bus-specific i2400m-usb or i2400m-sdio controls). - - of course, if debugfs is mounted in a directory other than - /sys/kernel/debug, those paths will change. - -5.2.1. 
increasing debug output -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - - the files named *dl_* indicate knobs for controlling the debug output - of different submodules:: - - # find /sys/kernel/debug/wimax\:wmx0 -name \*dl_\* - /sys/kernel/debug/wimax:wmx0/i2400m-usb/dl_tx - /sys/kernel/debug/wimax:wmx0/i2400m-usb/dl_rx - /sys/kernel/debug/wimax:wmx0/i2400m-usb/dl_notif - /sys/kernel/debug/wimax:wmx0/i2400m-usb/dl_fw - /sys/kernel/debug/wimax:wmx0/i2400m-usb/dl_usb - /sys/kernel/debug/wimax:wmx0/i2400m/dl_tx - /sys/kernel/debug/wimax:wmx0/i2400m/dl_rx - /sys/kernel/debug/wimax:wmx0/i2400m/dl_rfkill - /sys/kernel/debug/wimax:wmx0/i2400m/dl_netdev - /sys/kernel/debug/wimax:wmx0/i2400m/dl_fw - /sys/kernel/debug/wimax:wmx0/i2400m/dl_debugfs - /sys/kernel/debug/wimax:wmx0/i2400m/dl_driver - /sys/kernel/debug/wimax:wmx0/i2400m/dl_control - /sys/kernel/debug/wimax:wmx0/wimax_dl_stack - /sys/kernel/debug/wimax:wmx0/wimax_dl_op_rfkill - /sys/kernel/debug/wimax:wmx0/wimax_dl_op_reset - /sys/kernel/debug/wimax:wmx0/wimax_dl_op_msg - /sys/kernel/debug/wimax:wmx0/wimax_dl_id_table - /sys/kernel/debug/wimax:wmx0/wimax_dl_debugfs - - by reading the file you can obtain the current value of said debug - level; by writing to it, you can set it. - - to increase the debug level of, for example, the i2400m's generic tx - engine, just write:: - - $ echo 3 > /sys/kernel/debug/wimax:wmx0/i2400m/dl_tx - - increasing numbers yield increasing debug information; for details of - what is printed and the available levels, check the source. the code - uses 0 for disabled and increasing values until 8. - -5.2.2. 
rx and tx statistics -^^^^^^^^^^^^^^^^^^^^^^^^^^^ - - the i2400m/rx_stats and i2400m/tx_stats provide statistics about the - data reception/delivery from the device:: - - $ cat /sys/kernel/debug/wimax:wmx0/i2400m/rx_stats - 45 1 3 34 3104 48 480 - - the numbers reported are: - - * packets/rx-buffer: total, min, max - * rx-buffers: total rx buffers received, accumulated rx buffer size - in bytes, min size received, max size received - - thus, to find the average buffer size received, divide accumulated - rx-buffer / total rx-buffers. - - to clear the statistics back to 0, write anything to the rx_stats file:: - - $ echo 1 > /sys/kernel/debug/wimax:wmx0/i2400m_rx_stats - - likewise for tx. - - note the packets this debug file refers to are not network packet, but - packets in the sense of the device-specific protocol for communication - to the host. see drivers/net/wimax/i2400m/tx.c. - -5.2.3. tracing messages received from user space -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - - to echo messages received from user space into the trace pipe that the - i2400m driver creates, set the debug file i2400m/trace_msg_from_user to - 1:: - - $ echo 1 > /sys/kernel/debug/wimax:wmx0/i2400m/trace_msg_from_user - -5.2.4. performing a device reset -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - - by writing a 0, a 1 or a 2 to the file - /sys/kernel/debug/wimax:wmx0/reset, the driver performs a warm (without - disconnecting from the bus), cold (disconnecting from the bus) or bus - (bus specific) reset on the device. - -5.2.5. asking the device to enter power saving mode -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - - by writing any value to the /sys/kernel/debug/wimax:wmx0 file, the - device will attempt to enter power saving mode. - -6. troubleshooting -================== - -6.1. 
driver complains about ''i2400m-fw-usb-1.2.sbcf: request failed'' ----------------------------------------------------------------------- - - if upon connecting the device, the following is output in the kernel - log:: - - i2400m_usb 5-4:1.0: fw i2400m-fw-usb-1.3.sbcf: request failed: -2 - - this means that the driver cannot locate the firmware file named - /lib/firmware/i2400m-fw-usb-1.2.sbcf. check that the file is present in - the right location. diff --git a/drivers/staging/wimax/documentation/index.rst b/drivers/staging/wimax/documentation/index.rst --- a/drivers/staging/wimax/documentation/index.rst +++ /dev/null -.. spdx-license-identifier: gpl-2.0 - -=============== -wimax subsystem -=============== - -.. toctree:: - :maxdepth: 2 - - wimax - - i2400m - -.. only:: subproject and html - - indices - ======= - - * :ref:'genindex' diff --git a/drivers/staging/wimax/documentation/wimax.rst b/drivers/staging/wimax/documentation/wimax.rst --- a/drivers/staging/wimax/documentation/wimax.rst +++ /dev/null -.. include:: <isonum.txt> - -======================== -linux kernel wimax stack -======================== - -:copyright: |copy| 2008 intel corporation < linux-wimax@intel.com > - - this provides a basic linux kernel wimax stack to provide a common - control api for wimax devices, usable from kernel and user space. - -1. design -========= - - the wimax stack is designed to provide for common wimax control - services to current and future wimax devices from any vendor. - - because currently there is only one and we don't know what would be the - common services, the apis it currently provides are very minimal. - however, it is done in such a way that it is easily extensible to - accommodate future requirements. - - the stack works by embedding a struct wimax_dev in your device's - control structures. this provides a set of callbacks that the wimax - stack will call in order to implement control operations requested by - the user. 
as well, the stack provides api functions that the driver - calls to notify about changes of state in the device. - - the stack exports the api calls needed to control the device to user - space using generic netlink as a marshalling mechanism. you can access - them using your own code or use the wrappers provided for your - convenience in libwimax (in the wimax-tools package). - - for detailed information on the stack, please see - include/linux/wimax.h. - -2. usage -======== - - for usage in a driver (registration, api, etc) please refer to the - instructions in the header file include/linux/wimax.h. - - when a device is registered with the wimax stack, a set of debugfs - files will appear in /sys/kernel/debug/wimax:wmxx that you can tweak for - control. - -2.1. obtaining debug information: debugfs entries -------------------------------------------------- - - the wimax stack is compiled, by default, with debug messages that can - be used to diagnose issues. by default, said messages are disabled. - - the drivers will register debugfs entries that allow the user to tweak - debug settings. - - each driver, when registering with the stack, will cause a debugfs - directory named wimax:devicename to be created; optionally, it might - create more subentries below it. - -2.1.1. increasing debug output ------------------------------- - - the files named *dl_* indicate knobs for controlling the debug output - of different submodules of the wimax stack:: - - # find /sys/kernel/debug/wimax\:wmx0 -name \*dl_\* - /sys/kernel/debug/wimax:wmx0/wimax_dl_stack - /sys/kernel/debug/wimax:wmx0/wimax_dl_op_rfkill - /sys/kernel/debug/wimax:wmx0/wimax_dl_op_reset - /sys/kernel/debug/wimax:wmx0/wimax_dl_op_msg - /sys/kernel/debug/wimax:wmx0/wimax_dl_id_table - /sys/kernel/debug/wimax:wmx0/wimax_dl_debugfs - /sys/kernel/debug/wimax:wmx0/.... # other driver specific files - - note: - of course, if debugfs is mounted in a directory other than - /sys/kernel/debug, those paths will change.
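The debug-level knobs listed above all follow the same layout: one file per submodule under the per-device debugfs directory, accepting levels 0 (disabled) through 8. As an illustration only, a minimal userspace sketch of that convention (the helper names `wimax_dl_valid` and `wimax_dl_path` are invented for this example, they are not part of the stack):

```c
#include <stdio.h>

/* levels run from 0 (disabled) up to 8, per the stack's convention */
static int wimax_dl_valid(int level)
{
    return level >= 0 && level <= 8;
}

/* build the debugfs knob path for a given interface and submodule;
 * assumes debugfs is mounted at the default /sys/kernel/debug.
 * returns 0 on success, -1 if the buffer is too small. */
static int wimax_dl_path(char *buf, size_t len,
                         const char *ifname, const char *submodule)
{
    int n = snprintf(buf, len, "/sys/kernel/debug/wimax:%s/wimax_dl_%s",
                     ifname, submodule);
    return (n < 0 || (size_t)n >= len) ? -1 : 0;
}
```

Writing a validated level to the path built this way is then equivalent to the `echo 3 > .../wimax_dl_id_table` example shown later in this document.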
- - by reading the file you can obtain the current value of said debug - level; by writing to it, you can set it. - - to increase the debug level of, for example, the id-table submodule, - just write: - - $ echo 3 > /sys/kernel/debug/wimax:wmx0/wimax_dl_id_table - - increasing numbers yield increasing debug information; for details of - what is printed and the available levels, check the source. the code - uses 0 for disabled and increasing values until 8. diff --git a/drivers/staging/wimax/kconfig b/drivers/staging/wimax/kconfig --- a/drivers/staging/wimax/kconfig +++ /dev/null -# spdx-license-identifier: gpl-2.0-only -# -# wimax lan device configuration -# - -menuconfig wimax - tristate "wimax wireless broadband support" - depends on net - depends on rfkill || !rfkill - help - - select to configure support for devices that provide - wireless broadband connectivity using the wimax protocol - (ieee 802.16). - - please note that most of these devices require signing up - for a service plan with a provider. - - the different wimax drivers can be enabled in the menu entry - - device drivers > network device support > wimax wireless - broadband devices - - if unsure, it is safe to select m (module). - -if wimax - -config wimax_debug_level - int "wimax debug level" - depends on wimax - default 8 - help - - select the maximum debug verbosity level to be compiled into - the wimax stack code. - - by default, debug messages are disabled at runtime and can - be selectively enabled for different parts of the code using - the sysfs debug-levels file. - - if set at zero, this will compile out all the debug code. - - it is recommended that it is left at 8. 
- -source "drivers/staging/wimax/i2400m/kconfig" - -endif diff --git a/drivers/staging/wimax/makefile b/drivers/staging/wimax/makefile --- a/drivers/staging/wimax/makefile +++ /dev/null -# spdx-license-identifier: gpl-2.0 - -obj-$(config_wimax) += wimax.o - -wimax-y := \ - id-table.o \ - op-msg.o \ - op-reset.o \ - op-rfkill.o \ - op-state-get.o \ - stack.o - -wimax-$(config_debug_fs) += debugfs.o - -obj-$(config_wimax_i2400m) += i2400m/ diff --git a/drivers/staging/wimax/todo b/drivers/staging/wimax/todo --- a/drivers/staging/wimax/todo +++ /dev/null -there are no known users of this driver as of october 2020, and it will -be removed unless someone turns out to still need it in future releases. - -according to https://en.wikipedia.org/wiki/list_of_wimax_networks, there -have been many public wimax networks, but it appears that many of these -have migrated to lte or discontinued their service altogether. as most -pcs and phones lack wimax hardware support, the remaining networks tend -to use standalone routers. these almost certainly run linux, but not a -modern kernel or the mainline wimax driver stack. - -networkmanager appears to have dropped userspace support in 2015 -https://bugzilla.gnome.org/show_bug.cgi?id=747846, the www.linuxwimax.org -site had already shut down earlier. - -wimax is apparently still being deployed on airport campus networks -("aeromacs"), but in a frequency band that was not supported by the old -intel 2400m (used in sandy bridge laptops and earlier), which is the -only driver using the kernel's wimax stack. 
diff --git a/drivers/staging/wimax/debug-levels.h b/drivers/staging/wimax/debug-levels.h --- a/drivers/staging/wimax/debug-levels.h +++ /dev/null -/* spdx-license-identifier: gpl-2.0-only */ -/* - * linux wimax stack - * debug levels control file for the wimax module - * - * copyright (c) 2007-2008 intel corporation <linux-wimax@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - */ -#ifndef __debug_levels__h__ -#define __debug_levels__h__ - -/* maximum compile and run time debug level for all submodules */ -#define d_modulename wimax -#define d_master config_wimax_debug_level - -#include "linux-wimax-debug.h" - -/* list of all the enabled modules */ -enum d_module { - d_submodule_declare(debugfs), - d_submodule_declare(id_table), - d_submodule_declare(op_msg), - d_submodule_declare(op_reset), - d_submodule_declare(op_rfkill), - d_submodule_declare(op_state_get), - d_submodule_declare(stack), -}; - -#endif /* #ifndef __debug_levels__h__ */ diff --git a/drivers/staging/wimax/debugfs.c b/drivers/staging/wimax/debugfs.c --- a/drivers/staging/wimax/debugfs.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-only -/* - * linux wimax - * debugfs support - * - * copyright (c) 2005-2006 intel corporation <linux-wimax@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - */ -#include <linux/debugfs.h> -#include "linux-wimax.h" -#include "wimax-internal.h" - -#define d_submodule debugfs -#include "debug-levels.h" - -void wimax_debugfs_add(struct wimax_dev *wimax_dev) -{ - struct net_device *net_dev = wimax_dev->net_dev; - struct dentry *dentry; - char buf[128]; - - snprintf(buf, sizeof(buf), "wimax:%s", net_dev->name); - dentry = debugfs_create_dir(buf, null); - wimax_dev->debugfs_dentry = dentry; - - d_level_register_debugfs("wimax_dl_", debugfs, dentry); - d_level_register_debugfs("wimax_dl_", id_table, dentry); - d_level_register_debugfs("wimax_dl_", op_msg, dentry); - d_level_register_debugfs("wimax_dl_", op_reset, dentry); - 
d_level_register_debugfs("wimax_dl_", op_rfkill, dentry); - d_level_register_debugfs("wimax_dl_", op_state_get, dentry); - d_level_register_debugfs("wimax_dl_", stack, dentry); -} - -void wimax_debugfs_rm(struct wimax_dev *wimax_dev) -{ - debugfs_remove_recursive(wimax_dev->debugfs_dentry); -} diff --git a/drivers/staging/wimax/i2400m/kconfig b/drivers/staging/wimax/i2400m/kconfig --- a/drivers/staging/wimax/i2400m/kconfig +++ /dev/null -# spdx-license-identifier: gpl-2.0-only - -config wimax_i2400m - tristate - depends on wimax - select fw_loader - -comment "enable usb support to see wimax usb drivers" - depends on usb = n - -config wimax_i2400m_usb - tristate "intel wireless wimax connection 2400 over usb (including 5x50)" - depends on wimax && usb - select wimax_i2400m - help - select if you have a device based on the intel wimax - connection 2400 over usb (like any of the intel wireless - wimax/wifi link 5x50 series). - - if unsure, it is safe to select m (module). - -config wimax_i2400m_debug_level - int "wimax i2400m debug level" - depends on wimax_i2400m - default 8 - help - - select the maximum debug verbosity level to be compiled into - the wimax i2400m driver code. - - by default, this is disabled at runtime and can be - selectively enabled at runtime for different parts of the - code using the sysfs debug-levels file. - - if set at zero, this will compile out all the debug code. - - it is recommended that it is left at 8. 
diff --git a/drivers/staging/wimax/i2400m/makefile b/drivers/staging/wimax/i2400m/makefile --- a/drivers/staging/wimax/i2400m/makefile +++ /dev/null -# spdx-license-identifier: gpl-2.0 - -obj-$(config_wimax_i2400m) += i2400m.o -obj-$(config_wimax_i2400m_usb) += i2400m-usb.o - -i2400m-y := \ - control.o \ - driver.o \ - fw.o \ - op-rfkill.o \ - sysfs.o \ - netdev.o \ - tx.o \ - rx.o - -i2400m-$(config_debug_fs) += debugfs.o - -i2400m-usb-y := \ - usb-fw.o \ - usb-notif.o \ - usb-tx.o \ - usb-rx.o \ - usb.o diff --git a/drivers/staging/wimax/i2400m/control.c b/drivers/staging/wimax/i2400m/control.c --- a/drivers/staging/wimax/i2400m/control.c +++ /dev/null -/* - * intel wireless wimax connection 2400m - * miscellaneous control functions for managing the device - * - * - * copyright (c) 2007-2008 intel corporation. all rights reserved. - * - * redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * neither the name of intel corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * this software is provided by the copyright holders and contributors - * "as is" and any express or implied warranties, including, but not - * limited to, the implied warranties of merchantability and fitness for - * a particular purpose are disclaimed. 
in no event shall the copyright - * owner or contributors be liable for any direct, indirect, incidental, - * special, exemplary, or consequential damages (including, but not - * limited to, procurement of substitute goods or services; loss of use, - * data, or profits; or business interruption) however caused and on any - * theory of liability, whether in contract, strict liability, or tort - * (including negligence or otherwise) arising in any way out of the use - * of this software, even if advised of the possibility of such damage. - * - * - * intel corporation <linux-wimax@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - * - initial implementation - * - * this is a collection of functions used to control the device (plus - * a few helpers). - * - * there are utilities for handling tlv buffers, hooks on the device's - * reports to act on device changes of state [i2400m_report_hook()], - * on acks to commands [i2400m_msg_ack_hook()], a helper for sending - * commands to the device and blocking until a reply arrives - * [i2400m_msg_to_dev()], a few high level commands for manipulating - * the device state, powersaving mode and configuration plus the - * routines to setup the device once communication is established with - * it [i2400m_dev_initialize()].
- * - * roadmap - * - * i2400m_dev_initialize() called by i2400m_dev_start() - * i2400m_set_init_config() - * i2400m_cmd_get_state() - * i2400m_dev_shutdown() called by i2400m_dev_stop() - * i2400m_reset() - * - * i2400m_{cmd,get,set}_*() - * i2400m_msg_to_dev() - * i2400m_msg_check_status() - * - * i2400m_report_hook() called on reception of an event - * i2400m_report_state_hook() - * i2400m_tlv_buffer_walk() - * i2400m_tlv_match() - * i2400m_report_tlv_system_state() - * i2400m_report_tlv_rf_switches_status() - * i2400m_report_tlv_media_status() - * i2400m_cmd_enter_powersave() - * - * i2400m_msg_ack_hook() called on reception of a reply to a - * command, get or set - */ - -#include <stdarg.h> -#include "i2400m.h" -#include <linux/kernel.h> -#include <linux/slab.h> -#include "linux-wimax-i2400m.h" -#include <linux/export.h> -#include <linux/moduleparam.h> - - -#define d_submodule control -#include "debug-levels.h" - -static int i2400m_idle_mode_disabled;/* 0 (idle mode enabled) by default */ -module_param_named(idle_mode_disabled, i2400m_idle_mode_disabled, int, 0644); -module_parm_desc(idle_mode_disabled, - "if true, the device will not enable idle mode negotiation " - "with the base station (when connected) to save power."); - -/* 0 (power saving enabled) by default */ -static int i2400m_power_save_disabled; -module_param_named(power_save_disabled, i2400m_power_save_disabled, int, 0644); -module_parm_desc(power_save_disabled, - "if true, the driver will not tell the device to enter " - "power saving mode when it reports it is ready for it. 
" - "false by default (so the device is told to do power " - "saving)."); - -static int i2400m_passive_mode; /* 0 (passive mode disabled) by default */ -module_param_named(passive_mode, i2400m_passive_mode, int, 0644); -module_parm_desc(passive_mode, - "if true, the driver will not do any device setup " - "and leave it up to user space, who must be properly " - "setup."); - - -/* - * return if a tlv is of a give type and size - * - * @tlv_hdr: pointer to the tlv - * @tlv_type: type of the tlv we are looking for - * @tlv_size: expected size of the tlv we are looking for (if -1, - * don't check the size). this includes the header - * returns: 0 if the tlv matches - * < 0 if it doesn't match at all - * > 0 total tlv + payload size, if the type matches, but not - * the size - */ -static -ssize_t i2400m_tlv_match(const struct i2400m_tlv_hdr *tlv, - enum i2400m_tlv tlv_type, ssize_t tlv_size) -{ - if (le16_to_cpu(tlv->type) != tlv_type) /* not our type? skip */ - return -1; - if (tlv_size != -1 - && le16_to_cpu(tlv->length) + sizeof(*tlv) != tlv_size) { - size_t size = le16_to_cpu(tlv->length) + sizeof(*tlv); - printk(kern_warning "w: tlv type 0x%x mismatched because of " - "size (got %zu vs %zd expected) ", - tlv_type, size, tlv_size); - return size; - } - return 0; -} - - -/* - * given a buffer of tlvs, iterate over them - * - * @i2400m: device instance - * @tlv_buf: pointer to the beginning of the tlv buffer - * @buf_size: buffer size in bytes - * @tlv_pos: seek position; this is assumed to be a pointer returned - * by i2400m_tlv_buffer_walk() [and thus, validated]. the - * tlv returned will be the one following this one. - * - * usage: - * - * tlv_itr = null; - * while (tlv_itr = i2400m_tlv_buffer_walk(i2400m, buf, size, tlv_itr)) { - * ... - * // do stuff with tlv_itr, don't modify it - * ... 
- * } - */ -static -const struct i2400m_tlv_hdr *i2400m_tlv_buffer_walk( - struct i2400m *i2400m, - const void *tlv_buf, size_t buf_size, - const struct i2400m_tlv_hdr *tlv_pos) -{ - struct device *dev = i2400m_dev(i2400m); - const struct i2400m_tlv_hdr *tlv_top = tlv_buf + buf_size; - size_t offset, length, avail_size; - unsigned type; - - if (tlv_pos == null) /* take the first one? */ - tlv_pos = tlv_buf; - else /* nope, the next one */ - tlv_pos = (void *) tlv_pos - + le16_to_cpu(tlv_pos->length) + sizeof(*tlv_pos); - if (tlv_pos == tlv_top) { /* buffer done */ - tlv_pos = null; - goto error_beyond_end; - } - if (tlv_pos > tlv_top) { - tlv_pos = null; - warn_on(1); - goto error_beyond_end; - } - offset = (void *) tlv_pos - (void *) tlv_buf; - avail_size = buf_size - offset; - if (avail_size < sizeof(*tlv_pos)) { - dev_err(dev, "hw bug? tlv_buf %p [%zu bytes], tlv @%zu: " - "short header ", tlv_buf, buf_size, offset); - goto error_short_header; - } - type = le16_to_cpu(tlv_pos->type); - length = le16_to_cpu(tlv_pos->length); - if (avail_size < sizeof(*tlv_pos) + length) { - dev_err(dev, "hw bug? tlv_buf %p [%zu bytes], " - "tlv type 0x%04x @%zu: " - "short data (%zu bytes vs %zu needed) ", - tlv_buf, buf_size, type, offset, avail_size, - sizeof(*tlv_pos) + length); - goto error_short_header; - } -error_short_header: -error_beyond_end: - return tlv_pos; -} - - -/* - * find a tlv in a buffer of sequential tlvs - * - * @i2400m: device descriptor - * @tlv_hdr: pointer to the first tlv in the sequence - * @size: size of the buffer in bytes; all tlvs are assumed to fit - * fully in the buffer (otherwise we'll complain). - * @tlv_type: type of the tlv we are looking for - * @tlv_size: expected size of the tlv we are looking for (if -1, - * don't check the size). this includes the header - * - * returns: null if the tlv is not found, otherwise a pointer to - * it. if the sizes don't match, an error is printed and null - * returned. 
- */ -static -const struct i2400m_tlv_hdr *i2400m_tlv_find( - struct i2400m *i2400m, - const struct i2400m_tlv_hdr *tlv_hdr, size_t size, - enum i2400m_tlv tlv_type, ssize_t tlv_size) -{ - ssize_t match; - struct device *dev = i2400m_dev(i2400m); - const struct i2400m_tlv_hdr *tlv = null; - while ((tlv = i2400m_tlv_buffer_walk(i2400m, tlv_hdr, size, tlv))) { - match = i2400m_tlv_match(tlv, tlv_type, tlv_size); - if (match == 0) /* found it :) */ - break; - if (match > 0) - dev_warn(dev, "tlv type 0x%04x found with size " - "mismatch (%zu vs %zd needed) ", - tlv_type, match, tlv_size); - } - return tlv; -} - - -static const struct -{ - char *msg; - int errno; -} ms_to_errno[i2400m_ms_max] = { - [i2400m_ms_done_ok] = { "", 0 }, - [i2400m_ms_done_in_progress] = { "", 0 }, - [i2400m_ms_invalid_op] = { "invalid opcode", -enosys }, - [i2400m_ms_bad_state] = { "invalid state", -eilseq }, - [i2400m_ms_illegal_value] = { "illegal value", -einval }, - [i2400m_ms_missing_params] = { "missing parameters", -enomsg }, - [i2400m_ms_version_error] = { "bad version", -eio }, - [i2400m_ms_accessibility_error] = { "accesibility error", -eio }, - [i2400m_ms_busy] = { "busy", -ebusy }, - [i2400m_ms_corrupted_tlv] = { "corrupted tlv", -eilseq }, - [i2400m_ms_uninitialized] = { "uninitialized", -eilseq }, - [i2400m_ms_unknown_error] = { "unknown error", -eio }, - [i2400m_ms_production_error] = { "production error", -eio }, - [i2400m_ms_no_rf] = { "no rf", -eio }, - [i2400m_ms_not_ready_for_powersave] = - { "not ready for powersave", -eacces }, - [i2400m_ms_thermal_critical] = { "thermal critical", -el3hlt }, -}; - - -/* - * i2400m_msg_check_status - translate a message's status code - * - * @i2400m: device descriptor - * @l3l4_hdr: message header - * @strbuf: buffer to place a formatted error message (unless null). - * @strbuf_size: max amount of available space; larger messages will - * be truncated. 
- * - * returns: errno code corresponding to the status code in @l3l4_hdr - * and a message in @strbuf describing the error. - */ -int i2400m_msg_check_status(const struct i2400m_l3l4_hdr *l3l4_hdr, - char *strbuf, size_t strbuf_size) -{ - int result; - enum i2400m_ms status = le16_to_cpu(l3l4_hdr->status); - const char *str; - - if (status == 0) - return 0; - if (status >= array_size(ms_to_errno)) { - str = "unknown status code"; - result = -ebadr; - } else { - str = ms_to_errno[status].msg; - result = ms_to_errno[status].errno; - } - if (strbuf) - snprintf(strbuf, strbuf_size, "%s (%d)", str, status); - return result; -} - - -/* - * act on a tlv system state reported by the device - * - * @i2400m: device descriptor - * @ss: validated system state tlv - */ -static -void i2400m_report_tlv_system_state(struct i2400m *i2400m, - const struct i2400m_tlv_system_state *ss) -{ - struct device *dev = i2400m_dev(i2400m); - struct wimax_dev *wimax_dev = &i2400m->wimax_dev; - enum i2400m_system_state i2400m_state = le32_to_cpu(ss->state); - - d_fnstart(3, dev, "(i2400m %p ss %p [%u]) ", i2400m, ss, i2400m_state); - - if (i2400m->state != i2400m_state) { - i2400m->state = i2400m_state; - wake_up_all(&i2400m->state_wq); - } - switch (i2400m_state) { - case i2400m_ss_uninitialized: - case i2400m_ss_init: - case i2400m_ss_config: - case i2400m_ss_production: - wimax_state_change(wimax_dev, wimax_st_uninitialized); - break; - - case i2400m_ss_rf_off: - case i2400m_ss_rf_shutdown: - wimax_state_change(wimax_dev, wimax_st_radio_off); - break; - - case i2400m_ss_ready: - case i2400m_ss_standby: - case i2400m_ss_sleepactive: - wimax_state_change(wimax_dev, wimax_st_ready); - break; - - case i2400m_ss_connecting: - case i2400m_ss_wimax_connected: - wimax_state_change(wimax_dev, wimax_st_ready); - break; - - case i2400m_ss_scan: - case i2400m_ss_out_of_zone: - wimax_state_change(wimax_dev, wimax_st_scanning); - break; - - case i2400m_ss_idle: - d_printf(1, dev, "entering bs-negotiated 
idle mode "); - fallthrough; - case i2400m_ss_disconnecting: - case i2400m_ss_data_path_connected: - wimax_state_change(wimax_dev, wimax_st_connected); - break; - - default: - /* huh? just in case, shut it down */ - dev_err(dev, "hw bug? unknown state %u: shutting down ", - i2400m_state); - i2400m_reset(i2400m, i2400m_rt_warm); - break; - } - d_fnend(3, dev, "(i2400m %p ss %p [%u]) = void ", - i2400m, ss, i2400m_state); -} - - -/* - * parse and act on a tlv media status sent by the device - * - * @i2400m: device descriptor - * @ms: validated media status tlv - * - * this will set the carrier up on down based on the device's link - * report. this is done asides of what the wimax stack does based on - * the device's state as sometimes we need to do a link-renew (the bs - * wants us to renew a dhcp lease, for example). - * - * in fact, doc says that every time we get a link-up, we should do a - * dhcp negotiation... - */ -static -void i2400m_report_tlv_media_status(struct i2400m *i2400m, - const struct i2400m_tlv_media_status *ms) -{ - struct device *dev = i2400m_dev(i2400m); - struct wimax_dev *wimax_dev = &i2400m->wimax_dev; - struct net_device *net_dev = wimax_dev->net_dev; - enum i2400m_media_status status = le32_to_cpu(ms->media_status); - - d_fnstart(3, dev, "(i2400m %p ms %p [%u]) ", i2400m, ms, status); - - switch (status) { - case i2400m_media_status_link_up: - netif_carrier_on(net_dev); - break; - case i2400m_media_status_link_down: - netif_carrier_off(net_dev); - break; - /* - * this is the network telling us we need to retrain the dhcp - * lease -- so far, we are trusting the wimax network service - * in user space to pick this up and poke the dhcp client. - */ - case i2400m_media_status_link_renew: - netif_carrier_on(net_dev); - break; - default: - dev_err(dev, "hw bug? 
unknown media status %u ", - status); - } - d_fnend(3, dev, "(i2400m %p ms %p [%u]) = void ", - i2400m, ms, status); -} - - -/* - * process a tlv from a 'state report' - * - * @i2400m: device descriptor - * @tlv: pointer to the tlv header; it has been already validated for - * consistent size. - * @tag: for error messages - * - * act on the tlvs from a 'state report'. - */ -static -void i2400m_report_state_parse_tlv(struct i2400m *i2400m, - const struct i2400m_tlv_hdr *tlv, - const char *tag) -{ - struct device *dev = i2400m_dev(i2400m); - const struct i2400m_tlv_media_status *ms; - const struct i2400m_tlv_system_state *ss; - const struct i2400m_tlv_rf_switches_status *rfss; - - if (0 == i2400m_tlv_match(tlv, i2400m_tlv_system_state, sizeof(*ss))) { - ss = container_of(tlv, typeof(*ss), hdr); - d_printf(2, dev, "%s: system state tlv " - "found (0x%04x), state 0x%08x ", - tag, i2400m_tlv_system_state, - le32_to_cpu(ss->state)); - i2400m_report_tlv_system_state(i2400m, ss); - } - if (0 == i2400m_tlv_match(tlv, i2400m_tlv_rf_status, sizeof(*rfss))) { - rfss = container_of(tlv, typeof(*rfss), hdr); - d_printf(2, dev, "%s: rf status tlv " - "found (0x%04x), sw 0x%02x hw 0x%02x ", - tag, i2400m_tlv_rf_status, - rfss->sw_rf_switch, - rfss->hw_rf_switch); - i2400m_report_tlv_rf_switches_status(i2400m, rfss); - } - if (0 == i2400m_tlv_match(tlv, i2400m_tlv_media_status, sizeof(*ms))) { - ms = container_of(tlv, typeof(*ms), hdr); - d_printf(2, dev, "%s: media status tlv: %u ", - tag, le32_to_cpu(ms->media_status)); - i2400m_report_tlv_media_status(i2400m, ms); - } -} - - -/* - * parse a 'state report' and extract information - * - * @i2400m: device descriptor - * @l3l4_hdr: pointer to message; it has been already validated for - * consistent size. - * @size: size of the message (header + payload). 
the header length - declaration is assumed to be congruent with @size (as in - sizeof(*l3l4_hdr) + l3l4_hdr->length == size) - * - * walk over the tlvs in a report state and act on them. - */ -static -void i2400m_report_state_hook(struct i2400m *i2400m, - const struct i2400m_l3l4_hdr *l3l4_hdr, - size_t size, const char *tag) -{ - struct device *dev = i2400m_dev(i2400m); - const struct i2400m_tlv_hdr *tlv; - size_t tlv_size = le16_to_cpu(l3l4_hdr->length); - - d_fnstart(4, dev, "(i2400m %p, l3l4_hdr %p, size %zu, %s) ", - i2400m, l3l4_hdr, size, tag); - tlv = null; - - while ((tlv = i2400m_tlv_buffer_walk(i2400m, &l3l4_hdr->pl, - tlv_size, tlv))) - i2400m_report_state_parse_tlv(i2400m, tlv, tag); - d_fnend(4, dev, "(i2400m %p, l3l4_hdr %p, size %zu, %s) = void ", - i2400m, l3l4_hdr, size, tag); -} - - -/* - * i2400m_report_hook - (maybe) act on a report - * - * @i2400m: device descriptor - * @l3l4_hdr: pointer to message; it has been already validated for - * consistent size. - * @size: size of the message (header + payload). the header length - * declaration is assumed to be congruent with @size (as in - * sizeof(*l3l4_hdr) + l3l4_hdr->length == size) - * - * extract information we might need (like carrier on/off) from a - * device report. - */ -void i2400m_report_hook(struct i2400m *i2400m, - const struct i2400m_l3l4_hdr *l3l4_hdr, size_t size) -{ - struct device *dev = i2400m_dev(i2400m); - unsigned msg_type; - - d_fnstart(3, dev, "(i2400m %p l3l4_hdr %p size %zu) ", - i2400m, l3l4_hdr, size); - /* chew on the message, we might need some information from - * here */ - msg_type = le16_to_cpu(l3l4_hdr->type); - switch (msg_type) { - case i2400m_mt_report_state: /* carrier detection... */ - i2400m_report_state_hook(i2400m, - l3l4_hdr, size, "report state"); - break; - /* if the device is ready for power save, then ask it to do - * it.
*/ - case i2400m_mt_report_powersave_ready: /* zzzzz */ - if (l3l4_hdr->status == cpu_to_le16(i2400m_ms_done_ok)) { - if (i2400m_power_save_disabled) - d_printf(1, dev, "ready for powersave, " - "not requesting (disabled by module " - "parameter) "); - else { - d_printf(1, dev, "ready for powersave, " - "requesting "); - i2400m_cmd_enter_powersave(i2400m); - } - } - break; - } - d_fnend(3, dev, "(i2400m %p l3l4_hdr %p size %zu) = void ", - i2400m, l3l4_hdr, size); -} - - -/* - * i2400m_msg_ack_hook - process cmd/set/get ack for internal status - * - * @i2400m: device descriptor - * @l3l4_hdr: pointer to message; it has been already validated for - * consistent size. - * @size: size of the message - * - * extract information we might need from acks to commands and act on - * it. this is akin to i2400m_report_hook(). note most of this - * processing should be done in the function that calls the - * command. this is here for some cases where it can't happen... - */ -static void i2400m_msg_ack_hook(struct i2400m *i2400m, - const struct i2400m_l3l4_hdr *l3l4_hdr, - size_t size) -{ - int result; - struct device *dev = i2400m_dev(i2400m); - unsigned int ack_type; - char strerr[32]; - - /* chew on the message, we might need some information from - * here */ - ack_type = le16_to_cpu(l3l4_hdr->type); - switch (ack_type) { - case i2400m_mt_cmd_enter_powersave: - /* this is just left here for the sake of example, as - * the processing is done somewhere else. */ - if (0) { - result = i2400m_msg_check_status( - l3l4_hdr, strerr, sizeof(strerr)); - if (result >= 0) - d_printf(1, dev, "ready for power save: %zd ", - size); - } - break; - } -} - - -/* - * i2400m_msg_size_check() - verify message size and header are congruent - * - * it is ok if the total message size is larger than the expected - * size, as there can be padding. 
- */ -int i2400m_msg_size_check(struct i2400m *i2400m, - const struct i2400m_l3l4_hdr *l3l4_hdr, - size_t msg_size) -{ - int result; - struct device *dev = i2400m_dev(i2400m); - size_t expected_size; - d_fnstart(4, dev, "(i2400m %p l3l4_hdr %p msg_size %zu) ", - i2400m, l3l4_hdr, msg_size); - if (msg_size < sizeof(*l3l4_hdr)) { - dev_err(dev, "bad size for message header " - "(expected at least %zu, got %zu) ", - (size_t) sizeof(*l3l4_hdr), msg_size); - result = -eio; - goto error_hdr_size; - } - expected_size = le16_to_cpu(l3l4_hdr->length) + sizeof(*l3l4_hdr); - if (msg_size < expected_size) { - dev_err(dev, "bad size for message code 0x%04x (expected %zu, " - "got %zu) ", le16_to_cpu(l3l4_hdr->type), - expected_size, msg_size); - result = -eio; - } else - result = 0; -error_hdr_size: - d_fnend(4, dev, - "(i2400m %p l3l4_hdr %p msg_size %zu) = %d ", - i2400m, l3l4_hdr, msg_size, result); - return result; -} - - - -/* - * cancel a wait for a command ack - * - * @i2400m: device descriptor - * @code: [negative] errno code to cancel with (don't use - * -einprogress) - * - * if there is an ack already filled out, free it. - */ -void i2400m_msg_to_dev_cancel_wait(struct i2400m *i2400m, int code) -{ - struct sk_buff *ack_skb; - unsigned long flags; - - spin_lock_irqsave(&i2400m->rx_lock, flags); - ack_skb = i2400m->ack_skb; - if (ack_skb && !is_err(ack_skb)) - kfree_skb(ack_skb); - i2400m->ack_skb = err_ptr(code); - spin_unlock_irqrestore(&i2400m->rx_lock, flags); -} - - -/** - * i2400m_msg_to_dev - send a control message to the device and get a response - * - * @i2400m: device descriptor - * - * @buf: pointer to the buffer containing the message to be sent; it - * has to start with a &struct i2400m_l3l4_hdr and then - * followed by the payload. once this function returns, the - * buffer can be reused. - * - * @buf_len: buffer size - * - * returns: - * - * pointer to skb containing the ack message. 
you need to check the - * pointer with is_err(), as it might be an error code. error codes - * could happen because: - * - * - the message wasn't formatted correctly - * - couldn't send the message - * - failed waiting for a response - * - the ack message wasn't formatted correctly - * - * the returned skb has been allocated with wimax_msg_to_user_alloc(), - * it contains the response in a netlink attribute and is ready to be - * passed up to user space with wimax_msg_to_user_send(). to access - * the payload and its length, use wimax_msg_{data,len}() on the skb. - * - * the skb has to be freed with kfree_skb() once done. - * - * description: - * - * this function delivers a message/command to the device and waits - * for an ack to be received. the format is described in - * linux/wimax/i2400m.h. in summary, a command/get/set is followed by an - * ack. - * - * this function will not check the ack status, that's left up to the - * caller. once done with the ack skb, it has to be kfree_skb()ed. - * - * the i2400m handles only one message at the same time, thus we need - * the mutex to exclude other players. - * - * we write the message and then wait for an answer to come back. the - * rx path intercepts control messages and handles them in - * i2400m_rx_ctl(). reports (notifications) are (maybe) processed - * locally and then forwarded (as needed) to user space on the wimax - * stack message pipe. acks are saved and passed back to us through an - * skb in i2400m->ack_skb which is ready to be given to generic - * netlink if need be. 
- */ -struct sk_buff *i2400m_msg_to_dev(struct i2400m *i2400m, - const void *buf, size_t buf_len) -{ - int result; - struct device *dev = i2400m_dev(i2400m); - const struct i2400m_l3l4_hdr *msg_l3l4_hdr; - struct sk_buff *ack_skb; - const struct i2400m_l3l4_hdr *ack_l3l4_hdr; - size_t ack_len; - int ack_timeout; - unsigned msg_type; - unsigned long flags; - - d_fnstart(3, dev, "(i2400m %p buf %p len %zu) ", - i2400m, buf, buf_len); - - rmb(); /* make sure we see what i2400m_dev_reset_handle() */ - if (i2400m->boot_mode) - return err_ptr(-el3rst); - - msg_l3l4_hdr = buf; - /* check msg & payload consistency */ - result = i2400m_msg_size_check(i2400m, msg_l3l4_hdr, buf_len); - if (result < 0) - goto error_bad_msg; - msg_type = le16_to_cpu(msg_l3l4_hdr->type); - d_printf(1, dev, "cmd/get/set 0x%04x %zu bytes ", - msg_type, buf_len); - d_dump(2, dev, buf, buf_len); - - /* setup the completion, ack_skb ("we are waiting") and send - * the message to the device */ - mutex_lock(&i2400m->msg_mutex); - spin_lock_irqsave(&i2400m->rx_lock, flags); - i2400m->ack_skb = err_ptr(-einprogress); - spin_unlock_irqrestore(&i2400m->rx_lock, flags); - init_completion(&i2400m->msg_completion); - result = i2400m_tx(i2400m, buf, buf_len, i2400m_pt_ctrl); - if (result < 0) { - dev_err(dev, "can't send message 0x%04x: %d ", - le16_to_cpu(msg_l3l4_hdr->type), result); - goto error_tx; - } - - /* some commands take longer to execute because of crypto ops, - * so we give them some more leeway on timeout */ - switch (msg_type) { - case i2400m_mt_get_tls_operation_result: - case i2400m_mt_cmd_send_eap_response: - ack_timeout = 5 * hz; - break; - default: - ack_timeout = hz; - } - - if (unlikely(i2400m->trace_msg_from_user)) - wimax_msg(&i2400m->wimax_dev, "echo", buf, buf_len, gfp_kernel); - /* the rx path in rx.c will put any response for this message - * in i2400m->ack_skb and wake us up. 
if we cancel the wait, - * we need to change the value of i2400m->ack_skb to something - * not -einprogress so rx knows there is no one waiting. */ - result = wait_for_completion_interruptible_timeout( - &i2400m->msg_completion, ack_timeout); - if (result == 0) { - dev_err(dev, "timeout waiting for reply to message 0x%04x ", - msg_type); - result = -etimedout; - i2400m_msg_to_dev_cancel_wait(i2400m, result); - goto error_wait_for_completion; - } else if (result < 0) { - dev_err(dev, "error waiting for reply to message 0x%04x: %d ", - msg_type, result); - i2400m_msg_to_dev_cancel_wait(i2400m, result); - goto error_wait_for_completion; - } - - /* pull out the ack data from i2400m->ack_skb -- see if it is - * an error and act accordingly */ - spin_lock_irqsave(&i2400m->rx_lock, flags); - ack_skb = i2400m->ack_skb; - if (is_err(ack_skb)) - result = ptr_err(ack_skb); - else - result = 0; - i2400m->ack_skb = null; - spin_unlock_irqrestore(&i2400m->rx_lock, flags); - if (result < 0) - goto error_ack_status; - ack_l3l4_hdr = wimax_msg_data_len(ack_skb, &ack_len); - - /* check the ack and deliver it if it is ok */ - if (unlikely(i2400m->trace_msg_from_user)) - wimax_msg(&i2400m->wimax_dev, "echo", - ack_l3l4_hdr, ack_len, gfp_kernel); - result = i2400m_msg_size_check(i2400m, ack_l3l4_hdr, ack_len); - if (result < 0) { - dev_err(dev, "hw bug? reply to message 0x%04x: %d ", - msg_type, result); - goto error_bad_ack_len; - } - if (msg_type != le16_to_cpu(ack_l3l4_hdr->type)) { - dev_err(dev, "hw bug? 
bad reply 0x%04x to message 0x%04x ", - le16_to_cpu(ack_l3l4_hdr->type), msg_type); - result = -eio; - goto error_bad_ack_type; - } - i2400m_msg_ack_hook(i2400m, ack_l3l4_hdr, ack_len); - mutex_unlock(&i2400m->msg_mutex); - d_fnend(3, dev, "(i2400m %p buf %p len %zu) = %p ", - i2400m, buf, buf_len, ack_skb); - return ack_skb; - -error_bad_ack_type: -error_bad_ack_len: - kfree_skb(ack_skb); -error_ack_status: -error_wait_for_completion: -error_tx: - mutex_unlock(&i2400m->msg_mutex); -error_bad_msg: - d_fnend(3, dev, "(i2400m %p buf %p len %zu) = %d ", - i2400m, buf, buf_len, result); - return err_ptr(result); -} - - -/* - * definitions for the enter power save command - * - * the enter power save command requests the device to go into power - * saving mode. the device will ack or nak the command depending on it - * being ready for it. if it acks, we tell the usb subsystem to - * - * as well, the device might request to go into power saving mode by - * sending a report (report_powersave_ready), in which case, we issue - * this command. the hookups in the rx coder allow - */ -enum { - i2400m_wakeup_enabled = 0x01, - i2400m_wakeup_disabled = 0x02, - i2400m_tlv_type_wakeup_mode = 144, -}; - -struct i2400m_cmd_enter_power_save { - struct i2400m_l3l4_hdr hdr; - struct i2400m_tlv_hdr tlv; - __le32 val; -} __packed; - - -/* - * request entering power save - * - * this command is (mainly) executed when the device indicates that it - * is ready to go into powersave mode via a report_powersave_ready. 
- */ -int i2400m_cmd_enter_powersave(struct i2400m *i2400m) -{ - int result; - struct device *dev = i2400m_dev(i2400m); - struct sk_buff *ack_skb; - struct i2400m_cmd_enter_power_save *cmd; - char strerr[32]; - - result = -enomem; - cmd = kzalloc(sizeof(*cmd), gfp_kernel); - if (cmd == null) - goto error_alloc; - cmd->hdr.type = cpu_to_le16(i2400m_mt_cmd_enter_powersave); - cmd->hdr.length = cpu_to_le16(sizeof(*cmd) - sizeof(cmd->hdr)); - cmd->hdr.version = cpu_to_le16(i2400m_l3l4_version); - cmd->tlv.type = cpu_to_le16(i2400m_tlv_type_wakeup_mode); - cmd->tlv.length = cpu_to_le16(sizeof(cmd->val)); - cmd->val = cpu_to_le32(i2400m_wakeup_enabled); - - ack_skb = i2400m_msg_to_dev(i2400m, cmd, sizeof(*cmd)); - result = ptr_err(ack_skb); - if (is_err(ack_skb)) { - dev_err(dev, "failed to issue 'enter power save' command: %d ", - result); - goto error_msg_to_dev; - } - result = i2400m_msg_check_status(wimax_msg_data(ack_skb), - strerr, sizeof(strerr)); - if (result == -eacces) - d_printf(1, dev, "cannot enter power save mode "); - else if (result < 0) - dev_err(dev, "'enter power save' (0x%04x) command failed: " - "%d - %s ", i2400m_mt_cmd_enter_powersave, - result, strerr); - else - d_printf(1, dev, "device ready to power save "); - kfree_skb(ack_skb); -error_msg_to_dev: - kfree(cmd); -error_alloc: - return result; -} -export_symbol_gpl(i2400m_cmd_enter_powersave); - - -/* - * definitions for getting device information - */ -enum { - i2400m_tlv_detailed_device_info = 140 -}; - -/** - * i2400m_get_device_info - query the device for detailed device information - * - * @i2400m: device descriptor - * - * returns: an skb whose skb->data points to a 'struct - * i2400m_tlv_detailed_device_info'. when done, kfree_skb() it. the - * skb is *guaranteed* to contain the whole tlv data structure. - * - * on error, is_err(skb) is true and err_ptr(skb) is the error - * code. 
- */ -struct sk_buff *i2400m_get_device_info(struct i2400m *i2400m) -{ - int result; - struct device *dev = i2400m_dev(i2400m); - struct sk_buff *ack_skb; - struct i2400m_l3l4_hdr *cmd; - const struct i2400m_l3l4_hdr *ack; - size_t ack_len; - const struct i2400m_tlv_hdr *tlv; - const struct i2400m_tlv_detailed_device_info *ddi; - char strerr[32]; - - ack_skb = err_ptr(-enomem); - cmd = kzalloc(sizeof(*cmd), gfp_kernel); - if (cmd == null) - goto error_alloc; - cmd->type = cpu_to_le16(i2400m_mt_get_device_info); - cmd->length = 0; - cmd->version = cpu_to_le16(i2400m_l3l4_version); - - ack_skb = i2400m_msg_to_dev(i2400m, cmd, sizeof(*cmd)); - if (is_err(ack_skb)) { - dev_err(dev, "failed to issue 'get device info' command: %ld ", - ptr_err(ack_skb)); - goto error_msg_to_dev; - } - ack = wimax_msg_data_len(ack_skb, &ack_len); - result = i2400m_msg_check_status(ack, strerr, sizeof(strerr)); - if (result < 0) { - dev_err(dev, "'get device info' (0x%04x) command failed: " - "%d - %s ", i2400m_mt_get_device_info, result, - strerr); - goto error_cmd_failed; - } - tlv = i2400m_tlv_find(i2400m, ack->pl, ack_len - sizeof(*ack), - i2400m_tlv_detailed_device_info, sizeof(*ddi)); - if (tlv == null) { - dev_err(dev, "get device info: " - "detailed device info tlv not found (0x%04x) ", - i2400m_tlv_detailed_device_info); - result = -eio; - goto error_no_tlv; - } - skb_pull(ack_skb, (void *) tlv - (void *) ack_skb->data); -error_msg_to_dev: - kfree(cmd); -error_alloc: - return ack_skb; - -error_no_tlv: -error_cmd_failed: - kfree_skb(ack_skb); - kfree(cmd); - return err_ptr(result); -} - - -/* firmware interface versions we support */ -enum { - i2400m_hdiv_major = 9, - i2400m_hdiv_minor = 1, - i2400m_hdiv_minor_2 = 2, -}; - - -/** - * i2400m_firmware_check - check firmware versions are compatible with - * the driver - * - * @i2400m: device descriptor - * - * returns: 0 if ok, < 0 errno code an error and a message in the - * kernel log. 
- * - * long function, but quite simple; first chunk launches the command - * and double checks the reply for the right tlv. then we process the - * tlv (where the meat is). - * - * once we process the tlv that gives us the firmware's interface - * version, we encode it and save it in i2400m->fw_version for future - * reference. - */ -int i2400m_firmware_check(struct i2400m *i2400m) -{ - int result; - struct device *dev = i2400m_dev(i2400m); - struct sk_buff *ack_skb; - struct i2400m_l3l4_hdr *cmd; - const struct i2400m_l3l4_hdr *ack; - size_t ack_len; - const struct i2400m_tlv_hdr *tlv; - const struct i2400m_tlv_l4_message_versions *l4mv; - char strerr[32]; - unsigned major, minor, branch; - - result = -enomem; - cmd = kzalloc(sizeof(*cmd), gfp_kernel); - if (cmd == null) - goto error_alloc; - cmd->type = cpu_to_le16(i2400m_mt_get_lm_version); - cmd->length = 0; - cmd->version = cpu_to_le16(i2400m_l3l4_version); - - ack_skb = i2400m_msg_to_dev(i2400m, cmd, sizeof(*cmd)); - if (is_err(ack_skb)) { - result = ptr_err(ack_skb); - dev_err(dev, "failed to issue 'get lm version' command: %-d ", - result); - goto error_msg_to_dev; - } - ack = wimax_msg_data_len(ack_skb, &ack_len); - result = i2400m_msg_check_status(ack, strerr, sizeof(strerr)); - if (result < 0) { - dev_err(dev, "'get lm version' (0x%04x) command failed: " - "%d - %s ", i2400m_mt_get_lm_version, result, - strerr); - goto error_cmd_failed; - } - tlv = i2400m_tlv_find(i2400m, ack->pl, ack_len - sizeof(*ack), - i2400m_tlv_l4_message_versions, sizeof(*l4mv)); - if (tlv == null) { - dev_err(dev, "get lm version: tlv not found (0x%04x) ", - i2400m_tlv_l4_message_versions); - result = -eio; - goto error_no_tlv; - } - l4mv = container_of(tlv, typeof(*l4mv), hdr); - major = le16_to_cpu(l4mv->major); - minor = le16_to_cpu(l4mv->minor); - branch = le16_to_cpu(l4mv->branch); - result = -einval; - if (major != i2400m_hdiv_major) { - dev_err(dev, "unsupported major fw version " - "%u.%u.%u ", major, minor, branch); - 
goto error_bad_major; - } - result = 0; - if (minor > i2400m_hdiv_minor_2 || minor < i2400m_hdiv_minor) - dev_warn(dev, "untested minor fw version %u.%u.%u ", - major, minor, branch); - /* yes, we ignore the branch -- we don't have to track it */ - i2400m->fw_version = major << 16 | minor; - dev_info(dev, "firmware interface version %u.%u.%u ", - major, minor, branch); -error_bad_major: -error_no_tlv: -error_cmd_failed: - kfree_skb(ack_skb); -error_msg_to_dev: - kfree(cmd); -error_alloc: - return result; -} - - -/* - * send a doexitidle command to the device to ask it to go out of - * basestation-idle mode. - * - * @i2400m: device descriptor - * - * this starts a renegotiation with the basestation that might involve - * another crypto handshake with user space. - * - * returns: 0 if ok, < 0 errno code on error. - */ -int i2400m_cmd_exit_idle(struct i2400m *i2400m) -{ - int result; - struct device *dev = i2400m_dev(i2400m); - struct sk_buff *ack_skb; - struct i2400m_l3l4_hdr *cmd; - char strerr[32]; - - result = -enomem; - cmd = kzalloc(sizeof(*cmd), gfp_kernel); - if (cmd == null) - goto error_alloc; - cmd->type = cpu_to_le16(i2400m_mt_cmd_exit_idle); - cmd->length = 0; - cmd->version = cpu_to_le16(i2400m_l3l4_version); - - ack_skb = i2400m_msg_to_dev(i2400m, cmd, sizeof(*cmd)); - result = ptr_err(ack_skb); - if (is_err(ack_skb)) { - dev_err(dev, "failed to issue 'exit idle' command: %d ", - result); - goto error_msg_to_dev; - } - result = i2400m_msg_check_status(wimax_msg_data(ack_skb), - strerr, sizeof(strerr)); - kfree_skb(ack_skb); -error_msg_to_dev: - kfree(cmd); -error_alloc: - return result; - -} - - -/* - * query the device for its state, update the wimax stack's idea of it - * - * @i2400m: device descriptor - * - * returns: 0 if ok, < 0 errno code on error. - * - * executes a 'get state' command and parses the returned - * tlvs. - * - * because this is almost identical to a 'report state', we use - * i2400m_report_state_hook() to parse the answer.
this will set the - * carrier state, as well as the rf kill switches state. - */ -static int i2400m_cmd_get_state(struct i2400m *i2400m) -{ - int result; - struct device *dev = i2400m_dev(i2400m); - struct sk_buff *ack_skb; - struct i2400m_l3l4_hdr *cmd; - const struct i2400m_l3l4_hdr *ack; - size_t ack_len; - char strerr[32]; - - result = -enomem; - cmd = kzalloc(sizeof(*cmd), gfp_kernel); - if (cmd == null) - goto error_alloc; - cmd->type = cpu_to_le16(i2400m_mt_get_state); - cmd->length = 0; - cmd->version = cpu_to_le16(i2400m_l3l4_version); - - ack_skb = i2400m_msg_to_dev(i2400m, cmd, sizeof(*cmd)); - if (is_err(ack_skb)) { - dev_err(dev, "failed to issue 'get state' command: %ld ", - ptr_err(ack_skb)); - result = ptr_err(ack_skb); - goto error_msg_to_dev; - } - ack = wimax_msg_data_len(ack_skb, &ack_len); - result = i2400m_msg_check_status(ack, strerr, sizeof(strerr)); - if (result < 0) { - dev_err(dev, "'get state' (0x%04x) command failed: " - "%d - %s ", i2400m_mt_get_state, result, strerr); - goto error_cmd_failed; - } - i2400m_report_state_hook(i2400m, ack, ack_len - sizeof(*ack), - "get state"); - result = 0; - kfree_skb(ack_skb); -error_cmd_failed: -error_msg_to_dev: - kfree(cmd); -error_alloc: - return result; -} - -/** - * set basic configuration settings - * - * @i2400m: device descriptor - * @arg: array of pointers to the tlv headers to send for - * configuration (each followed by its payload). - * tlv headers and payloads must be properly initialized, with the - * right endianness (le).
- * @args: number of pointers in the @arg array - */ -static int i2400m_set_init_config(struct i2400m *i2400m, - const struct i2400m_tlv_hdr **arg, - size_t args) -{ - int result; - struct device *dev = i2400m_dev(i2400m); - struct sk_buff *ack_skb; - struct i2400m_l3l4_hdr *cmd; - char strerr[32]; - unsigned argc, argsize, tlv_size; - const struct i2400m_tlv_hdr *tlv_hdr; - void *buf, *itr; - - d_fnstart(3, dev, "(i2400m %p arg %p args %zu) ", i2400m, arg, args); - result = 0; - if (args == 0) - goto none; - /* compute the size of all the tlvs, so we can alloc a - * contiguous command block to copy them. */ - argsize = 0; - for (argc = 0; argc < args; argc++) { - tlv_hdr = arg[argc]; - argsize += sizeof(*tlv_hdr) + le16_to_cpu(tlv_hdr->length); - } - warn_on(argc >= 9); /* as per hw spec */ - - /* alloc the space for the command and tlvs*/ - result = -enomem; - buf = kzalloc(sizeof(*cmd) + argsize, gfp_kernel); - if (buf == null) - goto error_alloc; - cmd = buf; - cmd->type = cpu_to_le16(i2400m_mt_set_init_config); - cmd->length = cpu_to_le16(argsize); - cmd->version = cpu_to_le16(i2400m_l3l4_version); - - /* copy the tlvs */ - itr = buf + sizeof(*cmd); - for (argc = 0; argc < args; argc++) { - tlv_hdr = arg[argc]; - tlv_size = sizeof(*tlv_hdr) + le16_to_cpu(tlv_hdr->length); - memcpy(itr, tlv_hdr, tlv_size); - itr += tlv_size; - } - - /* send the message! 
*/ - ack_skb = i2400m_msg_to_dev(i2400m, buf, sizeof(*cmd) + argsize); - result = ptr_err(ack_skb); - if (is_err(ack_skb)) { - dev_err(dev, "failed to issue 'init config' command: %d ", - result); - - goto error_msg_to_dev; - } - result = i2400m_msg_check_status(wimax_msg_data(ack_skb), - strerr, sizeof(strerr)); - if (result < 0) - dev_err(dev, "'init config' (0x%04x) command failed: %d - %s ", - i2400m_mt_set_init_config, result, strerr); - kfree_skb(ack_skb); -error_msg_to_dev: - kfree(buf); -error_alloc: -none: - d_fnend(3, dev, "(i2400m %p arg %p args %zu) = %d ", - i2400m, arg, args, result); - return result; - -} - -/** - * i2400m_set_idle_timeout - set the device's idle mode timeout - * - * @i2400m: i2400m device descriptor - * - * @msecs: milliseconds for the timeout to enter idle mode. between - * 100 to 300000 (5m); 0 to disable. in increments of 100. - * - * after this @msecs of the link being idle (no data being sent or - * received), the device will negotiate with the basestation entering - * idle mode for saving power. the connection is maintained, but - * getting out of it (done in tx.c) will require some negotiation, - * possible crypto re-handshake and a possible dhcp re-lease. - * - * only available if fw_version >= 0x00090002. - * - * returns: 0 if ok, < 0 errno code on error. 
- */ -int i2400m_set_idle_timeout(struct i2400m *i2400m, unsigned msecs) -{ - int result; - struct device *dev = i2400m_dev(i2400m); - struct sk_buff *ack_skb; - struct { - struct i2400m_l3l4_hdr hdr; - struct i2400m_tlv_config_idle_timeout cit; - } *cmd; - const struct i2400m_l3l4_hdr *ack; - size_t ack_len; - char strerr[32]; - - result = -enosys; - if (i2400m_le_v1_3(i2400m)) - goto error_alloc; - result = -enomem; - cmd = kzalloc(sizeof(*cmd), gfp_kernel); - if (cmd == null) - goto error_alloc; - cmd->hdr.type = cpu_to_le16(i2400m_mt_get_state); - cmd->hdr.length = cpu_to_le16(sizeof(*cmd) - sizeof(cmd->hdr)); - cmd->hdr.version = cpu_to_le16(i2400m_l3l4_version); - - cmd->cit.hdr.type = - cpu_to_le16(i2400m_tlv_config_idle_timeout); - cmd->cit.hdr.length = cpu_to_le16(sizeof(cmd->cit.timeout)); - cmd->cit.timeout = cpu_to_le32(msecs); - - ack_skb = i2400m_msg_to_dev(i2400m, cmd, sizeof(*cmd)); - if (is_err(ack_skb)) { - dev_err(dev, "failed to issue 'set idle timeout' command: " - "%ld ", ptr_err(ack_skb)); - result = ptr_err(ack_skb); - goto error_msg_to_dev; - } - ack = wimax_msg_data_len(ack_skb, &ack_len); - result = i2400m_msg_check_status(ack, strerr, sizeof(strerr)); - if (result < 0) { - dev_err(dev, "'set idle timeout' (0x%04x) command failed: " - "%d - %s ", i2400m_mt_get_state, result, strerr); - goto error_cmd_failed; - } - result = 0; - kfree_skb(ack_skb); -error_cmd_failed: -error_msg_to_dev: - kfree(cmd); -error_alloc: - return result; -} - - -/** - * i2400m_dev_initialize - initialize the device once communications are ready - * - * @i2400m: device descriptor - * - * returns: 0 if ok, < 0 errno code on error. - * - * configures the device to work the way we like it. - * - * at the point of this call, the device is registered with the wimax - * and netdev stacks, firmware is uploaded and we can talk to the - * device normally. 
- */ -int i2400m_dev_initialize(struct i2400m *i2400m) -{ - int result; - struct device *dev = i2400m_dev(i2400m); - struct i2400m_tlv_config_idle_parameters idle_params; - struct i2400m_tlv_config_idle_timeout idle_timeout; - struct i2400m_tlv_config_d2h_data_format df; - struct i2400m_tlv_config_dl_host_reorder dlhr; - const struct i2400m_tlv_hdr *args[9]; - unsigned argc = 0; - - d_fnstart(3, dev, "(i2400m %p) ", i2400m); - if (i2400m_passive_mode) - goto out_passive; - /* disable idle mode? (enabled by default) */ - if (i2400m_idle_mode_disabled) { - if (i2400m_le_v1_3(i2400m)) { - idle_params.hdr.type = - cpu_to_le16(i2400m_tlv_config_idle_parameters); - idle_params.hdr.length = cpu_to_le16( - sizeof(idle_params) - sizeof(idle_params.hdr)); - idle_params.idle_timeout = 0; - idle_params.idle_paging_interval = 0; - args[argc++] = &idle_params.hdr; - } else { - idle_timeout.hdr.type = - cpu_to_le16(i2400m_tlv_config_idle_timeout); - idle_timeout.hdr.length = cpu_to_le16( - sizeof(idle_timeout) - sizeof(idle_timeout.hdr)); - idle_timeout.timeout = 0; - args[argc++] = &idle_timeout.hdr; - } - } - if (i2400m_ge_v1_4(i2400m)) { - /* enable extended rx data format? */ - df.hdr.type = - cpu_to_le16(i2400m_tlv_config_d2h_data_format); - df.hdr.length = cpu_to_le16( - sizeof(df) - sizeof(df.hdr)); - df.format = 1; - args[argc++] = &df.hdr; - - /* enable rx data reordering? - * (switch flipped in rx.c:i2400m_rx_setup() after fw upload) */ - if (i2400m->rx_reorder) { - dlhr.hdr.type = - cpu_to_le16(i2400m_tlv_config_dl_host_reorder); - dlhr.hdr.length = cpu_to_le16( - sizeof(dlhr) - sizeof(dlhr.hdr)); - dlhr.reorder = 1; - args[argc++] = &dlhr.hdr; - } - } - result = i2400m_set_init_config(i2400m, args, argc); - if (result < 0) - goto error; -out_passive: - /* - * update state: here it just calls a get state; parsing the - * result (system state tlv and rf status tlv [done in the rx - * path hooks]) will set the hardware and software rf-kill - * status. 
- */ - result = i2400m_cmd_get_state(i2400m); -error: - if (result < 0) - dev_err(dev, "failed to initialize the device: %d ", result); - d_fnend(3, dev, "(i2400m %p) = %d ", i2400m, result); - return result; -} - - -/** - * i2400m_dev_shutdown - shutdown a running device - * - * @i2400m: device descriptor - * - * release resources acquired during the running of the device; in - * theory, should also tell the device to go to sleep, switch off the - * radio, all that, but at this point, in most cases (driver - * disconnection, reset handling) we can't even talk to the device. - */ -void i2400m_dev_shutdown(struct i2400m *i2400m) -{ - struct device *dev = i2400m_dev(i2400m); - - d_fnstart(3, dev, "(i2400m %p) ", i2400m); - d_fnend(3, dev, "(i2400m %p) = void ", i2400m); -} diff --git a/drivers/staging/wimax/i2400m/debug-levels.h b/drivers/staging/wimax/i2400m/debug-levels.h --- a/drivers/staging/wimax/i2400m/debug-levels.h +++ /dev/null -/* spdx-license-identifier: gpl-2.0-only */ -/* - * intel wireless wimax connection 2400m - * debug levels control file for the i2400m module - * - * copyright (c) 2007-2008 intel corporation <linux-wimax@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - */ -#ifndef __debug_levels__h__ -#define __debug_levels__h__ - -/* maximum compile and run time debug level for all submodules */ -#define d_modulename i2400m -#define d_master config_wimax_i2400m_debug_level - -#include "../linux-wimax-debug.h" - -/* list of all the enabled modules */ -enum d_module { - d_submodule_declare(control), - d_submodule_declare(driver), - d_submodule_declare(debugfs), - d_submodule_declare(fw), - d_submodule_declare(netdev), - d_submodule_declare(rfkill), - d_submodule_declare(rx), - d_submodule_declare(sysfs), - d_submodule_declare(tx), -}; - - -#endif /* #ifndef __debug_levels__h__ */ diff --git a/drivers/staging/wimax/i2400m/debugfs.c b/drivers/staging/wimax/i2400m/debugfs.c --- a/drivers/staging/wimax/i2400m/debugfs.c +++ /dev/null 
-// spdx-license-identifier: gpl-2.0-only -/* - * intel wireless wimax connection 2400m - * debugfs interfaces to manipulate driver and device information - * - * copyright (c) 2007 intel corporation <linux-wimax@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - */ - -#include <linux/debugfs.h> -#include <linux/netdevice.h> -#include <linux/etherdevice.h> -#include <linux/spinlock.h> -#include <linux/device.h> -#include <linux/export.h> -#include "i2400m.h" - - -#define d_submodule debugfs -#include "debug-levels.h" - -static -int debugfs_netdev_queue_stopped_get(void *data, u64 *val) -{ - struct i2400m *i2400m = data; - *val = netif_queue_stopped(i2400m->wimax_dev.net_dev); - return 0; -} -define_debugfs_attribute(fops_netdev_queue_stopped, - debugfs_netdev_queue_stopped_get, - null, "%llu "); - -/* - * we don't allow partial reads of this file, as then the reader would - * get weirdly confused data as it is updated. - * - * so or you read it all or nothing; if you try to read with an offset - * != 0, we consider you are done reading. 
- */ -static -ssize_t i2400m_rx_stats_read(struct file *filp, char __user *buffer, - size_t count, loff_t *ppos) -{ - struct i2400m *i2400m = filp->private_data; - char buf[128]; - unsigned long flags; - - if (*ppos != 0) - return 0; - if (count < sizeof(buf)) - return -enospc; - spin_lock_irqsave(&i2400m->rx_lock, flags); - snprintf(buf, sizeof(buf), "%u %u %u %u %u %u %u ", - i2400m->rx_pl_num, i2400m->rx_pl_min, - i2400m->rx_pl_max, i2400m->rx_num, - i2400m->rx_size_acc, - i2400m->rx_size_min, i2400m->rx_size_max); - spin_unlock_irqrestore(&i2400m->rx_lock, flags); - return simple_read_from_buffer(buffer, count, ppos, buf, strlen(buf)); -} - - -/* any write clears the stats */ -static -ssize_t i2400m_rx_stats_write(struct file *filp, const char __user *buffer, - size_t count, loff_t *ppos) -{ - struct i2400m *i2400m = filp->private_data; - unsigned long flags; - - spin_lock_irqsave(&i2400m->rx_lock, flags); - i2400m->rx_pl_num = 0; - i2400m->rx_pl_max = 0; - i2400m->rx_pl_min = uint_max; - i2400m->rx_num = 0; - i2400m->rx_size_acc = 0; - i2400m->rx_size_min = uint_max; - i2400m->rx_size_max = 0; - spin_unlock_irqrestore(&i2400m->rx_lock, flags); - return count; -} - -static -const struct file_operations i2400m_rx_stats_fops = { - .owner = this_module, - .open = simple_open, - .read = i2400m_rx_stats_read, - .write = i2400m_rx_stats_write, - .llseek = default_llseek, -}; - - -/* see i2400m_rx_stats_read() */ -static -ssize_t i2400m_tx_stats_read(struct file *filp, char __user *buffer, - size_t count, loff_t *ppos) -{ - struct i2400m *i2400m = filp->private_data; - char buf[128]; - unsigned long flags; - - if (*ppos != 0) - return 0; - if (count < sizeof(buf)) - return -enospc; - spin_lock_irqsave(&i2400m->tx_lock, flags); - snprintf(buf, sizeof(buf), "%u %u %u %u %u %u %u ", - i2400m->tx_pl_num, i2400m->tx_pl_min, - i2400m->tx_pl_max, i2400m->tx_num, - i2400m->tx_size_acc, - i2400m->tx_size_min, i2400m->tx_size_max); - spin_unlock_irqrestore(&i2400m->tx_lock, 
flags); - return simple_read_from_buffer(buffer, count, ppos, buf, strlen(buf)); -} - -/* any write clears the stats */ -static -ssize_t i2400m_tx_stats_write(struct file *filp, const char __user *buffer, - size_t count, loff_t *ppos) -{ - struct i2400m *i2400m = filp->private_data; - unsigned long flags; - - spin_lock_irqsave(&i2400m->tx_lock, flags); - i2400m->tx_pl_num = 0; - i2400m->tx_pl_max = 0; - i2400m->tx_pl_min = uint_max; - i2400m->tx_num = 0; - i2400m->tx_size_acc = 0; - i2400m->tx_size_min = uint_max; - i2400m->tx_size_max = 0; - spin_unlock_irqrestore(&i2400m->tx_lock, flags); - return count; -} - -static -const struct file_operations i2400m_tx_stats_fops = { - .owner = this_module, - .open = simple_open, - .read = i2400m_tx_stats_read, - .write = i2400m_tx_stats_write, - .llseek = default_llseek, -}; - - -/* write 1 to ask the device to go into suspend */ -static -int debugfs_i2400m_suspend_set(void *data, u64 val) -{ - int result; - struct i2400m *i2400m = data; - result = i2400m_cmd_enter_powersave(i2400m); - if (result >= 0) - result = 0; - return result; -} -define_debugfs_attribute(fops_i2400m_suspend, - null, debugfs_i2400m_suspend_set, - "%llu "); - -/* - * reset the device - * - * write 0 to ask the device to soft reset, 1 to cold reset, 2 to bus - * reset (as defined by enum i2400m_reset_type). 
- */ -static -int debugfs_i2400m_reset_set(void *data, u64 val) -{ - int result; - struct i2400m *i2400m = data; - enum i2400m_reset_type rt = val; - switch(rt) { - case I2400M_RT_WARM: - case I2400M_RT_COLD: - case I2400M_RT_BUS: - result = i2400m_reset(i2400m, rt); - if (result >= 0) - result = 0; - break; - default: - result = -EINVAL; - } - return result; -} -DEFINE_DEBUGFS_ATTRIBUTE(fops_i2400m_reset, - NULL, debugfs_i2400m_reset_set, - "%llu\n"); - -void i2400m_debugfs_add(struct i2400m *i2400m) -{ - struct dentry *dentry = i2400m->wimax_dev.debugfs_dentry; - - dentry = debugfs_create_dir("i2400m", dentry); - i2400m->debugfs_dentry = dentry; - - d_level_register_debugfs("dl_", control, dentry); - d_level_register_debugfs("dl_", driver, dentry); - d_level_register_debugfs("dl_", debugfs, dentry); - d_level_register_debugfs("dl_", fw, dentry); - d_level_register_debugfs("dl_", netdev, dentry); - d_level_register_debugfs("dl_", rfkill, dentry); - d_level_register_debugfs("dl_", rx, dentry); - d_level_register_debugfs("dl_", tx, dentry); - - debugfs_create_size_t("tx_in", 0400, dentry, &i2400m->tx_in); - debugfs_create_size_t("tx_out", 0400, dentry, &i2400m->tx_out); - debugfs_create_u32("state", 0600, dentry, &i2400m->state); - - /* - * Trace received messages from user space - * - * In order to tap the bidirectional message stream in the - * 'msg' pipe, user space can read from the 'msg' pipe; - * however, due to limitations in libnl, we can't know what - * the different applications are sending down to the kernel. - * - * So we have this hack where the driver will echo any message - * received on the msg pipe from user space [through a call to - * wimax_dev->op_msg_from_user() into - * i2400m_op_msg_from_user()] into the 'trace' pipe that this - * driver creates. - * - * So then, reading from both the 'trace' and 'msg' pipes in - * user space will provide a full dump of the traffic. - * - * Write 1 to activate, 0 to clear.
- * - * it is not really very atomic, but it is also not too - * critical. - */ - debugfs_create_u8("trace_msg_from_user", 0600, dentry, - &i2400m->trace_msg_from_user); - - debugfs_create_file("netdev_queue_stopped", 0400, dentry, i2400m, - &fops_netdev_queue_stopped); - - debugfs_create_file("rx_stats", 0600, dentry, i2400m, - &i2400m_rx_stats_fops); - - debugfs_create_file("tx_stats", 0600, dentry, i2400m, - &i2400m_tx_stats_fops); - - debugfs_create_file("suspend", 0200, dentry, i2400m, - &fops_i2400m_suspend); - - debugfs_create_file("reset", 0200, dentry, i2400m, &fops_i2400m_reset); -} - -void i2400m_debugfs_rm(struct i2400m *i2400m) -{ - debugfs_remove_recursive(i2400m->debugfs_dentry); -} diff --git a/drivers/staging/wimax/i2400m/driver.c b/drivers/staging/wimax/i2400m/driver.c --- a/drivers/staging/wimax/i2400m/driver.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-only -/* - * intel wireless wimax connection 2400m - * generic probe/disconnect, reset and message passing - * - * copyright (c) 2007-2008 intel corporation <linux-wimax@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - * - * see i2400m.h for driver documentation. this contains helpers for - * the driver model glue [_setup()/_release()], handling device resets - * [_dev_reset_handle()], and the backends for the wimax stack ops - * reset [_op_reset()] and message from user [_op_msg_from_user()]. 
- * - * Roadmap: - * - * i2400m_op_msg_from_user() - * i2400m_msg_to_dev() - * wimax_msg_to_user_send() - * - * i2400m_op_reset() - * i2400m->bus_reset() - * - * i2400m_dev_reset_handle() - * __i2400m_dev_reset_handle() - * __i2400m_dev_stop() - * __i2400m_dev_start() - * - * i2400m_setup() - * i2400m->bus_setup() - * i2400m_bootrom_init() - * register_netdev() - * wimax_dev_add() - * i2400m_dev_start() - * __i2400m_dev_start() - * i2400m_dev_bootstrap() - * i2400m_tx_setup() - * i2400m->bus_dev_start() - * i2400m_firmware_check() - * i2400m_check_mac_addr() - * - * i2400m_release() - * i2400m_dev_stop() - * __i2400m_dev_stop() - * i2400m_dev_shutdown() - * i2400m->bus_dev_stop() - * i2400m_tx_release() - * i2400m->bus_release() - * wimax_dev_rm() - * unregister_netdev() - */ -#include "i2400m.h" -#include <linux/etherdevice.h> -#include "linux-wimax-i2400m.h" -#include <linux/module.h> -#include <linux/moduleparam.h> -#include <linux/suspend.h> -#include <linux/slab.h> - -#define D_SUBMODULE driver -#include "debug-levels.h" - - -static char i2400m_debug_params[128]; -module_param_string(debug, i2400m_debug_params, sizeof(i2400m_debug_params), - 0644); -MODULE_PARM_DESC(debug, - "String of space-separated NAME:VALUE pairs, where NAMEs " - "are the different debug submodules and VALUEs are the " - "initial debug value to set."); - -static char i2400m_barkers_params[128]; -module_param_string(barkers, i2400m_barkers_params, - sizeof(i2400m_barkers_params), 0644); -MODULE_PARM_DESC(barkers, - "String of comma-separated 32-bit values; each is " - "recognized as the value the device sends as a reboot " - "signal; values are appended to a list--setting one value " - "as zero cleans the existing list and starts a new one."); - -/* - * WiMAX stack operation: relay a message from user space - * - * @wimax_dev: device descriptor - * @pipe_name: named pipe the message is for - * @msg_buf: pointer to the message bytes - * @msg_len: length of the buffer - * @genl_info: passed by
the generic netlink layer - * - * the wimax stack will call this function when a message was received - * from user space. - * - * for the i2400m, this is an l3l4 message, as specified in - * include/linux/wimax/i2400m.h, and thus prefixed with a 'struct - * i2400m_l3l4_hdr'. driver (and device) expect the messages to be - * coded in little endian. - * - * this function just verifies that the header declaration and the - * payload are consistent and then deals with it, either forwarding it - * to the device or processing it locally. - * - * in the i2400m, messages are basically commands that will carry an - * ack, so we use i2400m_msg_to_dev() and then deliver the ack back to - * user space. the rx.c code might intercept the response and use it - * to update the driver's state, but then it will pass it on so it can - * be relayed back to user space. - * - * note that asynchronous events from the device are processed and - * sent to user space in rx.c. - */ -static -int i2400m_op_msg_from_user(struct wimax_dev *wimax_dev, - const char *pipe_name, - const void *msg_buf, size_t msg_len, - const struct genl_info *genl_info) -{ - int result; - struct i2400m *i2400m = wimax_dev_to_i2400m(wimax_dev); - struct device *dev = i2400m_dev(i2400m); - struct sk_buff *ack_skb; - - d_fnstart(4, dev, "(wimax_dev %p [i2400m %p] msg_buf %p " - "msg_len %zu genl_info %p) ", wimax_dev, i2400m, - msg_buf, msg_len, genl_info); - ack_skb = i2400m_msg_to_dev(i2400m, msg_buf, msg_len); - result = ptr_err(ack_skb); - if (is_err(ack_skb)) - goto error_msg_to_dev; - result = wimax_msg_send(&i2400m->wimax_dev, ack_skb); -error_msg_to_dev: - d_fnend(4, dev, "(wimax_dev %p [i2400m %p] msg_buf %p msg_len %zu " - "genl_info %p) = %d ", wimax_dev, i2400m, msg_buf, msg_len, - genl_info, result); - return result; -} - - -/* - * context to wait for a reset to finalize - */ -struct i2400m_reset_ctx { - struct completion completion; - int result; -}; - - -/* - * wimax stack operation: reset a device - * 
- * @wimax_dev: device descriptor - * - * see the documentation for wimax_reset() and wimax_dev->op_reset for - * the requirements of this function. the wimax stack guarantees - * serialization on calls to this function. - * - * do a warm reset on the device; if it fails, resort to a cold reset - * and return -enodev. on successful warm reset, we need to block - * until it is complete. - * - * the bus-driver implementation of reset takes care of falling back - * to cold reset if warm fails. - */ -static -int i2400m_op_reset(struct wimax_dev *wimax_dev) -{ - int result; - struct i2400m *i2400m = wimax_dev_to_i2400m(wimax_dev); - struct device *dev = i2400m_dev(i2400m); - struct i2400m_reset_ctx ctx = { - .completion = completion_initializer_onstack(ctx.completion), - .result = 0, - }; - - d_fnstart(4, dev, "(wimax_dev %p) ", wimax_dev); - mutex_lock(&i2400m->init_mutex); - i2400m->reset_ctx = &ctx; - mutex_unlock(&i2400m->init_mutex); - result = i2400m_reset(i2400m, i2400m_rt_warm); - if (result < 0) - goto out; - result = wait_for_completion_timeout(&ctx.completion, 4*hz); - if (result == 0) - result = -etimedout; - else if (result > 0) - result = ctx.result; - /* if result < 0, pass it on */ - mutex_lock(&i2400m->init_mutex); - i2400m->reset_ctx = null; - mutex_unlock(&i2400m->init_mutex); -out: - d_fnend(4, dev, "(wimax_dev %p) = %d ", wimax_dev, result); - return result; -} - - -/* - * check the mac address we got from boot mode is ok - * - * @i2400m: device descriptor - * - * returns: 0 if ok, < 0 errno code on error. 
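
[Editor's note] The result mapping in i2400m_op_reset() above is easy to get wrong: wait_for_completion_timeout() returns 0 when the timer expires and a positive remaining-jiffies count when the completion fires. A minimal userspace sketch of just that mapping (map_reset_wait and this struct reset_ctx are illustrative names, not driver API):

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical context mirroring struct i2400m_reset_ctx: the reset
 * handler stores its outcome here before completing the waiter. */
struct reset_ctx {
	int result;
};

/* Translate a wait_for_completion_timeout()-style return value into an
 * errno-style result, as i2400m_op_reset() does: 0 means the timeout
 * expired, > 0 means the completion fired and ctx->result holds the
 * handler's verdict, and negative errors are passed through as-is. */
static int map_reset_wait(long wait_ret, const struct reset_ctx *ctx)
{
	if (wait_ret == 0)
		return -ETIMEDOUT;
	if (wait_ret > 0)
		return ctx->result;
	return (int)wait_ret;
}
```

The same shape appears in many kernel drivers that wait synchronously on asynchronous reset handlers.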
- */ -static -int i2400m_check_mac_addr(struct i2400m *i2400m) -{ - int result; - struct device *dev = i2400m_dev(i2400m); - struct sk_buff *skb; - const struct i2400m_tlv_detailed_device_info *ddi; - struct net_device *net_dev = i2400m->wimax_dev.net_dev; - - d_fnstart(3, dev, "(i2400m %p) ", i2400m); - skb = i2400m_get_device_info(i2400m); - if (is_err(skb)) { - result = ptr_err(skb); - dev_err(dev, "cannot verify mac address, error reading: %d ", - result); - goto error; - } - /* extract mac address */ - ddi = (void *) skb->data; - build_bug_on(eth_alen != sizeof(ddi->mac_address)); - d_printf(2, dev, "get device info: mac addr %pm ", - ddi->mac_address); - if (!memcmp(net_dev->perm_addr, ddi->mac_address, - sizeof(ddi->mac_address))) - goto ok; - dev_warn(dev, "warning: device reports a different mac address " - "to that of boot mode's "); - dev_warn(dev, "device reports %pm ", ddi->mac_address); - dev_warn(dev, "boot mode reported %pm ", net_dev->perm_addr); - if (is_zero_ether_addr(ddi->mac_address)) - dev_err(dev, "device reports an invalid mac address, " - "not updating "); - else { - dev_warn(dev, "updating mac address "); - net_dev->addr_len = eth_alen; - memcpy(net_dev->perm_addr, ddi->mac_address, eth_alen); - memcpy(net_dev->dev_addr, ddi->mac_address, eth_alen); - } -ok: - result = 0; - kfree_skb(skb); -error: - d_fnend(3, dev, "(i2400m %p) = %d ", i2400m, result); - return result; -} - - -/** - * __i2400m_dev_start - bring up driver communication with the device - * - * @i2400m: device descriptor - * @flags: boot mode flags - * - * returns: 0 if ok, < 0 errno code on error. - * - * uploads firmware and brings up all the resources needed to be able - * to communicate with the device. - * - * the workqueue has to be setup early, at least before rx handling - * (it's only real user for now) so it can process reports as they - * arrive. we also want to destroy it if we retry, to make sure it is - * flushed...easier like this. 
- * - * tx needs to be setup before the bus-specific code (otherwise on - * shutdown, the bus-tx code could try to access it). - */ -static -int __i2400m_dev_start(struct i2400m *i2400m, enum i2400m_bri flags) -{ - int result; - struct wimax_dev *wimax_dev = &i2400m->wimax_dev; - struct net_device *net_dev = wimax_dev->net_dev; - struct device *dev = i2400m_dev(i2400m); - int times = i2400m->bus_bm_retries; - - d_fnstart(3, dev, "(i2400m %p) ", i2400m); -retry: - result = i2400m_dev_bootstrap(i2400m, flags); - if (result < 0) { - dev_err(dev, "cannot bootstrap device: %d ", result); - goto error_bootstrap; - } - result = i2400m_tx_setup(i2400m); - if (result < 0) - goto error_tx_setup; - result = i2400m_rx_setup(i2400m); - if (result < 0) - goto error_rx_setup; - i2400m->work_queue = create_singlethread_workqueue(wimax_dev->name); - if (i2400m->work_queue == null) { - result = -enomem; - dev_err(dev, "cannot create workqueue "); - goto error_create_workqueue; - } - if (i2400m->bus_dev_start) { - result = i2400m->bus_dev_start(i2400m); - if (result < 0) - goto error_bus_dev_start; - } - i2400m->ready = 1; - wmb(); /* see i2400m->ready's documentation */ - /* process pending reports from the device */ - queue_work(i2400m->work_queue, &i2400m->rx_report_ws); - result = i2400m_firmware_check(i2400m); /* fw versions ok? */ - if (result < 0) - goto error_fw_check; - /* at this point is ok to send commands to the device */ - result = i2400m_check_mac_addr(i2400m); - if (result < 0) - goto error_check_mac_addr; - result = i2400m_dev_initialize(i2400m); - if (result < 0) - goto error_dev_initialize; - - /* we don't want any additional unwanted error recovery triggered - * from any other context so if anything went wrong before we come - * here, let's keep i2400m->error_recovery untouched and leave it to - * dev_reset_handle(). see dev_reset_handle(). 
*/ - - atomic_dec(&i2400m->error_recovery); - /* every thing works so far, ok, now we are ready to - * take error recovery if it's required. */ - - /* at this point, reports will come for the device and set it - * to the right state if it is different than uninitialized */ - d_fnend(3, dev, "(net_dev %p [i2400m %p]) = %d ", - net_dev, i2400m, result); - return result; - -error_dev_initialize: -error_check_mac_addr: -error_fw_check: - i2400m->ready = 0; - wmb(); /* see i2400m->ready's documentation */ - flush_workqueue(i2400m->work_queue); - if (i2400m->bus_dev_stop) - i2400m->bus_dev_stop(i2400m); -error_bus_dev_start: - destroy_workqueue(i2400m->work_queue); -error_create_workqueue: - i2400m_rx_release(i2400m); -error_rx_setup: - i2400m_tx_release(i2400m); -error_tx_setup: -error_bootstrap: - if (result == -el3rst && times-- > 0) { - flags = i2400m_bri_soft|i2400m_bri_mac_reinit; - goto retry; - } - d_fnend(3, dev, "(net_dev %p [i2400m %p]) = %d ", - net_dev, i2400m, result); - return result; -} - - -static -int i2400m_dev_start(struct i2400m *i2400m, enum i2400m_bri bm_flags) -{ - int result = 0; - mutex_lock(&i2400m->init_mutex); /* well, start the device */ - if (i2400m->updown == 0) { - result = __i2400m_dev_start(i2400m, bm_flags); - if (result >= 0) { - i2400m->updown = 1; - i2400m->alive = 1; - wmb();/* see i2400m->updown and i2400m->alive's doc */ - } - } - mutex_unlock(&i2400m->init_mutex); - return result; -} - - -/** - * i2400m_dev_stop - tear down driver communication with the device - * - * @i2400m: device descriptor - * - * returns: 0 if ok, < 0 errno code on error. - * - * releases all the resources allocated to communicate with the - * device. note we cannot destroy the workqueue earlier as until rx is - * fully destroyed, it could still try to schedule jobs. 
- */ -static -void __i2400m_dev_stop(struct i2400m *i2400m) -{ - struct wimax_dev *wimax_dev = &i2400m->wimax_dev; - struct device *dev = i2400m_dev(i2400m); - - d_fnstart(3, dev, "(i2400m %p) ", i2400m); - wimax_state_change(wimax_dev, __wimax_st_quiescing); - i2400m_msg_to_dev_cancel_wait(i2400m, -el3rst); - complete(&i2400m->msg_completion); - i2400m_net_wake_stop(i2400m); - i2400m_dev_shutdown(i2400m); - /* - * make sure no report hooks are running *before* we stop the - * communication infrastructure with the device. - */ - i2400m->ready = 0; /* nobody can queue work anymore */ - wmb(); /* see i2400m->ready's documentation */ - flush_workqueue(i2400m->work_queue); - - if (i2400m->bus_dev_stop) - i2400m->bus_dev_stop(i2400m); - destroy_workqueue(i2400m->work_queue); - i2400m_rx_release(i2400m); - i2400m_tx_release(i2400m); - wimax_state_change(wimax_dev, wimax_st_down); - d_fnend(3, dev, "(i2400m %p) = 0 ", i2400m); -} - - -/* - * watch out -- we only need to stop if there is a need for it. the - * device could have reset itself and failed to come up again (see - * _i2400m_dev_reset_handle()). - */ -static -void i2400m_dev_stop(struct i2400m *i2400m) -{ - mutex_lock(&i2400m->init_mutex); - if (i2400m->updown) { - __i2400m_dev_stop(i2400m); - i2400m->updown = 0; - i2400m->alive = 0; - wmb(); /* see i2400m->updown and i2400m->alive's doc */ - } - mutex_unlock(&i2400m->init_mutex); -} - - -/* - * listen to pm events to cache the firmware before suspend/hibernation - * - * when the device comes out of suspend, it might go into reset and - * firmware has to be uploaded again. at resume, most of the times, we - * can't load firmware images from disk, so we need to cache it. - * - * i2400m_fw_cache() will allocate a kobject and attach the firmware - * to it; that way we don't have to worry too much about the fw loader - * hitting a race condition. - * - * note: modus operandi stolen from the orinoco driver; thx. 
- */ -static -int i2400m_pm_notifier(struct notifier_block *notifier, - unsigned long pm_event, - void *unused) -{ - struct i2400m *i2400m = - container_of(notifier, struct i2400m, pm_notifier); - struct device *dev = i2400m_dev(i2400m); - - d_fnstart(3, dev, "(i2400m %p pm_event %lx) ", i2400m, pm_event); - switch (pm_event) { - case pm_hibernation_prepare: - case pm_suspend_prepare: - i2400m_fw_cache(i2400m); - break; - case pm_post_restore: - /* restore from hibernation failed. we need to clean - * up in exactly the same way, so fall through. */ - case pm_post_hibernation: - case pm_post_suspend: - i2400m_fw_uncache(i2400m); - break; - - case pm_restore_prepare: - default: - break; - } - d_fnend(3, dev, "(i2400m %p pm_event %lx) = void ", i2400m, pm_event); - return notify_done; -} - - -/* - * pre-reset is called before a device is going on reset - * - * this has to be followed by a call to i2400m_post_reset(), otherwise - * bad things might happen. - */ -int i2400m_pre_reset(struct i2400m *i2400m) -{ - struct device *dev = i2400m_dev(i2400m); - - d_fnstart(3, dev, "(i2400m %p) ", i2400m); - d_printf(1, dev, "pre-reset shut down "); - - mutex_lock(&i2400m->init_mutex); - if (i2400m->updown) { - netif_tx_disable(i2400m->wimax_dev.net_dev); - __i2400m_dev_stop(i2400m); - /* down't set updown to zero -- this way - * post_reset can restore properly */ - } - mutex_unlock(&i2400m->init_mutex); - if (i2400m->bus_release) - i2400m->bus_release(i2400m); - d_fnend(3, dev, "(i2400m %p) = 0 ", i2400m); - return 0; -} -export_symbol_gpl(i2400m_pre_reset); - - -/* - * restore device state after a reset - * - * do the work needed after a device reset to bring it up to the same - * state as it was before the reset. 
- * - * note: this requires i2400m->init_mutex taken - */ -int i2400m_post_reset(struct i2400m *i2400m) -{ - int result = 0; - struct device *dev = i2400m_dev(i2400m); - - d_fnstart(3, dev, "(i2400m %p) ", i2400m); - d_printf(1, dev, "post-reset start "); - if (i2400m->bus_setup) { - result = i2400m->bus_setup(i2400m); - if (result < 0) { - dev_err(dev, "bus-specific setup failed: %d ", - result); - goto error_bus_setup; - } - } - mutex_lock(&i2400m->init_mutex); - if (i2400m->updown) { - result = __i2400m_dev_start( - i2400m, i2400m_bri_soft | i2400m_bri_mac_reinit); - if (result < 0) - goto error_dev_start; - } - mutex_unlock(&i2400m->init_mutex); - d_fnend(3, dev, "(i2400m %p) = %d ", i2400m, result); - return result; - -error_dev_start: - if (i2400m->bus_release) - i2400m->bus_release(i2400m); - /* even if the device was up, it could not be recovered, so we - * mark it as down. */ - i2400m->updown = 0; - wmb(); /* see i2400m->updown's documentation */ - mutex_unlock(&i2400m->init_mutex); -error_bus_setup: - d_fnend(3, dev, "(i2400m %p) = %d ", i2400m, result); - return result; -} -export_symbol_gpl(i2400m_post_reset); - - -/* - * the device has rebooted; fix up the device and the driver - * - * tear down the driver communication with the device, reload the - * firmware and reinitialize the communication with the device. - * - * if someone calls a reset when the device's firmware is down, in - * theory we won't see it because we are not listening. however, just - * in case, leave the code to handle it. - * - * if there is a reset context, use it; this means someone is waiting - * for us to tell him when the reset operation is complete and the - * device is ready to rock again. - * - * note: if we are in the process of bringing up or down the - * communication with the device [running i2400m_dev_start() or - * _stop()], don't do anything, let it fail and handle it. 
- * - * this function is ran always in a thread context - * - * this function gets passed, as payload to i2400m_work() a 'const - * char *' ptr with a "reason" why the reset happened (for messages). - */ -static -void __i2400m_dev_reset_handle(struct work_struct *ws) -{ - struct i2400m *i2400m = container_of(ws, struct i2400m, reset_ws); - const char *reason = i2400m->reset_reason; - struct device *dev = i2400m_dev(i2400m); - struct i2400m_reset_ctx *ctx = i2400m->reset_ctx; - int result; - - d_fnstart(3, dev, "(ws %p i2400m %p reason %s) ", ws, i2400m, reason); - - i2400m->boot_mode = 1; - wmb(); /* make sure i2400m_msg_to_dev() sees boot_mode */ - - result = 0; - if (mutex_trylock(&i2400m->init_mutex) == 0) { - /* we are still in i2400m_dev_start() [let it fail] or - * i2400m_dev_stop() [we are shutting down anyway, so - * ignore it] or we are resetting somewhere else. */ - dev_err(dev, "device rebooted somewhere else? "); - i2400m_msg_to_dev_cancel_wait(i2400m, -el3rst); - complete(&i2400m->msg_completion); - goto out; - } - - dev_err(dev, "%s: reinitializing driver ", reason); - rmb(); - if (i2400m->updown) { - __i2400m_dev_stop(i2400m); - i2400m->updown = 0; - wmb(); /* see i2400m->updown's documentation */ - } - - if (i2400m->alive) { - result = __i2400m_dev_start(i2400m, - i2400m_bri_soft | i2400m_bri_mac_reinit); - if (result < 0) { - dev_err(dev, "%s: cannot start the device: %d ", - reason, result); - result = -euclean; - if (atomic_read(&i2400m->bus_reset_retries) - >= i2400m_bus_reset_retries) { - result = -enodev; - dev_err(dev, "tried too many times to " - "reset the device, giving up "); - } - } - } - - if (i2400m->reset_ctx) { - ctx->result = result; - complete(&ctx->completion); - } - mutex_unlock(&i2400m->init_mutex); - if (result == -euclean) { - /* - * we come here because the reset during operational mode - * wasn't successfully done and need to proceed to a bus - * reset. 
for the dev_reset_handle() to be able to handle - * the reset event later properly, we restore boot_mode back - * to the state before previous reset. ie: just like we are - * issuing the bus reset for the first time - */ - i2400m->boot_mode = 0; - wmb(); - - atomic_inc(&i2400m->bus_reset_retries); - /* ops, need to clean up [w/ init_mutex not held] */ - result = i2400m_reset(i2400m, i2400m_rt_bus); - if (result >= 0) - result = -enodev; - } else { - rmb(); - if (i2400m->alive) { - /* great, we expect the device state up and - * dev_start() actually brings the device state up */ - i2400m->updown = 1; - wmb(); - atomic_set(&i2400m->bus_reset_retries, 0); - } - } -out: - d_fnend(3, dev, "(ws %p i2400m %p reason %s) = void ", - ws, i2400m, reason); -} - - -/* - * i2400m_dev_reset_handle - handle a device's reset in a thread context - * - * schedule a device reset handling out on a thread context, so it - * is safe to call from atomic context. we can't use the i2400m's - * queue as we are going to destroy it and reinitialize it as part of - * the driver bringup/bringup process. - * - * see __i2400m_dev_reset_handle() for details; that takes care of - * reinitializing the driver to handle the reset, calling into the - * bus-specific functions ops as needed. - */ -int i2400m_dev_reset_handle(struct i2400m *i2400m, const char *reason) -{ - i2400m->reset_reason = reason; - return schedule_work(&i2400m->reset_ws); -} -export_symbol_gpl(i2400m_dev_reset_handle); - - -/* - * the actual work of error recovery. - * - * the current implementation of error recovery is to trigger a bus reset. - */ -static -void __i2400m_error_recovery(struct work_struct *ws) -{ - struct i2400m *i2400m = container_of(ws, struct i2400m, recovery_ws); - - i2400m_reset(i2400m, i2400m_rt_bus); -} - -/* - * schedule a work struct for error recovery. 
- * - * the intention of error recovery is to bring back the device to some - * known state whenever tx sees -110 (-etimeout) on copying the data to - * the device. the tx failure could mean a device bus stuck, so the current - * error recovery implementation is to trigger a bus reset to the device - * and hopefully it can bring back the device. - * - * the actual work of error recovery has to be in a thread context because - * it is kicked off in the tx thread (i2400ms->tx_workqueue) which is to be - * destroyed by the error recovery mechanism (currently a bus reset). - * - * also, there may be already a queue of tx works that all hit - * the -etimeout error condition because the device is stuck already. - * since bus reset is used as the error recovery mechanism and we don't - * want consecutive bus resets simply because the multiple tx works - * in the queue all hit the same device erratum, the flag "error_recovery" - * is introduced for preventing unwanted consecutive bus resets. - * - * error recovery shall only be invoked again if previous one was completed. - * the flag error_recovery is set when error recovery mechanism is scheduled, - * and is checked when we need to schedule another error recovery. if it is - * in place already, then we shouldn't schedule another one. - */ -void i2400m_error_recovery(struct i2400m *i2400m) -{ - if (atomic_add_return(1, &i2400m->error_recovery) == 1) - schedule_work(&i2400m->recovery_ws); - else - atomic_dec(&i2400m->error_recovery); -} -export_symbol_gpl(i2400m_error_recovery); - -/* - * alloc the command and ack buffers for boot mode - * - * get the buffers needed to deal with boot mode messages. 
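
[Editor's note] The error_recovery gating described above can be modeled in isolation. This is a userspace sketch, not the driver code: C11 atomics stand in for the kernel's atomic_t, and a plain counter stands in for schedule_work().

```c
#include <assert.h>
#include <stdatomic.h>

/* Gate starts at 1: the device is "not ready", so no recovery may be
 * scheduled yet.  __i2400m_dev_start() drops it to 0 once it is. */
static atomic_int error_recovery = 1;
static int recoveries_scheduled;	/* stand-in for schedule_work() */

static void device_ready(void)
{
	atomic_fetch_sub(&error_recovery, 1);
}

/* Mirror of i2400m_error_recovery(): only the caller that raises the
 * counter from 0 to 1 schedules work; every other caller backs off,
 * so at most one recovery is ever pending.  (The completed recovery
 * would lower the counter again; that step is not modeled here.) */
static void request_recovery(void)
{
	if (atomic_fetch_add(&error_recovery, 1) + 1 == 1)
		recoveries_scheduled++;
	else
		atomic_fetch_sub(&error_recovery, 1);
}
```
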
- */ -static -int i2400m_bm_buf_alloc(struct i2400m *i2400m) -{ - i2400m->bm_cmd_buf = kzalloc(I2400M_BM_CMD_BUF_SIZE, GFP_KERNEL); - if (i2400m->bm_cmd_buf == NULL) - goto error_bm_cmd_kzalloc; - i2400m->bm_ack_buf = kzalloc(I2400M_BM_ACK_BUF_SIZE, GFP_KERNEL); - if (i2400m->bm_ack_buf == NULL) - goto error_bm_ack_buf_kzalloc; - return 0; - -error_bm_ack_buf_kzalloc: - kfree(i2400m->bm_cmd_buf); -error_bm_cmd_kzalloc: - return -ENOMEM; -} - - -/* - * Free boot mode command and ack buffers. - */ -static -void i2400m_bm_buf_free(struct i2400m *i2400m) -{ - kfree(i2400m->bm_ack_buf); - kfree(i2400m->bm_cmd_buf); -} - - -/* - * i2400m_init - initialize a 'struct i2400m' from all zeroes - * - * This is a bus-generic API call. - */ -void i2400m_init(struct i2400m *i2400m) -{ - wimax_dev_init(&i2400m->wimax_dev); - - i2400m->boot_mode = 1; - i2400m->rx_reorder = 1; - init_waitqueue_head(&i2400m->state_wq); - - spin_lock_init(&i2400m->tx_lock); - i2400m->tx_pl_min = UINT_MAX; - i2400m->tx_size_min = UINT_MAX; - - spin_lock_init(&i2400m->rx_lock); - i2400m->rx_pl_min = UINT_MAX; - i2400m->rx_size_min = UINT_MAX; - INIT_LIST_HEAD(&i2400m->rx_reports); - INIT_WORK(&i2400m->rx_report_ws, i2400m_report_hook_work); - - mutex_init(&i2400m->msg_mutex); - init_completion(&i2400m->msg_completion); - - mutex_init(&i2400m->init_mutex); - /* wake_tx_ws is initialized in i2400m_tx_setup() */ - - INIT_WORK(&i2400m->reset_ws, __i2400m_dev_reset_handle); - INIT_WORK(&i2400m->recovery_ws, __i2400m_error_recovery); - - atomic_set(&i2400m->bus_reset_retries, 0); - - i2400m->alive = 0; - - /* Initialize error_recovery to 1 for denoting we - * are not yet ready to take any error recovery */ - atomic_set(&i2400m->error_recovery, 1); -} -EXPORT_SYMBOL_GPL(i2400m_init); - - -int i2400m_reset(struct i2400m *i2400m, enum i2400m_reset_type rt) -{ - struct net_device *net_dev = i2400m->wimax_dev.net_dev; - - /* - * Make sure we stop TXs and down the carrier before - * resetting; this is needed to
avoid things like - * i2400m_wake_tx() scheduling stuff in parallel. - */ - if (net_dev->reg_state == netreg_registered) { - netif_tx_disable(net_dev); - netif_carrier_off(net_dev); - } - return i2400m->bus_reset(i2400m, rt); -} -export_symbol_gpl(i2400m_reset); - - -/** - * i2400m_setup - bus-generic setup function for the i2400m device - * - * @i2400m: device descriptor (bus-specific parts have been initialized) - * @bm_flags: boot mode flags - * - * returns: 0 if ok, < 0 errno code on error. - * - * sets up basic device comunication infrastructure, boots the rom to - * read the mac address, registers with the wimax and network stacks - * and then brings up the device. - */ -int i2400m_setup(struct i2400m *i2400m, enum i2400m_bri bm_flags) -{ - int result; - struct device *dev = i2400m_dev(i2400m); - struct wimax_dev *wimax_dev = &i2400m->wimax_dev; - struct net_device *net_dev = i2400m->wimax_dev.net_dev; - - d_fnstart(3, dev, "(i2400m %p) ", i2400m); - - snprintf(wimax_dev->name, sizeof(wimax_dev->name), - "i2400m-%s:%s", dev->bus->name, dev_name(dev)); - - result = i2400m_bm_buf_alloc(i2400m); - if (result < 0) { - dev_err(dev, "cannot allocate bootmode scratch buffers "); - goto error_bm_buf_alloc; - } - - if (i2400m->bus_setup) { - result = i2400m->bus_setup(i2400m); - if (result < 0) { - dev_err(dev, "bus-specific setup failed: %d ", - result); - goto error_bus_setup; - } - } - - result = i2400m_bootrom_init(i2400m, bm_flags); - if (result < 0) { - dev_err(dev, "read mac addr: bootrom init " - "failed: %d ", result); - goto error_bootrom_init; - } - result = i2400m_read_mac_addr(i2400m); - if (result < 0) - goto error_read_mac_addr; - eth_random_addr(i2400m->src_mac_addr); - - i2400m->pm_notifier.notifier_call = i2400m_pm_notifier; - register_pm_notifier(&i2400m->pm_notifier); - - result = register_netdev(net_dev); /* okey dokey, bring it up */ - if (result < 0) { - dev_err(dev, "cannot register i2400m network device: %d ", - result); - goto 
error_register_netdev; - } - netif_carrier_off(net_dev); - - i2400m->wimax_dev.op_msg_from_user = i2400m_op_msg_from_user; - i2400m->wimax_dev.op_rfkill_sw_toggle = i2400m_op_rfkill_sw_toggle; - i2400m->wimax_dev.op_reset = i2400m_op_reset; - - result = wimax_dev_add(&i2400m->wimax_dev, net_dev); - if (result < 0) - goto error_wimax_dev_add; - - /* now setup all that requires a registered net and wimax device. */ - result = sysfs_create_group(&net_dev->dev.kobj, &i2400m_dev_attr_group); - if (result < 0) { - dev_err(dev, "cannot setup i2400m's sysfs: %d ", result); - goto error_sysfs_setup; - } - - i2400m_debugfs_add(i2400m); - - result = i2400m_dev_start(i2400m, bm_flags); - if (result < 0) - goto error_dev_start; - d_fnend(3, dev, "(i2400m %p) = %d ", i2400m, result); - return result; - -error_dev_start: - i2400m_debugfs_rm(i2400m); - sysfs_remove_group(&i2400m->wimax_dev.net_dev->dev.kobj, - &i2400m_dev_attr_group); -error_sysfs_setup: - wimax_dev_rm(&i2400m->wimax_dev); -error_wimax_dev_add: - unregister_netdev(net_dev); -error_register_netdev: - unregister_pm_notifier(&i2400m->pm_notifier); -error_read_mac_addr: -error_bootrom_init: - if (i2400m->bus_release) - i2400m->bus_release(i2400m); -error_bus_setup: - i2400m_bm_buf_free(i2400m); -error_bm_buf_alloc: - d_fnend(3, dev, "(i2400m %p) = %d ", i2400m, result); - return result; -} -export_symbol_gpl(i2400m_setup); - - -/* - * i2400m_release - release the bus-generic driver resources - * - * sends a disconnect message and undoes any setup done by i2400m_setup() - */ -void i2400m_release(struct i2400m *i2400m) -{ - struct device *dev = i2400m_dev(i2400m); - - d_fnstart(3, dev, "(i2400m %p) ", i2400m); - netif_stop_queue(i2400m->wimax_dev.net_dev); - - i2400m_dev_stop(i2400m); - - cancel_work_sync(&i2400m->reset_ws); - cancel_work_sync(&i2400m->recovery_ws); - - i2400m_debugfs_rm(i2400m); - sysfs_remove_group(&i2400m->wimax_dev.net_dev->dev.kobj, - &i2400m_dev_attr_group); - wimax_dev_rm(&i2400m->wimax_dev); - 
unregister_netdev(i2400m->wimax_dev.net_dev); - unregister_pm_notifier(&i2400m->pm_notifier); - if (i2400m->bus_release) - i2400m->bus_release(i2400m); - i2400m_bm_buf_free(i2400m); - d_fnend(3, dev, "(i2400m %p) = void\n", i2400m); -} -EXPORT_SYMBOL_GPL(i2400m_release); - - -/* - * Debug levels control; see debug.h - */ -struct d_level D_LEVEL[] = { - D_SUBMODULE_DEFINE(control), - D_SUBMODULE_DEFINE(driver), - D_SUBMODULE_DEFINE(debugfs), - D_SUBMODULE_DEFINE(fw), - D_SUBMODULE_DEFINE(netdev), - D_SUBMODULE_DEFINE(rfkill), - D_SUBMODULE_DEFINE(rx), - D_SUBMODULE_DEFINE(sysfs), - D_SUBMODULE_DEFINE(tx), -}; -size_t D_LEVEL_SIZE = ARRAY_SIZE(D_LEVEL); - - -static -int __init i2400m_driver_init(void) -{ - d_parse_params(D_LEVEL, D_LEVEL_SIZE, i2400m_debug_params, - "i2400m.debug"); - return i2400m_barker_db_init(i2400m_barkers_params); -} -module_init(i2400m_driver_init); - -static -void __exit i2400m_driver_exit(void) -{ - i2400m_barker_db_exit(); -} -module_exit(i2400m_driver_exit); - -MODULE_AUTHOR("Intel Corporation <linux-wimax@intel.com>"); -MODULE_DESCRIPTION("Intel 2400M WiMAX networking bus-generic driver"); -MODULE_LICENSE("GPL"); diff --git a/drivers/staging/wimax/i2400m/fw.c b/drivers/staging/wimax/i2400m/fw.c --- a/drivers/staging/wimax/i2400m/fw.c +++ /dev/null -/* - * Intel Wireless WiMAX Connection 2400m - * Firmware uploader - * - * - * Copyright (C) 2007-2008 Intel Corporation. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution.
- * * neither the name of intel corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * this software is provided by the copyright holders and contributors - * "as is" and any express or implied warranties, including, but not - * limited to, the implied warranties of merchantability and fitness for - * a particular purpose are disclaimed. in no event shall the copyright - * owner or contributors be liable for any direct, indirect, incidental, - * special, exemplary, or consequential damages (including, but not - * limited to, procurement of substitute goods or services; loss of use, - * data, or profits; or business interruption) however caused and on any - * theory of liability, whether in contract, strict liability, or tort - * (including negligence or otherwise) arising in any way out of the use - * of this software, even if advised of the possibility of such damage. - * - * - * intel corporation <linux-wimax@intel.com> - * yanir lubetkin <yanirx.lubetkin@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - * - initial implementation - * - * - * the procedure - * - * the 2400m and derived devices work in two modes: boot-mode or - * normal mode. in boot mode we can execute only a handful of commands - * targeted at uploading the firmware and launching it. - * - * the 2400m enters boot mode when it is first connected to the - * system, when it crashes and when you ask it to reboot. there are - * two submodes of the boot mode: signed and non-signed. signed takes - * firmwares signed with a certain private key, non-signed takes any - * firmware. normal hardware takes only signed firmware. - * - * on boot mode, in usb, we write to the device using the bulk out - * endpoint and read from it in the notification endpoint. 
- * - * upon entrance to boot mode, the device sends (preceded with a few - * zero length packets (zlps) on the notification endpoint in usb) a - * reboot barker (4 le32 words with the same value). we ack it by - * sending the same barker to the device. the device acks with a - * reboot ack barker (4 le32 words with value i2400m_ack_barker) and - * then is fully booted. at this point we can upload the firmware. - * - * note that different iterations of the device and eeprom - * configurations will send different [re]boot barkers; these are - * collected in i2400m_barker_db along with the firmware - * characteristics they require. - * - * this process is accomplished by the i2400m_bootrom_init() - * function. all the device interaction happens through the - * i2400m_bm_cmd() [boot mode command]. special return values will - * indicate if the device did reset during the process. - * - * after this, we read the mac address and then (if needed) - * reinitialize the device. we need to read it ahead of time because - * in the future, we might not upload the firmware until userspace - * 'ifconfig up's the device. - * - * we can then upload the firmware file. the file is composed of a bcf - * header (basic data, keys and signatures) and a list of write - * commands and payloads. optionally more bcf headers might follow the - * main payload. we first upload the header [i2400m_dnload_init()] and - * then pass the commands and payloads verbatim to the i2400m_bm_cmd() - * function [i2400m_dnload_bcf()]. then we tell the device to jump to - * the new firmware [i2400m_dnload_finalize()]. - * - * once firmware is uploaded, we are good to go :) - * - * when we don't know in which mode we are, we first try by sending a - * warm reset request that will take us to boot-mode. if we time out - * waiting for a reboot barker, that means maybe we are already in - * boot mode, so we send a reboot barker. 
- * - * command execution - * - * this code (and process) is single threaded; for executing commands, - * we post a urb to the notification endpoint, post the command, wait - * for data on the notification buffer. we don't need to worry about - * others as we know we are the only ones in there. - * - * backend implementation - * - * this code is bus-generic; the bus-specific driver provides back end - * implementations to send a boot mode command to the device and to - * read an acknolwedgement from it (or an asynchronous notification) - * from it. - * - * firmware loading - * - * note that in some cases, we can't just load a firmware file (for - * example, when resuming). for that, we might cache the firmware - * file. thus, when doing the bootstrap, if there is a cache firmware - * file, it is used; if not, loading from disk is attempted. - * - * roadmap - * - * i2400m_barker_db_init called by i2400m_driver_init() - * i2400m_barker_db_add - * - * i2400m_barker_db_exit called by i2400m_driver_exit() - * - * i2400m_dev_bootstrap called by __i2400m_dev_start() - * request_firmware - * i2400m_fw_bootstrap - * i2400m_fw_check - * i2400m_fw_hdr_check - * i2400m_fw_dnload - * release_firmware - * - * i2400m_fw_dnload - * i2400m_bootrom_init - * i2400m_bm_cmd - * i2400m_reset - * i2400m_dnload_init - * i2400m_dnload_init_signed - * i2400m_dnload_init_nonsigned - * i2400m_download_chunk - * i2400m_bm_cmd - * i2400m_dnload_bcf - * i2400m_bm_cmd - * i2400m_dnload_finalize - * i2400m_bm_cmd - * - * i2400m_bm_cmd - * i2400m->bus_bm_cmd_send() - * i2400m->bus_bm_wait_for_ack - * __i2400m_bm_ack_verify - * i2400m_is_boot_barker - * - * i2400m_bm_cmd_prepare used by bus-drivers to prep - * commands before sending - * - * i2400m_pm_notifier called on power management events - * i2400m_fw_cache - * i2400m_fw_uncache - */ -#include <linux/firmware.h> -#include <linux/sched.h> -#include <linux/slab.h> -#include <linux/usb.h> -#include <linux/export.h> -#include "i2400m.h" - - 
-#define d_submodule fw -#include "debug-levels.h" - - -static const __le32 i2400m_ack_barker[4] = { - cpu_to_le32(i2400m_ack_barker), - cpu_to_le32(i2400m_ack_barker), - cpu_to_le32(i2400m_ack_barker), - cpu_to_le32(i2400m_ack_barker) -}; - - -/** - * prepare a boot-mode command for delivery - * - * @cmd: pointer to bootrom header to prepare - * - * computes checksum if so needed. after calling this function, do not - * modify the command or header as the checksum won't work anymore. - * - * we do it from here because some times we cannot do it in the - * original context the command was sent (it is a const), so when we - * copy it to our staging buffer, we add the checksum there. - */ -void i2400m_bm_cmd_prepare(struct i2400m_bootrom_header *cmd) -{ - if (i2400m_brh_get_use_checksum(cmd)) { - int i; - __le32 checksum = 0; - const u32 *checksum_ptr = (void *) cmd->payload; - - for (i = 0; i < le32_to_cpu(cmd->data_size) / 4; i++) - le32_add_cpu(&checksum, *checksum_ptr++); - - le32_add_cpu(&checksum, le32_to_cpu(cmd->command)); - le32_add_cpu(&checksum, le32_to_cpu(cmd->target_addr)); - le32_add_cpu(&checksum, le32_to_cpu(cmd->data_size)); - - cmd->block_checksum = checksum; - } -} -export_symbol_gpl(i2400m_bm_cmd_prepare); - - -/* - * database of known barkers. - * - * a barker is what the device sends indicating he is ready to be - * bootloaded. different versions of the device will send different - * barkers. depending on the barker, it might mean the device wants - * some kind of firmware or the other. - */ -static struct i2400m_barker_db { - __le32 data[4]; -} *i2400m_barker_db; -static size_t i2400m_barker_db_used, i2400m_barker_db_size; - - -static -int i2400m_zrealloc_2x(void **ptr, size_t *_count, size_t el_size, - gfp_t gfp_flags) -{ - size_t old_count = *_count, - new_count = old_count ? 
2 * old_count : 2, - old_size = el_size * old_count, - new_size = el_size * new_count; - void *nptr = krealloc(*ptr, new_size, gfp_flags); - if (nptr) { - /* zero the other half or the whole thing if old_count - * was zero */ - if (old_size == 0) - memset(nptr, 0, new_size); - else - memset(nptr + old_size, 0, old_size); - *_count = new_count; - *ptr = nptr; - return 0; - } else - return -enomem; -} - - -/* - * add a barker to the database - * - * this cannot used outside of this module and only at at module_init - * time. this is to avoid the need to do locking. - */ -static -int i2400m_barker_db_add(u32 barker_id) -{ - int result; - - struct i2400m_barker_db *barker; - if (i2400m_barker_db_used >= i2400m_barker_db_size) { - result = i2400m_zrealloc_2x( - (void **) &i2400m_barker_db, &i2400m_barker_db_size, - sizeof(i2400m_barker_db[0]), gfp_kernel); - if (result < 0) - return result; - } - barker = i2400m_barker_db + i2400m_barker_db_used++; - barker->data[0] = cpu_to_le32(barker_id); - barker->data[1] = cpu_to_le32(barker_id); - barker->data[2] = cpu_to_le32(barker_id); - barker->data[3] = cpu_to_le32(barker_id); - return 0; -} - - -void i2400m_barker_db_exit(void) -{ - kfree(i2400m_barker_db); - i2400m_barker_db = null; - i2400m_barker_db_size = 0; - i2400m_barker_db_used = 0; -} - - -/* - * helper function to add all the known stable barkers to the barker - * database. - */ -static -int i2400m_barker_db_known_barkers(void) -{ - int result; - - result = i2400m_barker_db_add(i2400m_nboot_barker); - if (result < 0) - goto error_add; - result = i2400m_barker_db_add(i2400m_sboot_barker); - if (result < 0) - goto error_add; - result = i2400m_barker_db_add(i2400m_sboot_barker_6050); - if (result < 0) - goto error_add; -error_add: - return result; -} - - -/* - * initialize the barker database - * - * this can only be used from the module_init function for this - * module; this is to avoid the need to do locking. 
- * - * @options: command line argument with extra barkers to - * recognize. this is a comma-separated list of 32-bit hex - * numbers. they are appended to the existing list. setting 0 - * cleans the existing list and starts a new one. - */ -int i2400m_barker_db_init(const char *_options) -{ - int result; - char *options = null, *options_orig, *token; - - i2400m_barker_db = null; - i2400m_barker_db_size = 0; - i2400m_barker_db_used = 0; - - result = i2400m_barker_db_known_barkers(); - if (result < 0) - goto error_add; - /* parse command line options from i2400m.barkers */ - if (_options != null) { - unsigned barker; - - options_orig = kstrdup(_options, gfp_kernel); - if (options_orig == null) { - result = -enomem; - goto error_parse; - } - options = options_orig; - - while ((token = strsep(&options, ",")) != null) { - if (*token == '') /* eat joint commas */ - continue; - if (sscanf(token, "%x", &barker) != 1 - || barker > 0xffffffff) { - printk(kern_err "%s: can't recognize " - "i2400m.barkers value '%s' as " - "a 32-bit number ", - __func__, token); - result = -einval; - goto error_parse; - } - if (barker == 0) { - /* clean list and start new */ - i2400m_barker_db_exit(); - continue; - } - result = i2400m_barker_db_add(barker); - if (result < 0) - goto error_parse_add; - } - kfree(options_orig); - } - return 0; - -error_parse_add: -error_parse: - kfree(options_orig); -error_add: - kfree(i2400m_barker_db); - return result; -} - - -/* - * recognize a boot barker - * - * @buf: buffer where the boot barker. - * @buf_size: size of the buffer (has to be 16 bytes). it is passed - * here so the function can check it for the caller. - * - * note that as a side effect, upon identifying the obtained boot - * barker, this function will set i2400m->barker to point to the right - * barker database entry. 
subsequent calls to the function will result - * in verifying that the same type of boot barker is returned when the - * device [re]boots (as long as the same device instance is used). - * - * return: 0 if @buf matches a known boot barker. -enoent if the - * buffer in @buf doesn't match any boot barker in the database or - * -eilseq if the buffer doesn't have the right size. - */ -int i2400m_is_boot_barker(struct i2400m *i2400m, - const void *buf, size_t buf_size) -{ - int result; - struct device *dev = i2400m_dev(i2400m); - struct i2400m_barker_db *barker; - int i; - - result = -enoent; - if (buf_size != sizeof(i2400m_barker_db[i].data)) - return result; - - /* short circuit if we have already discovered the barker - * associated with the device. */ - if (i2400m->barker && - !memcmp(buf, i2400m->barker, sizeof(i2400m->barker->data))) - return 0; - - for (i = 0; i < i2400m_barker_db_used; i++) { - barker = &i2400m_barker_db[i]; - build_bug_on(sizeof(barker->data) != 16); - if (memcmp(buf, barker->data, sizeof(barker->data))) - continue; - - if (i2400m->barker == null) { - i2400m->barker = barker; - d_printf(1, dev, "boot barker set to #%u/%08x ", - i, le32_to_cpu(barker->data[0])); - if (barker->data[0] == le32_to_cpu(i2400m_nboot_barker)) - i2400m->sboot = 0; - else - i2400m->sboot = 1; - } else if (i2400m->barker != barker) { - dev_err(dev, "hw inconsistency: device " - "reports a different boot barker " - "than set (from %08x to %08x) ", - le32_to_cpu(i2400m->barker->data[0]), - le32_to_cpu(barker->data[0])); - result = -eio; - } else - d_printf(2, dev, "boot barker confirmed #%u/%08x ", - i, le32_to_cpu(barker->data[0])); - result = 0; - break; - } - return result; -} -export_symbol_gpl(i2400m_is_boot_barker); - - -/* - * verify the ack data received - * - * given a reply to a boot mode command, chew it and verify everything - * is ok. - * - * @opcode: opcode which generated this ack. for error messages. 
- * @ack: pointer to ack data we received - * @ack_size: size of that data buffer - * @flags: i2400m_bm_cmd_* flags we called the command with. - * - * way too long function -- maybe it should be further split - */ -static -ssize_t __i2400m_bm_ack_verify(struct i2400m *i2400m, int opcode, - struct i2400m_bootrom_header *ack, - size_t ack_size, int flags) -{ - ssize_t result = -enomem; - struct device *dev = i2400m_dev(i2400m); - - d_fnstart(8, dev, "(i2400m %p opcode %d ack %p size %zu) ", - i2400m, opcode, ack, ack_size); - if (ack_size < sizeof(*ack)) { - result = -eio; - dev_err(dev, "boot-mode cmd %d: hw bug? notification didn't " - "return enough data (%zu bytes vs %zu expected) ", - opcode, ack_size, sizeof(*ack)); - goto error_ack_short; - } - result = i2400m_is_boot_barker(i2400m, ack, ack_size); - if (result >= 0) { - result = -erestartsys; - d_printf(6, dev, "boot-mode cmd %d: hw boot barker ", opcode); - goto error_reboot; - } - if (ack_size == sizeof(i2400m_ack_barker) - && memcmp(ack, i2400m_ack_barker, sizeof(*ack)) == 0) { - result = -eisconn; - d_printf(3, dev, "boot-mode cmd %d: hw reboot ack barker ", - opcode); - goto error_reboot_ack; - } - result = 0; - if (flags & i2400m_bm_cmd_raw) - goto out_raw; - ack->data_size = le32_to_cpu(ack->data_size); - ack->target_addr = le32_to_cpu(ack->target_addr); - ack->block_checksum = le32_to_cpu(ack->block_checksum); - d_printf(5, dev, "boot-mode cmd %d: notification for opcode %u " - "response %u csum %u rr %u da %u ", - opcode, i2400m_brh_get_opcode(ack), - i2400m_brh_get_response(ack), - i2400m_brh_get_use_checksum(ack), - i2400m_brh_get_response_required(ack), - i2400m_brh_get_direct_access(ack)); - result = -eio; - if (i2400m_brh_get_signature(ack) != 0xcbbc) { - dev_err(dev, "boot-mode cmd %d: hw bug? 
wrong signature " - "0x%04x ", opcode, i2400m_brh_get_signature(ack)); - goto error_ack_signature; - } - if (opcode != -1 && opcode != i2400m_brh_get_opcode(ack)) { - dev_err(dev, "boot-mode cmd %d: hw bug? " - "received response for opcode %u, expected %u ", - opcode, i2400m_brh_get_opcode(ack), opcode); - goto error_ack_opcode; - } - if (i2400m_brh_get_response(ack) != 0) { /* failed? */ - dev_err(dev, "boot-mode cmd %d: error; hw response %u ", - opcode, i2400m_brh_get_response(ack)); - goto error_ack_failed; - } - if (ack_size < le32_to_cpu(ack->data_size) + sizeof(*ack)) { - dev_err(dev, "boot-mode cmd %d: sw bug " - "driver provided only %zu bytes for %zu bytes " - "of data ", opcode, ack_size, - (size_t) le32_to_cpu(ack->data_size) + sizeof(*ack)); - goto error_ack_short_buffer; - } - result = ack_size; - /* don't you love this stack of empty targets? well, i don't - * either, but it helps track exactly who comes in here and - * why :) */ -error_ack_short_buffer: -error_ack_failed: -error_ack_opcode: -error_ack_signature: -out_raw: -error_reboot_ack: -error_reboot: -error_ack_short: - d_fnend(8, dev, "(i2400m %p opcode %d ack %p size %zu) = %d ", - i2400m, opcode, ack, ack_size, (int) result); - return result; -} - - -/** - * i2400m_bm_cmd - execute a boot mode command - * - * @i2400m: device descriptor - * @cmd: buffer containing the command data (pointing at the header). - * this data can be anywhere (for usb, we will copy it to an - * specific buffer). make sure everything is in proper little - * endian. - * - * a raw buffer can be also sent, just cast it and set flags to - * i2400m_bm_cmd_raw. - * - * this function will generate a checksum for you if the - * checksum bit in the command is set (unless i2400m_bm_cmd_raw - * is set). - * - * you can use the i2400m->bm_cmd_buf to stage your commands and - * send them. - * - * if null, no command is sent (we just wait for an ack). - * - * @cmd_size: size of the command. 
will be auto padded to the - * bus-specific drivers padding requirements. - * - * @ack: buffer where to place the acknowledgement. if it is a regular - * command response, all fields will be returned with the right, - * native endianess. - * - * you *cannot* use i2400m->bm_ack_buf for this buffer. - * - * @ack_size: size of @ack, 16 aligned; you need to provide at least - * sizeof(*ack) bytes and then enough to contain the return data - * from the command - * - * @flags: see i2400m_bm_cmd_* above. - * - * returns: bytes received by the notification; if < 0, an errno code - * denoting an error or: - * - * -erestartsys the device has rebooted - * - * executes a boot-mode command and waits for a response, doing basic - * validation on it; if a zero length response is received, it retries - * waiting for a response until a non-zero one is received (timing out - * after %i2400m_boot_retries retries). - */ -static -ssize_t i2400m_bm_cmd(struct i2400m *i2400m, - const struct i2400m_bootrom_header *cmd, size_t cmd_size, - struct i2400m_bootrom_header *ack, size_t ack_size, - int flags) -{ - ssize_t result, rx_bytes; - struct device *dev = i2400m_dev(i2400m); - int opcode = cmd == null ? 
-1 : i2400m_brh_get_opcode(cmd); - - d_fnstart(6, dev, "(i2400m %p cmd %p size %zu ack %p size %zu) ", - i2400m, cmd, cmd_size, ack, ack_size); - bug_on(ack_size < sizeof(*ack)); - bug_on(i2400m->boot_mode == 0); - - if (cmd != null) { /* send the command */ - result = i2400m->bus_bm_cmd_send(i2400m, cmd, cmd_size, flags); - if (result < 0) - goto error_cmd_send; - if ((flags & i2400m_bm_cmd_raw) == 0) - d_printf(5, dev, - "boot-mode cmd %d csum %u rr %u da %u: " - "addr 0x%04x size %u block csum 0x%04x ", - opcode, i2400m_brh_get_use_checksum(cmd), - i2400m_brh_get_response_required(cmd), - i2400m_brh_get_direct_access(cmd), - cmd->target_addr, cmd->data_size, - cmd->block_checksum); - } - result = i2400m->bus_bm_wait_for_ack(i2400m, ack, ack_size); - if (result < 0) { - dev_err(dev, "boot-mode cmd %d: error waiting for an ack: %d ", - opcode, (int) result); /* bah, %zd doesn't work */ - goto error_wait_for_ack; - } - rx_bytes = result; - /* verify the ack and read more if necessary [result is the - * final amount of bytes we get in the ack] */ - result = __i2400m_bm_ack_verify(i2400m, opcode, ack, ack_size, flags); - if (result < 0) - goto error_bad_ack; - /* don't you love this stack of empty targets? 
well, i don't - * either, but it helps track exactly who comes in here and - * why :) */ - result = rx_bytes; -error_bad_ack: -error_wait_for_ack: -error_cmd_send: - d_fnend(6, dev, "(i2400m %p cmd %p size %zu ack %p size %zu) = %d ", - i2400m, cmd, cmd_size, ack, ack_size, (int) result); - return result; -} - - -/** - * i2400m_download_chunk - write a single chunk of data to the device's memory - * - * @i2400m: device descriptor - * @chunk: the buffer to write - * @__chunk_len: length of the buffer to write - * @addr: address in the device memory space - * @direct: bootrom write mode - * @do_csum: should a checksum validation be performed - */ -static int i2400m_download_chunk(struct i2400m *i2400m, const void *chunk, - size_t __chunk_len, unsigned long addr, - unsigned int direct, unsigned int do_csum) -{ - int ret; - size_t chunk_len = align(__chunk_len, i2400m_pl_align); - struct device *dev = i2400m_dev(i2400m); - struct { - struct i2400m_bootrom_header cmd; - u8 cmd_payload[]; - } __packed *buf; - struct i2400m_bootrom_header ack; - - d_fnstart(5, dev, "(i2400m %p chunk %p __chunk_len %zu addr 0x%08lx " - "direct %u do_csum %u) ", i2400m, chunk, __chunk_len, - addr, direct, do_csum); - buf = i2400m->bm_cmd_buf; - memcpy(buf->cmd_payload, chunk, __chunk_len); - memset(buf->cmd_payload + __chunk_len, 0xad, chunk_len - __chunk_len); - - buf->cmd.command = i2400m_brh_command(i2400m_brh_write, - __chunk_len & 0x3 ? 0 : do_csum, - __chunk_len & 0xf ? 
0 : direct); - buf->cmd.target_addr = cpu_to_le32(addr); - buf->cmd.data_size = cpu_to_le32(__chunk_len); - ret = i2400m_bm_cmd(i2400m, &buf->cmd, sizeof(buf->cmd) + chunk_len, - &ack, sizeof(ack), 0); - if (ret >= 0) - ret = 0; - d_fnend(5, dev, "(i2400m %p chunk %p __chunk_len %zu addr 0x%08lx " - "direct %u do_csum %u) = %d ", i2400m, chunk, __chunk_len, - addr, direct, do_csum, ret); - return ret; -} - - -/* - * download a bcf file's sections to the device - * - * @i2400m: device descriptor - * @bcf: pointer to firmware data (first header followed by the - * payloads). assumed verified and consistent. - * @bcf_len: length (in bytes) of the @bcf buffer. - * - * returns: < 0 errno code on error or the offset to the jump instruction. - * - * given a bcf file, downloads each section (a command and a payload) - * to the device's address space. actually, it just executes each - * command i the bcf file. - * - * the section size has to be aligned to 4 bytes and the padding has - * to be taken from the firmware file, as the signature takes it into - * account. 
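The comment above notes that each BCF section is a bootrom header plus payload, with the size padded to 4 bytes because the signature covers the padding. A small userspace sketch of that size computation (the `bcf_section_hdr` struct and `align_up` helper here are illustrative stand-ins, not the driver's actual `struct i2400m_bootrom_header` or the kernel's `ALIGN()` macro):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for the driver's bootrom header: just the
 * fields needed to compute a section size (4 x 32-bit words). */
struct bcf_section_hdr {
	uint32_t command;
	uint32_t target_addr;
	uint32_t data_size;	/* payload bytes following the header */
	uint32_t block_checksum;
};

/* Round x up to a multiple of a (a must be a power of two),
 * in the style of the kernel's ALIGN() macro. */
static size_t align_up(size_t x, size_t a)
{
	return (x + a - 1) & ~(a - 1);
}

/* Whole-section size: header + payload, padded to 4 bytes, which is
 * how the download loop advances its offset through the BCF file. */
static size_t bcf_section_size(const struct bcf_section_hdr *bh)
{
	return align_up(sizeof(*bh) + bh->data_size, 4);
}
```

With a 16-byte header and a 5-byte payload this yields 24 bytes, so the next section starts on a 4-byte boundary as the signature computation expects.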
- */ -static -ssize_t i2400m_dnload_bcf(struct i2400m *i2400m, - const struct i2400m_bcf_hdr *bcf, size_t bcf_len) -{ - ssize_t ret; - struct device *dev = i2400m_dev(i2400m); - size_t offset, /* iterator offset */ - data_size, /* size of the data payload */ - section_size, /* size of the whole section (cmd + payload) */ - section = 1; - const struct i2400m_bootrom_header *bh; - struct i2400m_bootrom_header ack; - - d_fnstart(3, dev, "(i2400m %p bcf %p bcf_len %zu) ", - i2400m, bcf, bcf_len); - /* iterate over the command blocks in the bcf file that start - * after the header */ - offset = le32_to_cpu(bcf->header_len) * sizeof(u32); - while (1) { /* start sending the file */ - bh = (void *) bcf + offset; - data_size = le32_to_cpu(bh->data_size); - section_size = align(sizeof(*bh) + data_size, 4); - d_printf(7, dev, - "downloading section #%zu (@%zu %zu b) to 0x%08x ", - section, offset, sizeof(*bh) + data_size, - le32_to_cpu(bh->target_addr)); - /* - * we look for jump cmd from the bootmode header, - * either i2400m_brh_signed_jump for secure boot - * or i2400m_brh_jump for unsecure boot, the last chunk - * should be the bootmode header with jump cmd. 
- */ - if (i2400m_brh_get_opcode(bh) == i2400m_brh_signed_jump || - i2400m_brh_get_opcode(bh) == i2400m_brh_jump) { - d_printf(5, dev, "jump found @%zu ", offset); - break; - } - if (offset + section_size > bcf_len) { - dev_err(dev, "fw %s: bad section #%zu, " - "end (@%zu) beyond eof (@%zu) ", - i2400m->fw_name, section, - offset + section_size, bcf_len); - ret = -einval; - goto error_section_beyond_eof; - } - __i2400m_msleep(20); - ret = i2400m_bm_cmd(i2400m, bh, section_size, - &ack, sizeof(ack), i2400m_bm_cmd_raw); - if (ret < 0) { - dev_err(dev, "fw %s: section #%zu (@%zu %zu b) " - "failed %d ", i2400m->fw_name, section, - offset, sizeof(*bh) + data_size, (int) ret); - goto error_send; - } - offset += section_size; - section++; - } - ret = offset; -error_section_beyond_eof: -error_send: - d_fnend(3, dev, "(i2400m %p bcf %p bcf_len %zu) = %d ", - i2400m, bcf, bcf_len, (int) ret); - return ret; -} - - -/* - * indicate if the device emitted a reboot barker that indicates - * "signed boot" - */ -static -unsigned i2400m_boot_is_signed(struct i2400m *i2400m) -{ - return likely(i2400m->sboot); -} - - -/* - * do the final steps of uploading firmware - * - * @bcf_hdr: bcf header we are actually using - * @bcf: pointer to the firmware image (which matches the first header - * that is followed by the actual payloads). - * @offset: [byte] offset into @bcf for the command we need to send. - * - * depending on the boot mode (signed vs non-signed), different - * actions need to be taken. 
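For the signed path that follows, the code locates the RSA signature block inside the BCF header by skipping the fixed header plus the key and exponent fields (whose sizes are stored as counts of 32-bit words); the block itself spans `modulus_size` words. A sketch of that arithmetic, using a made-up `bcf_hdr_sizes` struct in place of the real `struct i2400m_bcf_hdr`:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical subset of the BCF header: all sizes are in 32-bit words,
 * matching how the driver multiplies them by sizeof(u32). */
struct bcf_hdr_sizes {
	uint32_t key_size;
	uint32_t exponent_size;
	uint32_t modulus_size;
};

/* Byte offset of the signature block from the start of the header,
 * given a fixed header of hdr_bytes bytes. */
static size_t sig_block_offset(size_t hdr_bytes, const struct bcf_hdr_sizes *h)
{
	return hdr_bytes
	       + (size_t)h->key_size * sizeof(uint32_t)
	       + (size_t)h->exponent_size * sizeof(uint32_t);
}

/* Byte length of the signature block itself. */
static size_t sig_block_size(const struct bcf_hdr_sizes *h)
{
	return (size_t)h->modulus_size * sizeof(uint32_t);
}
```

The driver then memcpy()s that many bytes from that offset into the command payload before issuing the signed-jump command.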
- */ -static -int i2400m_dnload_finalize(struct i2400m *i2400m, - const struct i2400m_bcf_hdr *bcf_hdr, - const struct i2400m_bcf_hdr *bcf, size_t offset) -{ - int ret = 0; - struct device *dev = i2400m_dev(i2400m); - struct i2400m_bootrom_header *cmd, ack; - struct { - struct i2400m_bootrom_header cmd; - u8 cmd_pl[0]; - } __packed *cmd_buf; - size_t signature_block_offset, signature_block_size; - - d_fnstart(3, dev, "offset %zu ", offset); - cmd = (void *) bcf + offset; - if (i2400m_boot_is_signed(i2400m) == 0) { - struct i2400m_bootrom_header jump_ack; - d_printf(1, dev, "unsecure boot, jumping to 0x%08x ", - le32_to_cpu(cmd->target_addr)); - cmd_buf = i2400m->bm_cmd_buf; - memcpy(&cmd_buf->cmd, cmd, sizeof(*cmd)); - cmd = &cmd_buf->cmd; - /* now cmd points to the actual bootrom_header in cmd_buf */ - i2400m_brh_set_opcode(cmd, i2400m_brh_jump); - cmd->data_size = 0; - ret = i2400m_bm_cmd(i2400m, cmd, sizeof(*cmd), - &jump_ack, sizeof(jump_ack), 0); - } else { - d_printf(1, dev, "secure boot, jumping to 0x%08x ", - le32_to_cpu(cmd->target_addr)); - cmd_buf = i2400m->bm_cmd_buf; - memcpy(&cmd_buf->cmd, cmd, sizeof(*cmd)); - signature_block_offset = - sizeof(*bcf_hdr) - + le32_to_cpu(bcf_hdr->key_size) * sizeof(u32) - + le32_to_cpu(bcf_hdr->exponent_size) * sizeof(u32); - signature_block_size = - le32_to_cpu(bcf_hdr->modulus_size) * sizeof(u32); - memcpy(cmd_buf->cmd_pl, - (void *) bcf_hdr + signature_block_offset, - signature_block_size); - ret = i2400m_bm_cmd(i2400m, &cmd_buf->cmd, - sizeof(cmd_buf->cmd) + signature_block_size, - &ack, sizeof(ack), i2400m_bm_cmd_raw); - } - d_fnend(3, dev, "returning %d ", ret); - return ret; -} - - -/** - * i2400m_bootrom_init - reboots a powered device into boot mode - * - * @i2400m: device descriptor - * @flags: - * i2400m_bri_soft: a reboot barker has been seen - * already, so don't wait for it. - * - * i2400m_bri_no_reboot: don't send a reboot command, but wait - * for a reboot barker notification. 
this is a one shot; if - * the state machine needs to send a reboot command it will. - * - * returns: - * - * < 0 errno code on error, 0 if ok. - * - * description: - * - * tries hard enough to put the device in boot-mode. there are two - * main phases to this: - * - * a. (1) send a reboot command and (2) get a reboot barker - * - * b. (1) echo/ack the reboot sending the reboot barker back and (2) - * getting an ack barker in return - * - * we want to skip (a) in some cases [soft]. the state machine is - * horrible, but it is basically: on each phase, send what has to be - * sent (if any), wait for the answer and act on the answer. we might - * have to backtrack and retry, so we keep a max tries counter for - * that. - * - * it sucks because we don't know ahead of time which is going to be - * the reboot barker (the device might send different ones depending - * on its eeprom config) and once the device reboots and waits for the - * echo/ack reboot barker being sent back, it doesn't understand - * anything else. so we can be left at the point where we don't know - * what to send to it -- cold reset and bus reset seem to have little - * effect. so the function iterates (in this case) through all the - * known barkers and tries them all until an ack is - * received. otherwise, it gives up. - * - * if we get a timeout after sending a warm reset, we do it again. 
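When the expected barker is unknown, the init loop described above falls back to trying every entry in the barker database; recognition itself (as in i2400m_is_boot_barker()) is a 16-byte memcmp against each stored barker. A minimal sketch, assuming a simplified `barker` struct in place of the driver's `struct i2400m_barker_db`:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* A barker is the same 32-bit value repeated four times (16 bytes);
 * the driver stores the words little-endian. */
struct barker {
	uint32_t data[4];
};

static void barker_fill(struct barker *b, uint32_t id)
{
	for (int i = 0; i < 4; i++)
		b->data[i] = id;
}

/* Return the index of the matching database entry, or -1, mirroring
 * the memcmp-based scan in i2400m_is_boot_barker(). */
static int barker_lookup(const struct barker *db, int n, const void *buf)
{
	for (int i = 0; i < n; i++)
		if (!memcmp(buf, db[i].data, sizeof(db[i].data)))
			return i;
	return -1;
}
```

In the driver the first successful match also latches `i2400m->barker`, so later reboots can short-circuit the scan with a single memcmp.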
- */ -int i2400m_bootrom_init(struct i2400m *i2400m, enum i2400m_bri flags) -{ - int result; - struct device *dev = i2400m_dev(i2400m); - struct i2400m_bootrom_header *cmd; - struct i2400m_bootrom_header ack; - int count = i2400m->bus_bm_retries; - int ack_timeout_cnt = 1; - unsigned i; - - build_bug_on(sizeof(*cmd) != sizeof(i2400m_barker_db[0].data)); - build_bug_on(sizeof(ack) != sizeof(i2400m_ack_barker)); - - d_fnstart(4, dev, "(i2400m %p flags 0x%08x) ", i2400m, flags); - result = -enomem; - cmd = i2400m->bm_cmd_buf; - if (flags & i2400m_bri_soft) - goto do_reboot_ack; -do_reboot: - ack_timeout_cnt = 1; - if (--count < 0) - goto error_timeout; - d_printf(4, dev, "device reboot: reboot command [%d # left] ", - count); - if ((flags & i2400m_bri_no_reboot) == 0) - i2400m_reset(i2400m, i2400m_rt_warm); - result = i2400m_bm_cmd(i2400m, null, 0, &ack, sizeof(ack), - i2400m_bm_cmd_raw); - flags &= ~i2400m_bri_no_reboot; - switch (result) { - case -erestartsys: - /* - * at this point, i2400m_bm_cmd(), through - * __i2400m_bm_ack_process(), has updated - * i2400m->barker and we are good to go. - */ - d_printf(4, dev, "device reboot: got reboot barker "); - break; - case -eisconn: /* we don't know how it got here...but we follow it */ - d_printf(4, dev, "device reboot: got ack barker - whatever "); - goto do_reboot; - case -etimedout: - /* - * device has timed out, we might be in boot mode - * already and expecting an ack; if we don't know what - * the barker is, we just send them all. cold reset - * and bus reset don't work. beats me. 
- */ - if (i2400m->barker != null) { - dev_err(dev, "device boot: reboot barker timed out, " - "trying (set) %08x echo/ack ", - le32_to_cpu(i2400m->barker->data[0])); - goto do_reboot_ack; - } - for (i = 0; i < i2400m_barker_db_used; i++) { - struct i2400m_barker_db *barker = &i2400m_barker_db[i]; - memcpy(cmd, barker->data, sizeof(barker->data)); - result = i2400m_bm_cmd(i2400m, cmd, sizeof(*cmd), - &ack, sizeof(ack), - i2400m_bm_cmd_raw); - if (result == -eisconn) { - dev_warn(dev, "device boot: got ack barker " - "after sending echo/ack barker " - "#%d/%08x; rebooting j.i.c. ", - i, le32_to_cpu(barker->data[0])); - flags &= ~i2400m_bri_no_reboot; - goto do_reboot; - } - } - dev_err(dev, "device boot: tried all the echo/acks, could " - "not get device to respond; giving up"); - result = -eshutdown; - case -eproto: - case -eshutdown: /* dev is gone */ - case -eintr: /* user cancelled */ - goto error_dev_gone; - default: - dev_err(dev, "device reboot: error %d while waiting " - "for reboot barker - rebooting ", result); - d_dump(1, dev, &ack, result); - goto do_reboot; - } - /* at this point we ack back with 4 reboot barkers and expect - * 4 ack barkers. this is ugly, as we send a raw command -- - * hence the cast. _bm_cmd() will catch the reboot ack - * notification and report it as -eisconn. */ -do_reboot_ack: - d_printf(4, dev, "device reboot ack: sending ack [%d # left] ", count); - memcpy(cmd, i2400m->barker->data, sizeof(i2400m->barker->data)); - result = i2400m_bm_cmd(i2400m, cmd, sizeof(*cmd), - &ack, sizeof(ack), i2400m_bm_cmd_raw); - switch (result) { - case -erestartsys: - d_printf(4, dev, "reboot ack: got reboot barker - retrying "); - if (--count < 0) - goto error_timeout; - goto do_reboot_ack; - case -eisconn: - d_printf(4, dev, "reboot ack: got ack barker - good "); - break; - case -etimedout: /* no response, maybe it is the other type? 
*/ - if (ack_timeout_cnt-- < 0) { - d_printf(4, dev, "reboot ack timedout: retrying "); - goto do_reboot_ack; - } else { - dev_err(dev, "reboot ack timedout too long: " - "trying reboot "); - goto do_reboot; - } - break; - case -eproto: - case -eshutdown: /* dev is gone */ - goto error_dev_gone; - default: - dev_err(dev, "device reboot ack: error %d while waiting for " - "reboot ack barker - rebooting ", result); - goto do_reboot; - } - d_printf(2, dev, "device reboot ack: got ack barker - boot done "); - result = 0; -exit_timeout: -error_dev_gone: - d_fnend(4, dev, "(i2400m %p flags 0x%08x) = %d ", - i2400m, flags, result); - return result; - -error_timeout: - dev_err(dev, "timed out waiting for reboot ack "); - result = -etimedout; - goto exit_timeout; -} - - -/* - * read the mac addr - * - * the position this function reads is fixed in device memory and - * always available, even without firmware. - * - * note we specify we want to read only six bytes, but provide space - * for 16, as we always get it rounded up. 
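The MAC-address read that follows has a fallback: when the bus driver flags the address as impaired, the code keeps a fixed Intel OUI (00:16:d3) and randomizes the low three bytes. A userspace sketch of that fixup (rand() stands in for the kernel's get_random_bytes()):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define ETH_ALEN 6

/* Fake a locally usable MAC: fixed Intel OUI in the top three bytes,
 * random low three bytes, as the impaired-address path does. */
static void fake_mac_addr(uint8_t mac[ETH_ALEN])
{
	mac[0] = 0x00;
	mac[1] = 0x16;
	mac[2] = 0xd3;
	for (int i = 3; i < ETH_ALEN; i++)
		mac[i] = (uint8_t)(rand() & 0xff);
}
```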
- */ -int i2400m_read_mac_addr(struct i2400m *i2400m) -{ - int result; - struct device *dev = i2400m_dev(i2400m); - struct net_device *net_dev = i2400m->wimax_dev.net_dev; - struct i2400m_bootrom_header *cmd; - struct { - struct i2400m_bootrom_header ack; - u8 ack_pl[16]; - } __packed ack_buf; - - d_fnstart(5, dev, "(i2400m %p) ", i2400m); - cmd = i2400m->bm_cmd_buf; - cmd->command = i2400m_brh_command(i2400m_brh_read, 0, 1); - cmd->target_addr = cpu_to_le32(0x00203fe8); - cmd->data_size = cpu_to_le32(6); - result = i2400m_bm_cmd(i2400m, cmd, sizeof(*cmd), - &ack_buf.ack, sizeof(ack_buf), 0); - if (result < 0) { - dev_err(dev, "bm: read mac addr failed: %d ", result); - goto error_read_mac; - } - d_printf(2, dev, "mac addr is %pm ", ack_buf.ack_pl); - if (i2400m->bus_bm_mac_addr_impaired == 1) { - ack_buf.ack_pl[0] = 0x00; - ack_buf.ack_pl[1] = 0x16; - ack_buf.ack_pl[2] = 0xd3; - get_random_bytes(&ack_buf.ack_pl[3], 3); - dev_err(dev, "bm is mac addr impaired, faking mac addr to " - "mac addr is %pm ", ack_buf.ack_pl); - result = 0; - } - net_dev->addr_len = eth_alen; - memcpy(net_dev->dev_addr, ack_buf.ack_pl, eth_alen); -error_read_mac: - d_fnend(5, dev, "(i2400m %p) = %d ", i2400m, result); - return result; -} - - -/* - * initialize a non signed boot - * - * this implies sending some magic values to the device's memory. note - * we convert the values to little endian in the same array - * declaration. 
- */ -static -int i2400m_dnload_init_nonsigned(struct i2400m *i2400m) -{ - unsigned i = 0; - int ret = 0; - struct device *dev = i2400m_dev(i2400m); - d_fnstart(5, dev, "(i2400m %p) ", i2400m); - if (i2400m->bus_bm_pokes_table) { - while (i2400m->bus_bm_pokes_table[i].address) { - ret = i2400m_download_chunk( - i2400m, - &i2400m->bus_bm_pokes_table[i].data, - sizeof(i2400m->bus_bm_pokes_table[i].data), - i2400m->bus_bm_pokes_table[i].address, 1, 1); - if (ret < 0) - break; - i++; - } - } - d_fnend(5, dev, "(i2400m %p) = %d ", i2400m, ret); - return ret; -} - - -/* - * initialize the signed boot process - * - * @i2400m: device descriptor - * - * @bcf_hdr: pointer to the firmware header; assumes it is fully in - * memory (it has gone through basic validation). - * - * returns: 0 if ok, < 0 errno code on error, -erestartsys if the hw - * rebooted. - * - * this writes the firmware bcf header to the device using the - * hash_payload_only command. - */ -static -int i2400m_dnload_init_signed(struct i2400m *i2400m, - const struct i2400m_bcf_hdr *bcf_hdr) -{ - int ret; - struct device *dev = i2400m_dev(i2400m); - struct { - struct i2400m_bootrom_header cmd; - struct i2400m_bcf_hdr cmd_pl; - } __packed *cmd_buf; - struct i2400m_bootrom_header ack; - - d_fnstart(5, dev, "(i2400m %p bcf_hdr %p) ", i2400m, bcf_hdr); - cmd_buf = i2400m->bm_cmd_buf; - cmd_buf->cmd.command = - i2400m_brh_command(i2400m_brh_hash_payload_only, 0, 0); - cmd_buf->cmd.target_addr = 0; - cmd_buf->cmd.data_size = cpu_to_le32(sizeof(cmd_buf->cmd_pl)); - memcpy(&cmd_buf->cmd_pl, bcf_hdr, sizeof(*bcf_hdr)); - ret = i2400m_bm_cmd(i2400m, &cmd_buf->cmd, sizeof(*cmd_buf), - &ack, sizeof(ack), 0); - if (ret >= 0) - ret = 0; - d_fnend(5, dev, "(i2400m %p bcf_hdr %p) = %d ", i2400m, bcf_hdr, ret); - return ret; -} - - -/* - * initialize the firmware download at the device size - * - * multiplex to the one that matters based on the device's mode - * (signed or non-signed). 
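`i2400m_dnload_init_nonsigned()` above walks `bus_bm_pokes_table` until it hits an entry whose address is zero: a sentinel-terminated table. A self-contained sketch of that pattern, with a hypothetical `write32` callback standing in for `i2400m_download_chunk()`:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical poke entry mirroring struct i2400m_poke_table: a zero
 * address terminates the table. */
struct poke { uint32_t address; uint32_t data; };

/* Walk a sentinel-terminated poke table, applying each write through a
 * caller-supplied callback; stop on the first error. */
static int apply_pokes(const struct poke *table,
		       int (*write32)(uint32_t addr, uint32_t val, void *ctx),
		       void *ctx)
{
	int ret = 0;

	for (size_t i = 0; table[i].address != 0; i++) {
		ret = write32(table[i].address, table[i].data, ctx);
		if (ret < 0)
			break;
	}
	return ret;
}

/* Example callback: just counts how many writes were applied. */
static int count_writes(uint32_t addr, uint32_t val, void *ctx)
{
	(void)addr;
	(void)val;
	(*(int *)ctx)++;
	return 0;
}
```

The sentinel convention lets each bus driver supply a table of arbitrary length without a separate size field.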
- */ -static -int i2400m_dnload_init(struct i2400m *i2400m, - const struct i2400m_bcf_hdr *bcf_hdr) -{ - int result; - struct device *dev = i2400m_dev(i2400m); - - if (i2400m_boot_is_signed(i2400m)) { - d_printf(1, dev, "signed boot "); - result = i2400m_dnload_init_signed(i2400m, bcf_hdr); - if (result == -erestartsys) - return result; - if (result < 0) - dev_err(dev, "firmware %s: signed boot download " - "initialization failed: %d ", - i2400m->fw_name, result); - } else { - /* non-signed boot process without pokes */ - d_printf(1, dev, "non-signed boot "); - result = i2400m_dnload_init_nonsigned(i2400m); - if (result == -erestartsys) - return result; - if (result < 0) - dev_err(dev, "firmware %s: non-signed download " - "initialization failed: %d ", - i2400m->fw_name, result); - } - return result; -} - - -/* - * run consistency tests on the firmware file and load up headers - * - * check for the firmware being made for the i2400m device, - * etc...these checks are mostly informative, as the device will make - * them too; but the driver's response is more informative on what - * went wrong. - * - * this will also look at all the headers present on the firmware - * file, and update i2400m->fw_bcf_hdr to point to them. 
- */ -static -int i2400m_fw_hdr_check(struct i2400m *i2400m, - const struct i2400m_bcf_hdr *bcf_hdr, - size_t index, size_t offset) -{ - struct device *dev = i2400m_dev(i2400m); - - unsigned module_type, header_len, major_version, minor_version, - module_id, module_vendor, date, size; - - module_type = le32_to_cpu(bcf_hdr->module_type); - header_len = sizeof(u32) * le32_to_cpu(bcf_hdr->header_len); - major_version = (le32_to_cpu(bcf_hdr->header_version) & 0xffff0000) - >> 16; - minor_version = le32_to_cpu(bcf_hdr->header_version) & 0x0000ffff; - module_id = le32_to_cpu(bcf_hdr->module_id); - module_vendor = le32_to_cpu(bcf_hdr->module_vendor); - date = le32_to_cpu(bcf_hdr->date); - size = sizeof(u32) * le32_to_cpu(bcf_hdr->size); - - d_printf(1, dev, "firmware %s #%zd@%08zx: bcf header " - "type:vendor:id 0x%x:%x:%x v%u.%u (%u/%u b) built %08x ", - i2400m->fw_name, index, offset, - module_type, module_vendor, module_id, - major_version, minor_version, header_len, size, date); - - /* hard errors */ - if (major_version != 1) { - dev_err(dev, "firmware %s #%zd@%08zx: major header version " - "v%u.%u not supported ", - i2400m->fw_name, index, offset, - major_version, minor_version); - return -ebadf; - } - - if (module_type != 6) { /* built for the right hardware? 
*/ - dev_err(dev, "firmware %s #%zd@%08zx: unexpected module " - "type 0x%x; aborting ", - i2400m->fw_name, index, offset, - module_type); - return -ebadf; - } - - if (module_vendor != 0x8086) { - dev_err(dev, "firmware %s #%zd@%08zx: unexpected module " - "vendor 0x%x; aborting ", - i2400m->fw_name, index, offset, module_vendor); - return -ebadf; - } - - if (date < 0x20080300) - dev_warn(dev, "firmware %s #%zd@%08zx: build date %08x " - "too old; unsupported ", - i2400m->fw_name, index, offset, date); - return 0; -} - - -/* - * run consistency tests on the firmware file and load up headers - * - * check for the firmware being made for the i2400m device, - * etc...these checks are mostly informative, as the device will make - * them too; but the driver's response is more informative on what - * went wrong. - * - * this will also look at all the headers present on the firmware - * file, and update i2400m->fw_hdrs to point to them. - */ -static -int i2400m_fw_check(struct i2400m *i2400m, const void *bcf, size_t bcf_size) -{ - int result; - struct device *dev = i2400m_dev(i2400m); - size_t headers = 0; - const struct i2400m_bcf_hdr *bcf_hdr; - const void *itr, *next, *top; - size_t slots = 0, used_slots = 0; - - for (itr = bcf, top = itr + bcf_size; - itr < top; - headers++, itr = next) { - size_t leftover, offset, header_len, size; - - leftover = top - itr; - offset = itr - bcf; - if (leftover <= sizeof(*bcf_hdr)) { - dev_err(dev, "firmware %s: %zu b left at @%zx, " - "not enough for bcf header ", - i2400m->fw_name, leftover, offset); - break; - } - bcf_hdr = itr; - /* only the first header is supposed to be followed by - * payload */ - header_len = sizeof(u32) * le32_to_cpu(bcf_hdr->header_len); - size = sizeof(u32) * le32_to_cpu(bcf_hdr->size); - if (headers == 0) - next = itr + size; - else - next = itr + header_len; - - result = i2400m_fw_hdr_check(i2400m, bcf_hdr, headers, offset); - if (result < 0) - continue; - if (used_slots + 1 >= slots) { - /* +1 -> we need 
to account for the one we'll - * occupy and at least an extra one for - * always being null */ - result = i2400m_zrealloc_2x( - (void **) &i2400m->fw_hdrs, &slots, - sizeof(i2400m->fw_hdrs[0]), - gfp_kernel); - if (result < 0) - goto error_zrealloc; - } - i2400m->fw_hdrs[used_slots] = bcf_hdr; - used_slots++; - } - if (headers == 0) { - dev_err(dev, "firmware %s: no usable headers found ", - i2400m->fw_name); - result = -ebadf; - } else - result = 0; -error_zrealloc: - return result; -} - - -/* - * match a barker to a bcf header module id - * - * the device sends a barker which tells the firmware loader which - * header in the bcf file has to be used. this does the matching. - */ -static -unsigned i2400m_bcf_hdr_match(struct i2400m *i2400m, - const struct i2400m_bcf_hdr *bcf_hdr) -{ - u32 barker = le32_to_cpu(i2400m->barker->data[0]) - & 0x7fffffff; - u32 module_id = le32_to_cpu(bcf_hdr->module_id) - & 0x7fffffff; /* high bit used for something else */ - - /* special case for 5x50 */ - if (barker == i2400m_sboot_barker && module_id == 0) - return 1; - if (module_id == barker) - return 1; - return 0; -} - -static -const struct i2400m_bcf_hdr *i2400m_bcf_hdr_find(struct i2400m *i2400m) -{ - struct device *dev = i2400m_dev(i2400m); - const struct i2400m_bcf_hdr **bcf_itr, *bcf_hdr; - unsigned i = 0; - u32 barker = le32_to_cpu(i2400m->barker->data[0]); - - d_printf(2, dev, "finding bcf header for barker %08x ", barker); - if (barker == i2400m_nboot_barker) { - bcf_hdr = i2400m->fw_hdrs[0]; - d_printf(1, dev, "using bcf header #%u/%08x for non-signed " - "barker ", 0, le32_to_cpu(bcf_hdr->module_id)); - return bcf_hdr; - } - for (bcf_itr = i2400m->fw_hdrs; *bcf_itr != null; bcf_itr++, i++) { - bcf_hdr = *bcf_itr; - if (i2400m_bcf_hdr_match(i2400m, bcf_hdr)) { - d_printf(1, dev, "hit on bcf hdr #%u/%08x ", - i, le32_to_cpu(bcf_hdr->module_id)); - return bcf_hdr; - } else - d_printf(1, dev, "miss on bcf hdr #%u/%08x ", - i, le32_to_cpu(bcf_hdr->module_id)); - } - 
dev_err(dev, "cannot find a matching bcf header for barker %08x ", - barker); - return null; -} - - -/* - * download the firmware to the device - * - * @i2400m: device descriptor - * @bcf: pointer to loaded (and minimally verified for consistency) - * firmware - * @bcf_size: size of the @bcf buffer (header plus payloads) - * - * the process for doing this is described in this file's header. - * - * note we only reinitialize boot-mode if the flags say so. some hw - * iterations need it, some don't. in any case, if we loop, we always - * need to reinitialize the boot room, hence the flags modification. - */ -static -int i2400m_fw_dnload(struct i2400m *i2400m, const struct i2400m_bcf_hdr *bcf, - size_t fw_size, enum i2400m_bri flags) -{ - int ret = 0; - struct device *dev = i2400m_dev(i2400m); - int count = i2400m->bus_bm_retries; - const struct i2400m_bcf_hdr *bcf_hdr; - size_t bcf_size; - - d_fnstart(5, dev, "(i2400m %p bcf %p fw size %zu) ", - i2400m, bcf, fw_size); - i2400m->boot_mode = 1; - wmb(); /* make sure other readers see it */ -hw_reboot: - if (count-- == 0) { - ret = -erestartsys; - dev_err(dev, "device rebooted too many times, aborting "); - goto error_too_many_reboots; - } - if (flags & i2400m_bri_mac_reinit) { - ret = i2400m_bootrom_init(i2400m, flags); - if (ret < 0) { - dev_err(dev, "bootrom init failed: %d ", ret); - goto error_bootrom_init; - } - } - flags |= i2400m_bri_mac_reinit; - - /* - * initialize the download, push the bytes to the device and - * then jump to the new firmware. note @ret is passed with the - * offset of the jump instruction to _dnload_finalize() - * - * note we need to use the bcf header in the firmware image - * that matches the barker that the device sent when it - * rebooted, so it has to be passed along. 
- */ - ret = -ebadf; - bcf_hdr = i2400m_bcf_hdr_find(i2400m); - if (bcf_hdr == null) - goto error_bcf_hdr_find; - - ret = i2400m_dnload_init(i2400m, bcf_hdr); - if (ret == -erestartsys) - goto error_dev_rebooted; - if (ret < 0) - goto error_dnload_init; - - /* - * bcf_size refers to one header size plus the fw sections size - * indicated by the header,ie. if there are other extended headers - * at the tail, they are not counted - */ - bcf_size = sizeof(u32) * le32_to_cpu(bcf_hdr->size); - ret = i2400m_dnload_bcf(i2400m, bcf, bcf_size); - if (ret == -erestartsys) - goto error_dev_rebooted; - if (ret < 0) { - dev_err(dev, "fw %s: download failed: %d ", - i2400m->fw_name, ret); - goto error_dnload_bcf; - } - - ret = i2400m_dnload_finalize(i2400m, bcf_hdr, bcf, ret); - if (ret == -erestartsys) - goto error_dev_rebooted; - if (ret < 0) { - dev_err(dev, "fw %s: " - "download finalization failed: %d ", - i2400m->fw_name, ret); - goto error_dnload_finalize; - } - - d_printf(2, dev, "fw %s successfully uploaded ", - i2400m->fw_name); - i2400m->boot_mode = 0; - wmb(); /* make sure i2400m_msg_to_dev() sees boot_mode */ -error_dnload_finalize: -error_dnload_bcf: -error_dnload_init: -error_bcf_hdr_find: -error_bootrom_init: -error_too_many_reboots: - d_fnend(5, dev, "(i2400m %p bcf %p size %zu) = %d ", - i2400m, bcf, fw_size, ret); - return ret; - -error_dev_rebooted: - dev_err(dev, "device rebooted, %d tries left ", count); - /* we got the notification already, no need to wait for it again */ - flags |= i2400m_bri_soft; - goto hw_reboot; -} - -static -int i2400m_fw_bootstrap(struct i2400m *i2400m, const struct firmware *fw, - enum i2400m_bri flags) -{ - int ret; - struct device *dev = i2400m_dev(i2400m); - const struct i2400m_bcf_hdr *bcf; /* firmware data */ - - d_fnstart(5, dev, "(i2400m %p) ", i2400m); - bcf = (void *) fw->data; - ret = i2400m_fw_check(i2400m, bcf, fw->size); - if (ret >= 0) - ret = i2400m_fw_dnload(i2400m, bcf, fw->size, flags); - if (ret < 0) - 
dev_err(dev, "%s: cannot use: %d, skipping ", - i2400m->fw_name, ret); - kfree(i2400m->fw_hdrs); - i2400m->fw_hdrs = null; - d_fnend(5, dev, "(i2400m %p) = %d ", i2400m, ret); - return ret; -} - - -/* refcounted container for firmware data */ -struct i2400m_fw { - struct kref kref; - const struct firmware *fw; -}; - - -static -void i2400m_fw_destroy(struct kref *kref) -{ - struct i2400m_fw *i2400m_fw = - container_of(kref, struct i2400m_fw, kref); - release_firmware(i2400m_fw->fw); - kfree(i2400m_fw); -} - - -static -struct i2400m_fw *i2400m_fw_get(struct i2400m_fw *i2400m_fw) -{ - if (i2400m_fw != null && i2400m_fw != (void *) ~0) - kref_get(&i2400m_fw->kref); - return i2400m_fw; -} - - -static -void i2400m_fw_put(struct i2400m_fw *i2400m_fw) -{ - kref_put(&i2400m_fw->kref, i2400m_fw_destroy); -} - - -/** - * i2400m_dev_bootstrap - bring the device to a known state and upload firmware - * - * @i2400m: device descriptor - * @flags: - * i2400m_bri_soft: a reboot barker has been seen - * already, so don't wait for it. - * - * i2400m_bri_no_reboot: don't send a reboot command, but wait - * for a reboot barker notification. this is a one shot; if - * the state machine needs to send a reboot command it will. - * - * returns: >= 0 if ok, < 0 errno code on error. - * - * this sets up the firmware upload environment, loads the firmware - * file from disk, verifies and then calls the firmware upload process - * per se. - * - * can be called either from probe, or after a warm reset. can not be - * called from within an interrupt. all the flow in this code is - * single-threade; all i/os are synchronous. 
- */ -int i2400m_dev_bootstrap(struct i2400m *i2400m, enum i2400m_bri flags) -{ - int ret, itr; - struct device *dev = i2400m_dev(i2400m); - struct i2400m_fw *i2400m_fw; - const struct firmware *fw; - const char *fw_name; - - d_fnstart(5, dev, "(i2400m %p) ", i2400m); - - ret = -enodev; - spin_lock(&i2400m->rx_lock); - i2400m_fw = i2400m_fw_get(i2400m->fw_cached); - spin_unlock(&i2400m->rx_lock); - if (i2400m_fw == (void *) ~0) { - dev_err(dev, "can't load firmware now!"); - goto out; - } else if (i2400m_fw != null) { - dev_info(dev, "firmware %s: loading from cache ", - i2400m->fw_name); - ret = i2400m_fw_bootstrap(i2400m, i2400m_fw->fw, flags); - i2400m_fw_put(i2400m_fw); - goto out; - } - - /* load firmware files to memory. */ - for (itr = 0, ret = -enoent; ; itr++) { - fw_name = i2400m->bus_fw_names[itr]; - if (fw_name == null) { - dev_err(dev, "could not find a usable firmware image "); - break; - } - d_printf(1, dev, "trying firmware %s (%d) ", fw_name, itr); - ret = request_firmware(&fw, fw_name, dev); - if (ret < 0) { - dev_err(dev, "fw %s: cannot load file: %d ", - fw_name, ret); - continue; - } - i2400m->fw_name = fw_name; - ret = i2400m_fw_bootstrap(i2400m, fw, flags); - release_firmware(fw); - if (ret >= 0) /* firmware loaded successfully */ - break; - i2400m->fw_name = null; - } -out: - d_fnend(5, dev, "(i2400m %p) = %d ", i2400m, ret); - return ret; -} -export_symbol_gpl(i2400m_dev_bootstrap); - - -void i2400m_fw_cache(struct i2400m *i2400m) -{ - int result; - struct i2400m_fw *i2400m_fw; - struct device *dev = i2400m_dev(i2400m); - - /* if there is anything there, free it -- now, this'd be weird */ - spin_lock(&i2400m->rx_lock); - i2400m_fw = i2400m->fw_cached; - spin_unlock(&i2400m->rx_lock); - if (i2400m_fw != null && i2400m_fw != (void *) ~0) { - i2400m_fw_put(i2400m_fw); - warn(1, "%s:%u: still cached fw still present? 
", - __func__, __line__); - } - - if (i2400m->fw_name == null) { - dev_err(dev, "firmware n/a: can't cache "); - i2400m_fw = (void *) ~0; - goto out; - } - - i2400m_fw = kzalloc(sizeof(*i2400m_fw), gfp_atomic); - if (i2400m_fw == null) - goto out; - kref_init(&i2400m_fw->kref); - result = request_firmware(&i2400m_fw->fw, i2400m->fw_name, dev); - if (result < 0) { - dev_err(dev, "firmware %s: failed to cache: %d ", - i2400m->fw_name, result); - kfree(i2400m_fw); - i2400m_fw = (void *) ~0; - } else - dev_info(dev, "firmware %s: cached ", i2400m->fw_name); -out: - spin_lock(&i2400m->rx_lock); - i2400m->fw_cached = i2400m_fw; - spin_unlock(&i2400m->rx_lock); -} - - -void i2400m_fw_uncache(struct i2400m *i2400m) -{ - struct i2400m_fw *i2400m_fw; - - spin_lock(&i2400m->rx_lock); - i2400m_fw = i2400m->fw_cached; - i2400m->fw_cached = null; - spin_unlock(&i2400m->rx_lock); - - if (i2400m_fw != null && i2400m_fw != (void *) ~0) - i2400m_fw_put(i2400m_fw); -} - diff --git a/drivers/staging/wimax/i2400m/i2400m-usb.h b/drivers/staging/wimax/i2400m/i2400m-usb.h --- a/drivers/staging/wimax/i2400m/i2400m-usb.h +++ /dev/null -/* - * intel wireless wimax connection 2400m - * usb-specific i2400m driver definitions - * - * - * copyright (c) 2007-2008 intel corporation. all rights reserved. - * - * redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. 
- * * neither the name of intel corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * this software is provided by the copyright holders and contributors - * "as is" and any express or implied warranties, including, but not - * limited to, the implied warranties of merchantability and fitness for - * a particular purpose are disclaimed. in no event shall the copyright - * owner or contributors be liable for any direct, indirect, incidental, - * special, exemplary, or consequential damages (including, but not - * limited to, procurement of substitute goods or services; loss of use, - * data, or profits; or business interruption) however caused and on any - * theory of liability, whether in contract, strict liability, or tort - * (including negligence or otherwise) arising in any way out of the use - * of this software, even if advised of the possibility of such damage. - * - * - * intel corporation <linux-wimax@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - * yanir lubetkin <yanirx.lubetkin@intel.com> - * - initial implementation - * - * - * this driver implements the bus-specific part of the i2400m for - * usb. check i2400m.h for a generic driver description. - * - * architecture - * - * this driver listens to notifications sent from the notification - * endpoint (in usb-notif.c); when data is ready to read, the code in - * there schedules a read from the device (usb-rx.c) and then passes - * the data to the generic rx code (rx.c). - * - * when the generic driver needs to send data (network or control), it - * queues up in the tx fifo (tx.c) and that will notify the driver - * through the i2400m->bus_tx_kick() callback - * (usb-tx.c:i2400mu_bus_tx_kick) which will send the items in the - * fifo queue. 
- *
- * This driver, as well, implements the USB-specific ops for the generic
- * driver to be able to setup/teardown communication with the device
- * [i2400m_bus_dev_start() and i2400m_bus_dev_stop()], resetting the
- * device [i2400m_bus_reset()] and performing firmware upload
- * [i2400m_bus_bm_cmd() and i2400_bus_bm_wait_for_ack()].
- */
-
-#ifndef __I2400M_USB_H__
-#define __I2400M_USB_H__
-
-#include "i2400m.h"
-#include <linux/kthread.h>
-
-
-/*
- * Error density count: cheapo error density (over time) counter
- *
- * Originally by Reinette Chatre <reinette.chatre@intel.com>
- *
- * Embed a 'struct edc' somewhere. Each time there is a soft or
- * retryable error, call edc_inc() and check if the error top
- * watermark has been reached.
- */
-enum {
-	EDC_MAX_ERRORS = 10,
-	EDC_ERROR_TIMEFRAME = HZ,
-};
-
-/* error density counter */
-struct edc {
-	unsigned long timestart;
-	u16 errorcount;
-};
-
-struct i2400m_endpoint_cfg {
-	unsigned char bulk_out;
-	unsigned char notification;
-	unsigned char reset_cold;
-	unsigned char bulk_in;
-};
-
-static inline void edc_init(struct edc *edc)
-{
-	edc->timestart = jiffies;
-}
-
-/**
- * edc_inc - report a soft error and check if we are over the watermark
- *
- * @edc: pointer to error density counter.
- * @max_err: maximum number of errors we can accept over the timeframe
- * @timeframe: length of the timeframe (in jiffies).
- *
- * Returns: 1 if the maximum number of acceptable errors per timeframe
- *     has been exceeded, 0 otherwise.
- *
- * This is a way to determine if the number of acceptable errors per time
- * period has been exceeded. It is not accurate as there are cases in which
- * this scheme will not work, for example if there are periodic occurrences
- * of errors that straddle updates to the start time. This scheme is
- * sufficient for our usage.
- *
- * To use, embed a 'struct edc' somewhere, initialize it with
- * edc_init() and when an error hits:
- *
- *	if (do_something_fails_with_a_soft_error) {
- *		if (edc_inc(&my->edc, MAX_ERRORS, MAX_TIMEFRAME))
- *			Ops, hard error, do something about it
- *		else
- *			Retry or ignore, depending on whatever
- *	}
- */
-static inline int edc_inc(struct edc *edc, u16 max_err, u16 timeframe)
-{
-	unsigned long now;
-
-	now = jiffies;
-	if (time_after(now, edc->timestart + timeframe)) {
-		edc->errorcount = 1;
-		edc->timestart = now;
-	} else if (++edc->errorcount > max_err) {
-		edc->errorcount = 0;
-		edc->timestart = now;
-		return 1;
-	}
-	return 0;
-}
-
-/* Host-Device interface for USB */
-enum {
-	I2400M_USB_BOOT_RETRIES = 3,
-	I2400MU_MAX_NOTIFICATION_LEN = 256,
-	I2400MU_BLK_SIZE = 16,
-	I2400MU_PL_SIZE_MAX = 0x3EFF,
-
-	/* Device IDs */
-	USB_DEVICE_ID_I6050 = 0x0186,
-	USB_DEVICE_ID_I6050_2 = 0x0188,
-	USB_DEVICE_ID_I6150 = 0x07d6,
-	USB_DEVICE_ID_I6150_2 = 0x07d7,
-	USB_DEVICE_ID_I6150_3 = 0x07d9,
-	USB_DEVICE_ID_I6250 = 0x0187,
-};
-
-
-/**
- * struct i2400mu - descriptor for a USB connected i2400m
- *
- * @i2400m: bus-generic i2400m implementation; has to be first (see
- *     its documentation in i2400m.h).
- *
- * @usb_dev: pointer to our USB device
- *
- * @usb_iface: pointer to our USB interface
- *
- * @urb_edc: error density counter; used to keep a density-on-time tab
- *     on how many soft (retryable or ignorable) errors we get. If we
- *     go over the threshold, we consider the bus transport is failing
- *     too much and reset.
- *
- * @notif_urb: URB for receiving notifications from the device.
- *
- * @tx_kthread: thread we use for data TX. We use a thread because in
- *     order to do deep power saving and put the device to sleep, we
- *     need to call usb_autopm_*() [blocking functions].
- *
- * @tx_wq: waitqueue for the TX kthread to sleep when there is no data
- *     to be sent; when more data is available, it is woken up by
- *     i2400mu_bus_tx_kick().
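The edc ("error density counter") shown above can be exercised outside the kernel by passing the time source in explicitly instead of reading `jiffies`. A minimal sketch with the same semantics as `edc_inc()`: more than `max_err` soft errors inside one timeframe trips the counter, and an expired window resets it:

```c
#include <assert.h>

/* Standalone error density counter in the style of the edc above;
 * 'now' replaces jiffies so the logic can be tested deterministically. */
struct edc {
	unsigned long timestart;
	unsigned short errorcount;
};

static void edc_init(struct edc *edc, unsigned long now)
{
	edc->timestart = now;
	edc->errorcount = 0;
}

/* Returns 1 when more than max_err errors land inside 'timeframe'
 * ticks; restarts the window when it expires. */
static int edc_inc(struct edc *edc, unsigned short max_err,
		   unsigned long timeframe, unsigned long now)
{
	if (now > edc->timestart + timeframe) {
		edc->errorcount = 1;	/* window expired: start over */
		edc->timestart = now;
	} else if (++edc->errorcount > max_err) {
		edc->errorcount = 0;	/* tripped: reset and report */
		edc->timestart = now;
		return 1;
	}
	return 0;
}
```

As the original comment admits, errors straddling a window boundary can evade the threshold; the scheme trades accuracy for cheapness.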
- * - * @rx_kthread: thread we use for data rx. we use a thread because in - * order to do deep power saving and put the device to sleep, we - * need to call usb_autopm_*() [blocking functions]. - * - * @rx_wq: waitqueue for the rx kthread to sleep when there is no data - * to receive. when data is available, it is woken up by - * usb-notif.c:i2400mu_notification_grok(). - * - * @rx_pending_count: number of rx-data-ready notifications that were - * still not handled by the rx kthread. - * - * @rx_size: current rx buffer size that is being used. - * - * @rx_size_acc: accumulator of the sizes of the previous read - * transactions. - * - * @rx_size_cnt: number of read transactions accumulated in - * @rx_size_acc. - * - * @do_autopm: disable(0)/enable(>0) calling the - * usb_autopm_get/put_interface() barriers when executing - * commands. see doc in i2400mu_suspend() for more information. - * - * @rx_size_auto_shrink: if true, the rx_size is shrunk - * automatically based on the average size of the received - * transactions. this allows the receive code to allocate smaller - * chunks of memory and thus reduce pressure on the memory - * allocator by not wasting so much space. by default it is - * enabled. - * - * @debugfs_dentry: hookup for debugfs files. - * these have to be in a separate directory, a child of - * (wimax_dev->debugfs_dentry) so they can be removed when the - * module unloads, as we don't keep each dentry. - */ -struct i2400mu { - struct i2400m i2400m; /* first! 
see doc */ - - struct usb_device *usb_dev; - struct usb_interface *usb_iface; - struct edc urb_edc; /* error density counter */ - struct i2400m_endpoint_cfg endpoint_cfg; - - struct urb *notif_urb; - struct task_struct *tx_kthread; - wait_queue_head_t tx_wq; - - struct task_struct *rx_kthread; - wait_queue_head_t rx_wq; - atomic_t rx_pending_count; - size_t rx_size, rx_size_acc, rx_size_cnt; - atomic_t do_autopm; - u8 rx_size_auto_shrink; - - struct dentry *debugfs_dentry; - unsigned i6050:1; /* 1 if this is a 6050 based sku */ -}; - - -static inline -void i2400mu_init(struct i2400mu *i2400mu) -{ - i2400m_init(&i2400mu->i2400m); - edc_init(&i2400mu->urb_edc); - init_waitqueue_head(&i2400mu->tx_wq); - atomic_set(&i2400mu->rx_pending_count, 0); - init_waitqueue_head(&i2400mu->rx_wq); - i2400mu->rx_size = page_size - sizeof(struct skb_shared_info); - atomic_set(&i2400mu->do_autopm, 1); - i2400mu->rx_size_auto_shrink = 1; -} - -int i2400mu_notification_setup(struct i2400mu *); -void i2400mu_notification_release(struct i2400mu *); - -int i2400mu_rx_setup(struct i2400mu *); -void i2400mu_rx_release(struct i2400mu *); -void i2400mu_rx_kick(struct i2400mu *); - -int i2400mu_tx_setup(struct i2400mu *); -void i2400mu_tx_release(struct i2400mu *); -void i2400mu_bus_tx_kick(struct i2400m *); - -ssize_t i2400mu_bus_bm_cmd_send(struct i2400m *, - const struct i2400m_bootrom_header *, size_t, - int); -ssize_t i2400mu_bus_bm_wait_for_ack(struct i2400m *, - struct i2400m_bootrom_header *, size_t); -#endif /* #ifndef __i2400m_usb_h__ */ diff --git a/drivers/staging/wimax/i2400m/i2400m.h b/drivers/staging/wimax/i2400m/i2400m.h --- a/drivers/staging/wimax/i2400m/i2400m.h +++ /dev/null -/* - * intel wireless wimax connection 2400m - * declarations for bus-generic internal apis - * - * - * copyright (c) 2007-2008 intel corporation. all rights reserved. 
- * - * redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * neither the name of intel corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * this software is provided by the copyright holders and contributors - * "as is" and any express or implied warranties, including, but not - * limited to, the implied warranties of merchantability and fitness for - * a particular purpose are disclaimed. in no event shall the copyright - * owner or contributors be liable for any direct, indirect, incidental, - * special, exemplary, or consequential damages (including, but not - * limited to, procurement of substitute goods or services; loss of use, - * data, or profits; or business interruption) however caused and on any - * theory of liability, whether in contract, strict liability, or tort - * (including negligence or otherwise) arising in any way out of the use - * of this software, even if advised of the possibility of such damage. 
- * - * - * intel corporation <linux-wimax@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - * yanir lubetkin <yanirx.lubetkin@intel.com> - * - initial implementation - * - * - * general driver architecture - * - * the i2400m driver is split in the following two major parts: - * - * - bus specific driver - * - bus generic driver (this part) - * - * the bus specific driver sets up stuff specific to the bus the - * device is connected to (usb, pci, tam-tam...non-authoritative - * nor binding list) which is basically the device-model management - * (probe/disconnect, etc), moving data from device to kernel and - * back, doing the power saving details and reseting the device. - * - * for details on each bus-specific driver, see it's include file, - * i2400m-busname.h - * - * the bus-generic functionality break up is: - * - * - firmware upload: fw.c - takes care of uploading firmware to the - * device. bus-specific driver just needs to provides a way to - * execute boot-mode commands and to reset the device. - * - * - rx handling: rx.c - receives data from the bus-specific code and - * feeds it to the network or wimax stack or uses it to modify - * the driver state. bus-specific driver only has to receive - * frames and pass them to this module. - * - * - tx handling: tx.c - manages the tx fifo queue and provides means - * for the bus-specific tx code to pull data from the fifo - * queue. bus-specific code just pulls frames from this module - * to sends them to the device. - * - * - netdev glue: netdev.c - interface with linux networking - * stack. pass around data frames, and configure when the - * device is up and running or shutdown (through ifconfig up / - * down). bus-generic only. - * - * - control ops: control.c - implements various commands for - * controlling the device. bus-generic only. 
- * - * - device model glue: driver.c - implements helpers for the - * device-model glue done by the bus-specific layer - * (setup/release the driver resources), turning the device on - * and off, handling the device reboots/resets and a few simple - * wimax stack ops. - * - * code is also broken up in linux-glue / device-glue. - * - * linux glue contains functions that deal mostly with gluing with the - * rest of the linux kernel. - * - * device-glue are functions that deal mostly with the way the device - * does things and talk the device's language. - * - * device-glue code is licensed bsd so other open source oses can take - * it to implement their drivers. - * - * - * apis and header files - * - * this bus generic code exports three apis: - * - * - hdi (host-device interface) definitions common to all busses - * (include/linux/wimax/i2400m.h); these can be also used by user - * space code. - * - internal api for the bus-generic code - * - external api for the bus-specific drivers - * - * - * life cycle: - * - * when the bus-specific driver probes, it allocates a network device - * with enough space for it's data structue, that must contain a - * &struct i2400m at the top. - * - * on probe, it needs to fill the i2400m members marked as [fill], as - * well as i2400m->wimax_dev.net_dev and call i2400m_setup(). the - * i2400m driver will only register with the wimax and network stacks; - * the only access done to the device is to read the mac address so we - * can register a network device. 
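The bus-generic/bus-specific split described above comes down to the generic layer touching hardware only through callbacks the bus driver fills in (the `[fill]` members such as `bus_setup`, `bus_dev_start`). A sketch of that ops-table pattern; the names and signatures here are illustrative, not the driver's real ones:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical ops table in the spirit of the i2400m [fill] callbacks. */
struct bus_ops {
	int (*setup)(void *priv);
	int (*dev_start)(void *priv);
	void (*dev_stop)(void *priv);
	void (*release)(void *priv);
};

/* Generic probe path, mirroring the bus_probe() -> i2400m_setup() ->
 * i2400m_dev_start() call flow: unwind via release() on failure. */
static int generic_setup(const struct bus_ops *ops, void *priv)
{
	int ret = ops->setup(priv);

	if (ret < 0)
		return ret;
	ret = ops->dev_start(priv);
	if (ret < 0)
		ops->release(priv);	/* unwind in reverse order */
	return ret;
}

/* Example bus implementation that just logs the call order. */
static char call_log[8];
static int call_n;

static int demo_setup(void *p)      { (void)p; call_log[call_n++] = 's'; return 0; }
static int demo_start(void *p)      { (void)p; call_log[call_n++] = 'd'; return 0; }
static void demo_stop(void *p)      { (void)p; }
static void demo_release(void *p)   { (void)p; call_log[call_n++] = 'r'; }

static const struct bus_ops demo_ops = {
	demo_setup, demo_start, demo_stop, demo_release,
};
```

A USB or SDIO bus driver would each supply its own table; the generic code never needs to know which bus it is running on.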
- * - * the high-level call flow is: - * - * bus_probe() - * i2400m_setup() - * i2400m->bus_setup() - * boot rom initialization / read mac addr - * network / wimax stacks registration - * i2400m_dev_start() - * i2400m->bus_dev_start() - * i2400m_dev_initialize() - * - * the reverse applies for a disconnect() call: - * - * bus_disconnect() - * i2400m_release() - * i2400m_dev_stop() - * i2400m_dev_shutdown() - * i2400m->bus_dev_stop() - * network / wimax stack unregistration - * i2400m->bus_release() - * - * at this point, control and data communications are possible. - * - * while the device is up, it might reset. the bus-specific driver has - * to catch that situation and call i2400m_dev_reset_handle() to deal - * with it (reset the internal driver structures and go back to square - * one). - */ - -#ifndef __i2400m_h__ -#define __i2400m_h__ - -#include <linux/usb.h> -#include <linux/netdevice.h> -#include <linux/completion.h> -#include <linux/rwsem.h> -#include <linux/atomic.h> -#include "../net-wimax.h" -#include "linux-wimax-i2400m.h" -#include <asm/byteorder.h> - -enum { -/* netdev interface */ - /* - * out of nwg spec (r1_v1.2.2), 3.3.3 asn bearer plane mtu size - * - * the mtu is 1400 or less - */ - i2400m_max_mtu = 1400, -}; - -/* misc constants */ -enum { - /* size of the boot mode command buffer */ - i2400m_bm_cmd_buf_size = 16 * 1024, - i2400m_bm_ack_buf_size = 256, -}; - -enum { - /* maximum number of bus reset can be retried */ - i2400m_bus_reset_retries = 3, -}; - -/** - * struct i2400m_poke_table - hardware poke table for the intel 2400m - * - * this structure will be used to create a device specific poke table - * to put the device in a consistent state at boot time. 
- * - * @address: the device address to poke - * - * @data: the data value to poke to the device address - * - */ -struct i2400m_poke_table{ - __le32 address; - __le32 data; -}; - -#define i2400m_fw_poke(a, d) { \ - .address = cpu_to_le32(a), \ - .data = cpu_to_le32(d) \ -} - - -/** - * i2400m_reset_type - methods to reset a device - * - * @i2400m_rt_warm: reset without device disconnection, device handles - * are kept valid but state is back to power on, with firmware - * re-uploaded. - * @i2400m_rt_cold: tell the device to disconnect itself from the bus - * and reconnect. renders all device handles invalid. - * @i2400m_rt_bus: tells the bus to reset the device; last measure - * used when both types above don't work. - */ -enum i2400m_reset_type { - i2400m_rt_warm, /* first measure */ - i2400m_rt_cold, /* second measure */ - i2400m_rt_bus, /* call in artillery */ -}; - -struct i2400m_reset_ctx; -struct i2400m_roq; -struct i2400m_barker_db; - -/** - * struct i2400m - descriptor for an intel 2400m - * - * members marked with [fill] must be filled out/initialized before - * calling i2400m_setup(). - * - * note the @bus_setup/@bus_release, @bus_dev_start/@bus_dev_release - * call pairs are very much doing almost the same, and depending on - * the underlying bus, some stuff has to be put in one or the - * other. the idea of setup/release is that they setup the minimal - * amount needed for loading firmware, where us dev_start/stop setup - * the rest needed to do full data/control traffic. - * - * @bus_tx_block_size: [fill] usb imposes a 16 block size, but other - * busses will differ. so we have a tx_blk_size variable that the - * bus layer sets to tell the engine how much of that we need. - * - * @bus_tx_room_min: [fill] minimum room required while allocating - * tx queue's buffer space for message header. usb requires - * 16 bytes. refer to bus specific driver code for details. - * - * @bus_pl_size_max: [fill] maximum payload size. 
- * - * @bus_setup: [optional fill] function called by the bus-generic code - * [i2400m_setup()] to setup the basic bus-specific communications - * to the device needed to load firmware. see life cycle above. - * - * note: doesn't need to upload the firmware, as that is taken - * care of by the bus-generic code. - * - * @bus_release: [optional fill] function called by the bus-generic - * code [i2400m_release()] to shutdown the basic bus-specific - * communications to the device needed to load firmware. see - * life cycle above. - * - * this function does not need to reset the device, just tear down - * all the host resources created to handle communication with - * the device. - * - * @bus_dev_start: [optional fill] function called by the bus-generic - * code [i2400m_dev_start()] to do things needed to start the - * device. see life cycle above. - * - * note: doesn't need to upload the firmware, as that is taken - * care of by the bus-generic code. - * - * @bus_dev_stop: [optional fill] function called by the bus-generic - * code [i2400m_dev_stop()] to do things needed for stopping the - * device. see life cycle above. - * - * this function does not need to reset the device, just tear down - * all the host resources created to handle communication with - * the device. - * - * @bus_tx_kick: [fill] function called by the bus-generic code to let - * the bus-specific code know that there is data available in the - * tx fifo for transmission to the device. - * - * this function cannot sleep. - * - * @bus_reset: [fill] function called by the bus-generic code to reset - * the device in various ways. doesn't need to wait for the - * reset to finish. - * - * if warm or cold reset fail, this function is expected to do a - * bus-specific reset (eg: usb reset) to get the device to a - * working state (even if it implies device disconnection). - * - * note the warm reset is used by the firmware uploader to - * reinitialize the device. 
- * - * important: this is called very early in the device setup - * process, so it cannot rely on common infrastructure being laid - * out. - * - * important: don't call reset on rt_bus with i2400m->init_mutex - * held, as the .pre/.post reset handlers will deadlock. - * - * @bus_bm_retries: [fill] how many times shall a firmware upload / - * device initialization be retried? different models of the same - * device might need different values, hence it is set by the - * bus-specific driver. note this value is used in two places, - * i2400m_fw_dnload() and __i2400m_dev_start(); they won't become - * multiplicative (__i2400m_dev_start() calling n times - * i2400m_fw_dnload() and this trying n times to download the - * firmware), as if __i2400m_dev_start() only retries if the - * firmware crashed while initializing the device (not in a - * general case). - * - * @bus_bm_cmd_send: [fill] function called to send a boot-mode - * command. flags are defined in 'enum i2400m_bm_cmd_flags'. this - * is synchronous and has to return 0 if ok or < 0 errno code in - * any error condition. - * - * @bus_bm_wait_for_ack: [fill] function called to wait for a - * boot-mode notification (that can be a response to a previously - * issued command or an asynchronous one). will read until all the - * indicated size is read or timeout. reading more or less data - * than asked for is an error condition. return 0 if ok, < 0 errno - * code on error. - * - * the caller to this function will check if the response is a - * barker that indicates the device going into reset mode. - * - * @bus_fw_names: [fill] a null-terminated array with the names of the - * firmware images to try loading. this is made a list so we can - * support backward compatibility of firmware releases (eg: if we - * can't find the default v1.4, we try v1.3). in general, the name - * should be i2400m-fw-x-version.sbcf, where x is the bus name. - * the list is tried in order and the first one that loads is - * used. 
the fw loader will set i2400m->fw_name to point to the - * active firmware image. - * - * @bus_bm_mac_addr_impaired: [fill] set to true if the device's mac - * address provided in boot mode is kind of broken and needs to - * be re-read later on. - * - * @bus_bm_pokes_table: [fill/optional] a table of device addresses - * and values that will be poked at device init time to move the - * device to the correct state for the type of boot/firmware being - * used. this table must be terminated with (0x000000, - * 0x00000000) or bad things will happen. - * - * - * @wimax_dev: wimax generic device for linkage into the kernel wimax - * stack. due to the way a net_device is allocated, we need to - * force this to be the first field so that we can get from - * netdev_priv() the right pointer. - * - * @updown: the device is up and ready for transmitting control and - * data packets. this implies @ready (communication infrastructure - * with the device is ready) and the device's firmware has been - * loaded and the device initialized. - * - * write to it only inside a i2400m->init_mutex protected area - * followed with a wmb(); rmb() before accesing (unless locked - * inside i2400m->init_mutex). read access can be loose like that - * [just using rmb()] because the paths that use this also do - * other error checks later on. - * - * @ready: communication infrastructure with the device is ready, data - * frames can start to be passed around (this is lighter than - * using the wimax state for certain hot paths). - * - * write to it only inside a i2400m->init_mutex protected area - * followed with a wmb(); rmb() before accesing (unless locked - * inside i2400m->init_mutex). read access can be loose like that - * [just using rmb()] because the paths that use this also do - * other error checks later on. - * - * @rx_reorder: 1 if rx reordering is enabled; this can only be - * set at probe time. 
- * - * @state: device's state (as reported by it) - * - * @state_wq: waitqueue that is woken up whenever the state changes - * - * @tx_lock: spinlock to protect tx members - * - * @tx_buf: fifo buffer for tx; we queue data here - * - * @tx_in: fifo index for incoming data. note this doesn't wrap around - * and it is always greater than @tx_out. - * - * @tx_out: fifo index for outgoing data - * - * @tx_msg: current tx message that is active in the fifo for - * appending payloads. - * - * @tx_sequence: current sequence number for tx messages from the - * device to the host. - * - * @tx_msg_size: size of the current message being transmitted by the - * bus-specific code. - * - * @tx_pl_num: total number of payloads sent - * - * @tx_pl_max: maximum number of payloads sent in a tx message - * - * @tx_pl_min: minimum number of payloads sent in a tx message - * - * @tx_num: number of tx messages sent - * - * @tx_size_acc: number of bytes in all tx messages sent - * (this is different to net_dev's statistics as it also counts - * control messages). - * - * @tx_size_min: smallest tx message sent. - * - * @tx_size_max: biggest tx message sent. - * - * @rx_lock: spinlock to protect rx members and rx_roq_refcount. - * - * @rx_pl_num: total number of payloads received - * - * @rx_pl_max: maximum number of payloads received in a rx message - * - * @rx_pl_min: minimum number of payloads received in a rx message - * - * @rx_num: number of rx messages received - * - * @rx_size_acc: number of bytes in all rx messages received - * (this is different to net_dev's statistics as it also counts - * control messages). - * - * @rx_size_min: smallest rx message received. - * - * @rx_size_max: biggest rx message received. - * - * @rx_roq: rx reorder queues. (fw >= v1.4) when packets are received - * out of order, the device will ask the driver to hold certain - * packets until the ones that are received out of order can be - * delivered. then the driver can release them to the host. 
see - * drivers/net/i2400m/rx.c for details. - * - * @rx_roq_refcount: refcount rx_roq. this refcounts any access to - * rx_roq thus preventing rx_roq being destroyed when rx_roq - * is being accessed. rx_roq_refcount is protected by rx_lock. - * - * @rx_reports: reports received from the device that couldn't be - * processed because the driver wasn't still ready; when ready, - * they are pulled from here and chewed. - * - * @rx_reports_ws: work struct used to kick a scan of the rx reports - * list and to process each. - * - * @src_mac_addr: mac address used to make ethernet packets be coming - * from. this is generated at i2400m_setup() time and used during - * the life cycle of the instance. see i2400m_fake_eth_header(). - * - * @init_mutex: mutex used for serializing the device bringup - * sequence; this way if the device reboots in the middle, we - * don't try to do a bringup again while we are tearing down the - * one that failed. - * - * can't reuse @msg_mutex because from within the bringup sequence - * we need to send messages to the device and thus use @msg_mutex. - * - * @msg_mutex: mutex used to send control commands to the device (we - * only allow one at a time, per host-device interface design). - * - * @msg_completion: used to wait for an ack to a control command sent - * to the device. - * - * @ack_skb: used to store the actual ack to a control command if the - * reception of the command was successful. otherwise, a err_ptr() - * errno code that indicates what failed with the ack reception. - * - * only valid after @msg_completion is woken up. only updateable - * if @msg_completion is armed. only touched by - * i2400m_msg_to_dev(). - * - * protected by @rx_lock. in theory the command execution flow is - * sequential, but in case the device sends an out-of-phase or - * very delayed response, we need to avoid it trampling current - * execution. - * - * @bm_cmd_buf: boot mode command buffer for composing firmware upload - * commands. 
- * - * usb can't r/w to stack, vmalloc, etc...as well, we end up - * having to alloc/free a lot to compose commands, so we use these - * for staging and not having to realloc all the time. - * - * this assumes the code always runs serialized. only one thread - * can call i2400m_bm_cmd() at the same time. - * - * @bm_ack_buf: boot mode acknowledge buffer for staging reception of - * responses to commands. - * - * see @bm_cmd_buf. - * - * @work_queue: work queue for processing device reports. this - * workqueue cannot be used for processing tx or rx to the device, - * as from it we'll process device reports, which might require - * further communication with the device. - * - * @debugfs_dentry: hookup for debugfs files. - * these have to be in a separate directory, a child of - * (wimax_dev->debugfs_dentry) so they can be removed when the - * module unloads, as we don't keep each dentry. - * - * @fw_name: name of the firmware image that is currently being used. - * - * @fw_version: version of the firmware interface, major.minor, - * encoded in the high word and low word (major << 16 | minor). - * - * @fw_hdrs: null terminated array of pointers to the firmware - * headers. this is only available during firmware load time. - * - * @fw_cached: used to cache firmware when the system goes to - * suspend/standby/hibernation (as on resume we can't read it). if - * null, no firmware was cached, read it. if ~0, you can't read - * any firmware files (the system still didn't come out of suspend - * and failed to cache one), so abort; otherwise, a valid cached - * firmware to be used. access to this variable is protected by - * the spinlock i2400m->rx_lock. - * - * @barker: barker type that the device uses; this is initialized by - * i2400m_is_boot_barker() the first time it is called. then it - * won't change during the life cycle of the device and every time - * a boot barker is received, it is just verified for it being the - * same. 
- * - * @pm_notifier: used to register for pm events - * - * @bus_reset_retries: counter for the number of bus resets attempted for - * this boot. it's not for tracking the number of bus resets during - * the whole driver life cycle (from insmod to rmmod) but for the - * number of dev_start() executed until dev_start() returns a success - * (ie: a good boot means a dev_stop() followed by a successful - * dev_start()). dev_reset_handler() increments this counter whenever - * it is triggering a bus reset. it checks this counter to decide if a - * subsequent bus reset should be retried. dev_reset_handler() retries - * the bus reset until dev_start() succeeds or the counter reaches - * i2400m_bus_reset_retries. the counter is cleared to 0 in - * dev_reset_handle() when dev_start() returns a success, - * ie: a successul boot is completed. - * - * @alive: flag to denote if the device *should* be alive. this flag is - * everything like @updown (see doc for @updown) except reflecting - * the device state *we expect* rather than the actual state as denoted - * by @updown. it is set 1 whenever @updown is set 1 in dev_start(). - * then the device is expected to be alive all the time - * (i2400m->alive remains 1) until the driver is removed. therefore - * all the device reboot events detected can be still handled properly - * by either dev_reset_handle() or .pre_reset/.post_reset as long as - * the driver presents. it is set 0 along with @updown in dev_stop(). - * - * @error_recovery: flag to denote if we are ready to take an error recovery. - * 0 for ready to take an error recovery; 1 for not ready. it is - * initialized to 1 while probe() since we don't tend to take any error - * recovery during probe(). it is decremented by 1 whenever dev_start() - * succeeds to indicate we are ready to take error recovery from now on. - * it is checked every time we wanna schedule an error recovery. 
if an - * error recovery is already in place (error_recovery was set 1), we - * should not schedule another one until the last one is done. - */ -struct i2400m { - struct wimax_dev wimax_dev; /* first! see doc */ - - unsigned updown:1; /* network device is up or down */ - unsigned boot_mode:1; /* is the device in boot mode? */ - unsigned sboot:1; /* signed or unsigned fw boot */ - unsigned ready:1; /* device comm infrastructure ready */ - unsigned rx_reorder:1; /* rx reorder is enabled */ - u8 trace_msg_from_user; /* echo rx msgs to 'trace' pipe */ - /* typed u8 so /sys/kernel/debug/u8 can tweak */ - enum i2400m_system_state state; - wait_queue_head_t state_wq; /* woken up when on state updates */ - - size_t bus_tx_block_size; - size_t bus_tx_room_min; - size_t bus_pl_size_max; - unsigned bus_bm_retries; - - int (*bus_setup)(struct i2400m *); - int (*bus_dev_start)(struct i2400m *); - void (*bus_dev_stop)(struct i2400m *); - void (*bus_release)(struct i2400m *); - void (*bus_tx_kick)(struct i2400m *); - int (*bus_reset)(struct i2400m *, enum i2400m_reset_type); - ssize_t (*bus_bm_cmd_send)(struct i2400m *, - const struct i2400m_bootrom_header *, - size_t, int flags); - ssize_t (*bus_bm_wait_for_ack)(struct i2400m *, - struct i2400m_bootrom_header *, size_t); - const char **bus_fw_names; - unsigned bus_bm_mac_addr_impaired:1; - const struct i2400m_poke_table *bus_bm_pokes_table; - - spinlock_t tx_lock; /* protect tx state */ - void *tx_buf; - size_t tx_in, tx_out; - struct i2400m_msg_hdr *tx_msg; - size_t tx_sequence, tx_msg_size; - /* tx stats */ - unsigned tx_pl_num, tx_pl_max, tx_pl_min, - tx_num, tx_size_acc, tx_size_min, tx_size_max; - - /* rx stuff */ - /* protect rx state and rx_roq_refcount */ - spinlock_t rx_lock; - unsigned rx_pl_num, rx_pl_max, rx_pl_min, - rx_num, rx_size_acc, rx_size_min, rx_size_max; - struct i2400m_roq *rx_roq; /* access is refcounted */ - struct kref rx_roq_refcount; /* refcount access to rx_roq */ - u8 src_mac_addr[eth_hlen]; - 
struct list_head rx_reports; /* under rx_lock! */ - struct work_struct rx_report_ws; - - struct mutex msg_mutex; /* serialize command execution */ - struct completion msg_completion; - struct sk_buff *ack_skb; /* protected by rx_lock */ - - void *bm_ack_buf; /* for receiving acks over usb */ - void *bm_cmd_buf; /* for issuing commands over usb */ - - struct workqueue_struct *work_queue; - - struct mutex init_mutex; /* protect bringup seq */ - struct i2400m_reset_ctx *reset_ctx; /* protected by init_mutex */ - - struct work_struct wake_tx_ws; - struct sk_buff *wake_tx_skb; - - struct work_struct reset_ws; - const char *reset_reason; - - struct work_struct recovery_ws; - - struct dentry *debugfs_dentry; - const char *fw_name; /* name of the current firmware image */ - unsigned long fw_version; /* version of the firmware interface */ - const struct i2400m_bcf_hdr **fw_hdrs; - struct i2400m_fw *fw_cached; /* protected by rx_lock */ - struct i2400m_barker_db *barker; - - struct notifier_block pm_notifier; - - /* counting bus reset retries in this boot */ - atomic_t bus_reset_retries; - - /* if the device is expected to be alive */ - unsigned alive; - - /* 0 if we are ready for error recovery; 1 if not ready */ - atomic_t error_recovery; - -}; - - -/* - * bus-generic internal apis - * ------------------------- - */ - -static inline -struct i2400m *wimax_dev_to_i2400m(struct wimax_dev *wimax_dev) -{ - return container_of(wimax_dev, struct i2400m, wimax_dev); -} - -static inline -struct i2400m *net_dev_to_i2400m(struct net_device *net_dev) -{ - return wimax_dev_to_i2400m(netdev_priv(net_dev)); -} - -/* - * boot mode support - */ - -/** - * i2400m_bm_cmd_flags - flags to i2400m_bm_cmd() - * - * @i2400m_bm_cmd_raw: send the command block as-is, without doing any - * extra processing for adding crc. 
- */ -enum i2400m_bm_cmd_flags { - i2400m_bm_cmd_raw = 1 << 2, -}; - -/** - * i2400m_bri - boot-rom indicators - * - * flags for i2400m_bootrom_init() and i2400m_dev_bootstrap() [which - * are passed from things like i2400m_setup()]. can be combined with - * |. - * - * @i2400m_bri_soft: the device rebooted already and a reboot - * barker received, proceed directly to ack the boot sequence. - * @i2400m_bri_no_reboot: do not reboot the device and proceed - * directly to wait for a reboot barker from the device. - * @i2400m_bri_mac_reinit: we need to reinitialize the boot - * rom after reading the mac address. this is quite a dirty hack, - * if you ask me -- the device requires the bootrom to be - * initialized after reading the mac address. - */ -enum i2400m_bri { - i2400m_bri_soft = 1 << 1, - i2400m_bri_no_reboot = 1 << 2, - i2400m_bri_mac_reinit = 1 << 3, -}; - -void i2400m_bm_cmd_prepare(struct i2400m_bootrom_header *); -int i2400m_dev_bootstrap(struct i2400m *, enum i2400m_bri); -int i2400m_read_mac_addr(struct i2400m *); -int i2400m_bootrom_init(struct i2400m *, enum i2400m_bri); -int i2400m_is_boot_barker(struct i2400m *, const void *, size_t); -static inline -int i2400m_is_d2h_barker(const void *buf) -{ - const __le32 *barker = buf; - return le32_to_cpu(*barker) == i2400m_d2h_msg_barker; -} -void i2400m_unknown_barker(struct i2400m *, const void *, size_t); - -/* make/grok boot-rom header commands */ - -static inline -__le32 i2400m_brh_command(enum i2400m_brh_opcode opcode, unsigned use_checksum, - unsigned direct_access) -{ - return cpu_to_le32( - i2400m_brh_signature - | (direct_access ? i2400m_brh_direct_access : 0) - | i2400m_brh_response_required /* response always required */ - | (use_checksum ? 
i2400m_brh_use_checksum : 0) - | (opcode & i2400m_brh_opcode_mask)); -} - -static inline -void i2400m_brh_set_opcode(struct i2400m_bootrom_header *hdr, - enum i2400m_brh_opcode opcode) -{ - hdr->command = cpu_to_le32( - (le32_to_cpu(hdr->command) & ~i2400m_brh_opcode_mask) - | (opcode & i2400m_brh_opcode_mask)); -} - -static inline -unsigned i2400m_brh_get_opcode(const struct i2400m_bootrom_header *hdr) -{ - return le32_to_cpu(hdr->command) & i2400m_brh_opcode_mask; -} - -static inline -unsigned i2400m_brh_get_response(const struct i2400m_bootrom_header *hdr) -{ - return (le32_to_cpu(hdr->command) & i2400m_brh_response_mask) - >> i2400m_brh_response_shift; -} - -static inline -unsigned i2400m_brh_get_use_checksum(const struct i2400m_bootrom_header *hdr) -{ - return le32_to_cpu(hdr->command) & i2400m_brh_use_checksum; -} - -static inline -unsigned i2400m_brh_get_response_required( - const struct i2400m_bootrom_header *hdr) -{ - return le32_to_cpu(hdr->command) & i2400m_brh_response_required; -} - -static inline -unsigned i2400m_brh_get_direct_access(const struct i2400m_bootrom_header *hdr) -{ - return le32_to_cpu(hdr->command) & i2400m_brh_direct_access; -} - -static inline -unsigned i2400m_brh_get_signature(const struct i2400m_bootrom_header *hdr) -{ - return (le32_to_cpu(hdr->command) & i2400m_brh_signature_mask) - >> i2400m_brh_signature_shift; -} - - -/* - * driver / device setup and internal functions - */ -void i2400m_init(struct i2400m *); -int i2400m_reset(struct i2400m *, enum i2400m_reset_type); -void i2400m_netdev_setup(struct net_device *net_dev); -int i2400m_sysfs_setup(struct device_driver *); -void i2400m_sysfs_release(struct device_driver *); -int i2400m_tx_setup(struct i2400m *); -void i2400m_wake_tx_work(struct work_struct *); -void i2400m_tx_release(struct i2400m *); - -int i2400m_rx_setup(struct i2400m *); -void i2400m_rx_release(struct i2400m *); - -void i2400m_fw_cache(struct i2400m *); -void i2400m_fw_uncache(struct i2400m *); - -void 
i2400m_net_rx(struct i2400m *, struct sk_buff *, unsigned, const void *, - int); -void i2400m_net_erx(struct i2400m *, struct sk_buff *, enum i2400m_cs); -void i2400m_net_wake_stop(struct i2400m *); -enum i2400m_pt; -int i2400m_tx(struct i2400m *, const void *, size_t, enum i2400m_pt); - -#ifdef config_debug_fs -void i2400m_debugfs_add(struct i2400m *); -void i2400m_debugfs_rm(struct i2400m *); -#else -static inline void i2400m_debugfs_add(struct i2400m *i2400m) {} -static inline void i2400m_debugfs_rm(struct i2400m *i2400m) {} -#endif - -/* initialize/shutdown the device */ -int i2400m_dev_initialize(struct i2400m *); -void i2400m_dev_shutdown(struct i2400m *); - -extern struct attribute_group i2400m_dev_attr_group; - - -/* hdi message's payload description handling */ - -static inline -size_t i2400m_pld_size(const struct i2400m_pld *pld) -{ - return i2400m_pld_size_mask & le32_to_cpu(pld->val); -} - -static inline -enum i2400m_pt i2400m_pld_type(const struct i2400m_pld *pld) -{ - return (i2400m_pld_type_mask & le32_to_cpu(pld->val)) - >> i2400m_pld_type_shift; -} - -static inline -void i2400m_pld_set(struct i2400m_pld *pld, size_t size, - enum i2400m_pt type) -{ - pld->val = cpu_to_le32( - ((type << i2400m_pld_type_shift) & i2400m_pld_type_mask) - | (size & i2400m_pld_size_mask)); -} - - -/* - * api for the bus-specific drivers - * -------------------------------- - */ - -static inline -struct i2400m *i2400m_get(struct i2400m *i2400m) -{ - dev_hold(i2400m->wimax_dev.net_dev); - return i2400m; -} - -static inline -void i2400m_put(struct i2400m *i2400m) -{ - dev_put(i2400m->wimax_dev.net_dev); -} - -int i2400m_dev_reset_handle(struct i2400m *, const char *); -int i2400m_pre_reset(struct i2400m *); -int i2400m_post_reset(struct i2400m *); -void i2400m_error_recovery(struct i2400m *); - -/* - * _setup()/_release() are called by the probe/disconnect functions of - * the bus-specific drivers. 
- */ -int i2400m_setup(struct i2400m *, enum i2400m_bri bm_flags); -void i2400m_release(struct i2400m *); - -int i2400m_rx(struct i2400m *, struct sk_buff *); -struct i2400m_msg_hdr *i2400m_tx_msg_get(struct i2400m *, size_t *); -void i2400m_tx_msg_sent(struct i2400m *); - - -/* - * utility functions - */ - -static inline -struct device *i2400m_dev(struct i2400m *i2400m) -{ - return i2400m->wimax_dev.net_dev->dev.parent; -} - -int i2400m_msg_check_status(const struct i2400m_l3l4_hdr *, char *, size_t); -int i2400m_msg_size_check(struct i2400m *, const struct i2400m_l3l4_hdr *, - size_t); -struct sk_buff *i2400m_msg_to_dev(struct i2400m *, const void *, size_t); -void i2400m_msg_to_dev_cancel_wait(struct i2400m *, int); -void i2400m_report_hook(struct i2400m *, const struct i2400m_l3l4_hdr *, - size_t); -void i2400m_report_hook_work(struct work_struct *); -int i2400m_cmd_enter_powersave(struct i2400m *); -int i2400m_cmd_exit_idle(struct i2400m *); -struct sk_buff *i2400m_get_device_info(struct i2400m *); -int i2400m_firmware_check(struct i2400m *); -int i2400m_set_idle_timeout(struct i2400m *, unsigned); - -static inline -struct usb_endpoint_descriptor *usb_get_epd(struct usb_interface *iface, int ep) -{ - return &iface->cur_altsetting->endpoint[ep].desc; -} - -int i2400m_op_rfkill_sw_toggle(struct wimax_dev *, enum wimax_rf_state); -void i2400m_report_tlv_rf_switches_status(struct i2400m *, - const struct i2400m_tlv_rf_switches_status *); - -/* - * helpers for firmware backwards compatibility - * - * as we aim to support at least the firmware version that was - * released with the previous kernel/driver release, some code will be - * conditionally executed depending on the firmware version. on each - * release, the code to support fw releases past the last two ones - * will be purged. - * - * by making it depend on this macros, it is easier to keep it a tab - * on what has to go and what not. 
- */ -static inline -unsigned i2400m_le_v1_3(struct i2400m *i2400m) -{ - /* running fw is lower or v1.3 */ - return i2400m->fw_version <= 0x00090001; -} - -static inline -unsigned i2400m_ge_v1_4(struct i2400m *i2400m) -{ - /* running fw is higher or v1.4 */ - return i2400m->fw_version >= 0x00090002; -} - - -/* - * do a millisecond-sleep for allowing wireshark to dump all the data - * packets. used only for debugging. - */ -static inline -void __i2400m_msleep(unsigned ms) -{ -#if 1 -#else - msleep(ms); -#endif -} - - -/* module initialization helpers */ -int i2400m_barker_db_init(const char *); -void i2400m_barker_db_exit(void); - - - -#endif /* #ifndef __i2400m_h__ */ diff --git a/drivers/staging/wimax/i2400m/linux-wimax-i2400m.h b/drivers/staging/wimax/i2400m/linux-wimax-i2400m.h --- a/drivers/staging/wimax/i2400m/linux-wimax-i2400m.h +++ /dev/null -/* - * intel wireless wimax connection 2400m - * host-device protocol interface definitions - * - * - * copyright (c) 2007-2008 intel corporation. all rights reserved. - * - * redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * neither the name of intel corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. 
- * - * this software is provided by the copyright holders and contributors - * "as is" and any express or implied warranties, including, but not - * limited to, the implied warranties of merchantability and fitness for - * a particular purpose are disclaimed. in no event shall the copyright - * owner or contributors be liable for any direct, indirect, incidental, - * special, exemplary, or consequential damages (including, but not - * limited to, procurement of substitute goods or services; loss of use, - * data, or profits; or business interruption) however caused and on any - * theory of liability, whether in contract, strict liability, or tort - * (including negligence or otherwise) arising in any way out of the use - * of this software, even if advised of the possibility of such damage. - * - * - * intel corporation <linux-wimax@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - * - initial implementation - * - * - * this header defines the data structures and constants used to - * communicate with the device. - * - * bootmode/bootrom/firmware upload protocol - * - * the firmware upload protocol is quite simple and only requires a - * handful of commands. see drivers/net/wimax/i2400m/fw.c for more - * details. - * - * the bcf data structure is for the firmware file header. - * - * - * the data / control protocol - * - * this is the normal protocol spoken with the device once the - * firmware is uploaded. it transports data payloads and control - * messages back and forth. - * - * it consists 'messages' that pack one or more payloads each. the - * format is described in detail in drivers/net/wimax/i2400m/rx.c and - * tx.c. - * - * - * the l3l4 protocol - * - * the term l3l4 refers to layer 3 (the device), layer 4 (the - * driver/host software). - * - * this is the control protocol used by the host to control the i2400m - * device (scan, connect, disconnect...). this is sent to / received - * as control frames. 
these frames consist of a header and zero or - * more tlvs with information. we call each control frame a "message". - * - * each message is composed of: - * - * header - * [tlv0 + payload0] - * [tlv1 + payload1] - * [...] - * [tlvn + payloadn] - * - * the header is defined by 'struct i2400m_l3l4_hdr'. the payloads are - * defined by a tlv structure (type length value) which is a 'header' - * (struct i2400m_tlv_hdr) and then the payload. - * - * all integers are represented as little endian. - * - * - requests and events - * - * the requests can be classified as follows: - * - * command: implies a request from the host to the device requesting - * an action being performed. the device will reply with a - * message (with the same type as the command), status and - * no (tlv) payload. execution of a command might cause - * events (of different type) to be sent later on as - * device's state changes. - * - * get/set: similar to command, but will not cause other - * events. the reply, in the case of get, will contain - * tlvs with the requested information. - * - * event: asynchronous messages sent from the device, maybe as a - * consequence of previous commands but disassociated from - * them. - * - * only one request might be pending at the same time (ie: don't - * parallelize nor post another get request before the previous - * command has been acknowledged with its corresponding reply by the - * device). - * - * the different requests and their formats are described below: - * - * i2400m_mt_* message types - * i2400m_ms_* message status (for replies, events) - * i2400m_tlv_* tlvs - * - * data types are named 'struct i2400m_msg_opname', opname matching the - * operation. 
- */ - -#ifndef __linux__wimax__i2400m_h__ -#define __linux__wimax__i2400m_h__ - -#include <linux/types.h> -#include <linux/if_ether.h> - -/* - * host device interface (hdi) common to all busses - */ - -/* boot-mode (firmware upload mode) commands */ - -/* header for the firmware file */ -struct i2400m_bcf_hdr { - __le32 module_type; - __le32 header_len; - __le32 header_version; - __le32 module_id; - __le32 module_vendor; - __le32 date; /* bcd yyymmdd */ - __le32 size; /* in dwords */ - __le32 key_size; /* in dwords */ - __le32 modulus_size; /* in dwords */ - __le32 exponent_size; /* in dwords */ - __u8 reserved[88]; -} __attribute__ ((packed)); - -/* boot mode opcodes */ -enum i2400m_brh_opcode { - i2400m_brh_read = 1, - i2400m_brh_write = 2, - i2400m_brh_jump = 3, - i2400m_brh_signed_jump = 8, - i2400m_brh_hash_payload_only = 9, -}; - -/* boot mode command masks and stuff */ -enum i2400m_brh { - i2400m_brh_signature = 0xcbbc0000, - i2400m_brh_signature_mask = 0xffff0000, - i2400m_brh_signature_shift = 16, - i2400m_brh_opcode_mask = 0x0000000f, - i2400m_brh_response_mask = 0x000000f0, - i2400m_brh_response_shift = 4, - i2400m_brh_direct_access = 0x00000400, - i2400m_brh_response_required = 0x00000200, - i2400m_brh_use_checksum = 0x00000100, -}; - - -/** - * i2400m_bootrom_header - header for a boot-mode command - * - * @cmd: the above command descriptor - * @target_addr: where on the device memory should the action be performed. 
- * @data_size: for read/write, amount of data to be read/written - * @block_checksum: checksum value (if applicable) - * @payload: the beginning of data attached to this header - */ -struct i2400m_bootrom_header { - __le32 command; /* compose with enum i2400_brh */ - __le32 target_addr; - __le32 data_size; - __le32 block_checksum; - char payload[0]; -} __attribute__ ((packed)); - - -/* - * data / control protocol - */ - -/* packet types for the host-device interface */ -enum i2400m_pt { - i2400m_pt_data = 0, - i2400m_pt_ctrl, - i2400m_pt_trace, /* for device debug */ - i2400m_pt_reset_warm, /* device reset */ - i2400m_pt_reset_cold, /* usb[transport] reset, like reconnect */ - i2400m_pt_edata, /* extended rx data */ - i2400m_pt_illegal -}; - - -/* - * payload for a data packet - * - * this is prefixed to each and every outgoing data type. - */ -struct i2400m_pl_data_hdr { - __le32 reserved; -} __attribute__((packed)); - - -/* - * payload for an extended data packet - * - * new in fw v1.4 - * - * @reorder: if this payload has to be reorder or not (and how) - * @cs: the type of data in the packet, as defined per (802.16e - * t11.13.19.1). currently only 2 (ipv4 packet) supported. - * - * this is prefixed to each and every incoming data packet. 
- */ -struct i2400m_pl_edata_hdr { - __le32 reorder; /* bits defined in i2400m_ro */ - __u8 cs; - __u8 reserved[11]; -} __attribute__((packed)); - -enum i2400m_cs { - i2400m_cs_ipv4_0 = 0, - i2400m_cs_ipv4 = 2, -}; - -enum i2400m_ro { - i2400m_ro_needed = 0x01, - i2400m_ro_type = 0x03, - i2400m_ro_type_shift = 1, - i2400m_ro_cin = 0x0f, - i2400m_ro_cin_shift = 4, - i2400m_ro_fbn = 0x07ff, - i2400m_ro_fbn_shift = 8, - i2400m_ro_sn = 0x07ff, - i2400m_ro_sn_shift = 21, -}; - -enum i2400m_ro_type { - i2400m_ro_type_reset = 0, - i2400m_ro_type_packet, - i2400m_ro_type_ws, - i2400m_ro_type_packet_ws, -}; - - -/* misc constants */ -enum { - i2400m_pl_align = 16, /* payload data size alignment */ - i2400m_pl_size_max = 0x3eff, - i2400m_max_pls_in_msg = 60, - /* protocol barkers: sync sequences; for notifications they - * are sent in groups of four. */ - i2400m_h2d_preview_barker = 0xcafe900d, - i2400m_cold_reset_barker = 0xc01dc01d, - i2400m_warm_reset_barker = 0x50f750f7, - i2400m_nboot_barker = 0xdeadbeef, - i2400m_sboot_barker = 0x0ff1c1a1, - i2400m_sboot_barker_6050 = 0x80000001, - i2400m_ack_barker = 0xfeedbabe, - i2400m_d2h_msg_barker = 0xbeefbabe, -}; - - -/* - * hardware payload descriptor - * - * bitfields encoded in a struct to enforce typing semantics. - * - * look in rx.c and tx.c for a full description of the format. - */ -struct i2400m_pld { - __le32 val; -} __attribute__ ((packed)); - -#define i2400m_pld_size_mask 0x00003fff -#define i2400m_pld_type_shift 16 -#define i2400m_pld_type_mask 0x000f0000 - -/* - * header for a tx message or rx message - * - * @barker: preamble - * @size: used for management of the fifo queue buffer; before - * sending, this is converted to be a real preamble. this - * indicates the real size of the tx message that starts at this - * point. if the highest bit is set, then this message is to be - * skipped. 
- * @sequence: sequence number of this message - * @offset: offset where the message itself starts -- see the comments - * in the file header about message header and payload descriptor - * alignment. - * @num_pls: number of payloads in this message - * @padding: amount of padding bytes at the end of the message to make - * it be of block-size aligned - * - * look in rx.c and tx.c for a full description of the format. - */ -struct i2400m_msg_hdr { - union { - __le32 barker; - __u32 size; /* same size type as barker!! */ - }; - union { - __le32 sequence; - __u32 offset; /* same size type as barker!! */ - }; - __le16 num_pls; - __le16 rsv1; - __le16 padding; - __le16 rsv2; - struct i2400m_pld pld[0]; -} __attribute__ ((packed)); - - - -/* - * l3/l4 control protocol - */ - -enum { - /* interface version */ - i2400m_l3l4_version = 0x0100, -}; - -/* message types */ -enum i2400m_mt { - i2400m_mt_reserved = 0x0000, - i2400m_mt_invalid = 0xffff, - i2400m_mt_report_mask = 0x8000, - - i2400m_mt_get_scan_result = 0x4202, - i2400m_mt_set_scan_param = 0x4402, - i2400m_mt_cmd_rf_control = 0x4602, - i2400m_mt_cmd_scan = 0x4603, - i2400m_mt_cmd_connect = 0x4604, - i2400m_mt_cmd_disconnect = 0x4605, - i2400m_mt_cmd_exit_idle = 0x4606, - i2400m_mt_get_lm_version = 0x5201, - i2400m_mt_get_device_info = 0x5202, - i2400m_mt_get_link_status = 0x5203, - i2400m_mt_get_statistics = 0x5204, - i2400m_mt_get_state = 0x5205, - i2400m_mt_get_media_status = 0x5206, - i2400m_mt_set_init_config = 0x5404, - i2400m_mt_cmd_init = 0x5601, - i2400m_mt_cmd_terminate = 0x5602, - i2400m_mt_cmd_mode_of_op = 0x5603, - i2400m_mt_cmd_reset_device = 0x5604, - i2400m_mt_cmd_monitor_control = 0x5605, - i2400m_mt_cmd_enter_powersave = 0x5606, - i2400m_mt_get_tls_operation_result = 0x6201, - i2400m_mt_set_eap_success = 0x6402, - i2400m_mt_set_eap_fail = 0x6403, - i2400m_mt_set_eap_key = 0x6404, - i2400m_mt_cmd_send_eap_response = 0x6602, - i2400m_mt_report_scan_result = 0xc002, - i2400m_mt_report_state = 0xd002, 
- i2400m_mt_report_powersave_ready = 0xd005, - i2400m_mt_report_eap_request = 0xe002, - i2400m_mt_report_eap_restart = 0xe003, - i2400m_mt_report_alt_accept = 0xe004, - i2400m_mt_report_key_request = 0xe005, -}; - - -/* - * message ack status codes - * - * when a message is replied-to, this status is reported. - */ -enum i2400m_ms { - i2400m_ms_done_ok = 0, - i2400m_ms_done_in_progress = 1, - i2400m_ms_invalid_op = 2, - i2400m_ms_bad_state = 3, - i2400m_ms_illegal_value = 4, - i2400m_ms_missing_params = 5, - i2400m_ms_version_error = 6, - i2400m_ms_accessibility_error = 7, - i2400m_ms_busy = 8, - i2400m_ms_corrupted_tlv = 9, - i2400m_ms_uninitialized = 10, - i2400m_ms_unknown_error = 11, - i2400m_ms_production_error = 12, - i2400m_ms_no_rf = 13, - i2400m_ms_not_ready_for_powersave = 14, - i2400m_ms_thermal_critical = 15, - i2400m_ms_max -}; - - -/** - * i2400m_tlv - enumeration of the different types of tlvs - * - * tlvs stand for type-length-value and are the header for a payload - * composed of almost anything. each payload has a type assigned - * and a length. 
- */ -enum i2400m_tlv { - i2400m_tlv_l4_message_versions = 129, - i2400m_tlv_system_state = 141, - i2400m_tlv_media_status = 161, - i2400m_tlv_rf_operation = 162, - i2400m_tlv_rf_status = 163, - i2400m_tlv_device_reset_type = 132, - i2400m_tlv_config_idle_parameters = 601, - i2400m_tlv_config_idle_timeout = 611, - i2400m_tlv_config_d2h_data_format = 614, - i2400m_tlv_config_dl_host_reorder = 615, -}; - - -struct i2400m_tlv_hdr { - __le16 type; - __le16 length; /* payload's */ - __u8 pl[0]; -} __attribute__((packed)); - - -struct i2400m_l3l4_hdr { - __le16 type; - __le16 length; /* payload's */ - __le16 version; - __le16 resv1; - __le16 status; - __le16 resv2; - struct i2400m_tlv_hdr pl[0]; -} __attribute__((packed)); - - -/** - * i2400m_system_state - different states of the device - */ -enum i2400m_system_state { - i2400m_ss_uninitialized = 1, - i2400m_ss_init, - i2400m_ss_ready, - i2400m_ss_scan, - i2400m_ss_standby, - i2400m_ss_connecting, - i2400m_ss_wimax_connected, - i2400m_ss_data_path_connected, - i2400m_ss_idle, - i2400m_ss_disconnecting, - i2400m_ss_out_of_zone, - i2400m_ss_sleepactive, - i2400m_ss_production, - i2400m_ss_config, - i2400m_ss_rf_off, - i2400m_ss_rf_shutdown, - i2400m_ss_device_disconnect, - i2400m_ss_max, -}; - - -/** - * i2400m_tlv_system_state - report on the state of the system - * - * @state: see enum i2400m_system_state - */ -struct i2400m_tlv_system_state { - struct i2400m_tlv_hdr hdr; - __le32 state; -} __attribute__((packed)); - - -struct i2400m_tlv_l4_message_versions { - struct i2400m_tlv_hdr hdr; - __le16 major; - __le16 minor; - __le16 branch; - __le16 reserved; -} __attribute__((packed)); - - -struct i2400m_tlv_detailed_device_info { - struct i2400m_tlv_hdr hdr; - __u8 reserved1[400]; - __u8 mac_address[eth_alen]; - __u8 reserved2[2]; -} __attribute__((packed)); - - -enum i2400m_rf_switch_status { - i2400m_rf_switch_on = 1, - i2400m_rf_switch_off = 2, -}; - -struct i2400m_tlv_rf_switches_status { - struct i2400m_tlv_hdr hdr; - 
__u8 sw_rf_switch; /* 1 on, 2 off */ - __u8 hw_rf_switch; /* 1 on, 2 off */ - __u8 reserved[2]; -} __attribute__((packed)); - - -enum { - i2400m_rf_operation_on = 1, - i2400m_rf_operation_off = 2 -}; - -struct i2400m_tlv_rf_operation { - struct i2400m_tlv_hdr hdr; - __le32 status; /* 1 on, 2 off */ -} __attribute__((packed)); - - -enum i2400m_tlv_reset_type { - i2400m_reset_type_cold = 1, - i2400m_reset_type_warm -}; - -struct i2400m_tlv_device_reset_type { - struct i2400m_tlv_hdr hdr; - __le32 reset_type; -} __attribute__((packed)); - - -struct i2400m_tlv_config_idle_parameters { - struct i2400m_tlv_hdr hdr; - __le32 idle_timeout; /* 100 to 300000 ms [5min], 100 increments - * 0 disabled */ - __le32 idle_paging_interval; /* frames */ -} __attribute__((packed)); - - -enum i2400m_media_status { - i2400m_media_status_link_up = 1, - i2400m_media_status_link_down, - i2400m_media_status_link_renew, -}; - -struct i2400m_tlv_media_status { - struct i2400m_tlv_hdr hdr; - __le32 media_status; -} __attribute__((packed)); - - -/* new in v1.4 */ -struct i2400m_tlv_config_idle_timeout { - struct i2400m_tlv_hdr hdr; - __le32 timeout; /* 100 to 300000 ms [5min], 100 increments - * 0 disabled */ -} __attribute__((packed)); - -/* new in v1.4 -- for backward compat, will be removed */ -struct i2400m_tlv_config_d2h_data_format { - struct i2400m_tlv_hdr hdr; - __u8 format; /* 0 old format, 1 enhanced */ - __u8 reserved[3]; -} __attribute__((packed)); - -/* new in v1.4 */ -struct i2400m_tlv_config_dl_host_reorder { - struct i2400m_tlv_hdr hdr; - __u8 reorder; /* 0 disabled, 1 enabled */ - __u8 reserved[3]; -} __attribute__((packed)); - - -#endif /* #ifndef __linux__wimax__i2400m_h__ */ diff --git a/drivers/staging/wimax/i2400m/netdev.c b/drivers/staging/wimax/i2400m/netdev.c --- a/drivers/staging/wimax/i2400m/netdev.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-only -/* - * intel wireless wimax connection 2400m - * glue with the networking stack - * - * copyright (c) 2007 intel 
corporation <linux-wimax@intel.com> - * yanir lubetkin <yanirx.lubetkin@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - * - * this implements an ethernet device for the i2400m. - * - * we fake being an ethernet device to simplify the support from user - * space and from the other side. the world is (sadly) configured to - * take in only ethernet devices... - * - * because of this, when using firmwares <= v1.3, there is an - * copy-each-rxed-packet overhead on the rx path. each ip packet has - * to be reallocated to add an ethernet header (as there is no space - * in what we get from the device). this is a known drawback and - * firmwares >= 1.4 add header space that can be used to insert the - * ethernet header without having to reallocate and copy. - * - * tx error handling is tricky; because we have to fifo/queue the - * buffers for transmission (as the hardware likes it aggregated), we - * just give the skb to the tx subsystem and by the time it is - * transmitted, we have long forgotten about it. so we just don't care - * too much about it. - * - * note that when the device is in idle mode with the basestation, we - * need to negotiate coming back up online. that involves negotiation - * and possible user space interaction. thus, we defer to a workqueue - * to do all that. by default, we only queue a single packet and drop - * the rest, as potentially the time to go back from idle to normal is - * long. 
- * - * roadmap - * - * i2400m_open called on ifconfig up - * i2400m_stop called on ifconfig down - * - * i2400m_hard_start_xmit called by the network stack to send a packet - * i2400m_net_wake_tx wake up device from basestation-idle & tx - * i2400m_wake_tx_work - * i2400m_cmd_exit_idle - * i2400m_tx - * i2400m_net_tx tx a data frame - * i2400m_tx - * - * i2400m_change_mtu called on ifconfig mtu xxx - * - * i2400m_tx_timeout called when the device times out - * - * i2400m_net_rx called by the rx code when a data frame is - * available (firmware <= 1.3) - * i2400m_net_erx called by the rx code when a data frame is - * available (firmware >= 1.4). - * i2400m_netdev_setup called to set up all the netdev stuff from - * alloc_netdev. - */ -#include <linux/if_arp.h> -#include <linux/slab.h> -#include <linux/netdevice.h> -#include <linux/ethtool.h> -#include <linux/export.h> -#include "i2400m.h" - - -#define d_submodule netdev -#include "debug-levels.h" - -enum { -/* netdev interface */ - /* 20 secs? yep, this is the maximum timeout that the device - * might take to get out of idle / negotiate it with the base - * station. we add 1sec for good measure. */ - i2400m_tx_timeout = 21 * hz, - /* - * experimentation has determined that 20 is a good value - * for minimizing the jitter in the throughput. - */ - i2400m_tx_qlen = 20, -}; - - -static -int i2400m_open(struct net_device *net_dev) -{ - int result; - struct i2400m *i2400m = net_dev_to_i2400m(net_dev); - struct device *dev = i2400m_dev(i2400m); - - d_fnstart(3, dev, "(net_dev %p [i2400m %p]) ", net_dev, i2400m); - /* make sure we wait until init is complete... 
*/ - mutex_lock(&i2400m->init_mutex); - if (i2400m->updown) - result = 0; - else - result = -ebusy; - mutex_unlock(&i2400m->init_mutex); - d_fnend(3, dev, "(net_dev %p [i2400m %p]) = %d ", - net_dev, i2400m, result); - return result; -} - - -static -int i2400m_stop(struct net_device *net_dev) -{ - struct i2400m *i2400m = net_dev_to_i2400m(net_dev); - struct device *dev = i2400m_dev(i2400m); - - d_fnstart(3, dev, "(net_dev %p [i2400m %p]) ", net_dev, i2400m); - i2400m_net_wake_stop(i2400m); - d_fnend(3, dev, "(net_dev %p [i2400m %p]) = 0 ", net_dev, i2400m); - return 0; -} - - -/* - * wake up the device and transmit a held skb, then restart the net queue - * - * when the device goes into basestation-idle mode, we need to tell it - * to exit that mode; it will negotiate with the base station, user - * space may have to intervene to rehandshake crypto and then tell us - * when it is ready to transmit the packet we have "queued". still we - * need to give it some time after it reports being ok. - * - * on error, there is not much we can do. if the error was on tx, we - * still wake the queue up to see if the next packet will be luckier. - * - * if _cmd_exit_idle() fails...well, it could be many things; most - * commonly it is that something else took the device out of idle mode - * (for example, the base station). in that case we get an -eilseq and - * we are just going to ignore that one. if the device is back to - * connected, then fine -- if it is in some other state, the packet will - * be dropped anyway. 
- */ -void i2400m_wake_tx_work(struct work_struct *ws) -{ - int result; - struct i2400m *i2400m = container_of(ws, struct i2400m, wake_tx_ws); - struct net_device *net_dev = i2400m->wimax_dev.net_dev; - struct device *dev = i2400m_dev(i2400m); - struct sk_buff *skb; - unsigned long flags; - - spin_lock_irqsave(&i2400m->tx_lock, flags); - skb = i2400m->wake_tx_skb; - i2400m->wake_tx_skb = null; - spin_unlock_irqrestore(&i2400m->tx_lock, flags); - - d_fnstart(3, dev, "(ws %p i2400m %p skb %p) ", ws, i2400m, skb); - result = -einval; - if (skb == null) { - dev_err(dev, "wake&tx: skb disappeared! "); - goto out_put; - } - /* if we have, somehow, lost the connection after this was - * queued, don't do anything; this might be the device got - * reset or just disconnected. */ - if (unlikely(!netif_carrier_ok(net_dev))) - goto out_kfree; - result = i2400m_cmd_exit_idle(i2400m); - if (result == -eilseq) - result = 0; - if (result < 0) { - dev_err(dev, "wake&tx: device didn't get out of idle: " - "%d - resetting ", result); - i2400m_reset(i2400m, i2400m_rt_bus); - goto error; - } - result = wait_event_timeout(i2400m->state_wq, - i2400m->state != i2400m_ss_idle, - net_dev->watchdog_timeo - hz/2); - if (result == 0) - result = -etimedout; - if (result < 0) { - dev_err(dev, "wake&tx: error waiting for device to exit idle: " - "%d - resetting ", result); - i2400m_reset(i2400m, i2400m_rt_bus); - goto error; - } - msleep(20); /* device still needs some time or it drops it */ - result = i2400m_tx(i2400m, skb->data, skb->len, i2400m_pt_data); -error: - netif_wake_queue(net_dev); -out_kfree: - kfree_skb(skb); /* refcount transferred by _hard_start_xmit() */ -out_put: - i2400m_put(i2400m); - d_fnend(3, dev, "(ws %p i2400m %p skb %p) = void [%d] ", - ws, i2400m, skb, result); -} - - -/* - * prepare the data payload tx header - * - * the i2400m expects a 4 byte header in front of a data packet. 
- * - * because we pretend to be an ethernet device, this packet comes with - * an ethernet header. pull it and push our header. - */ -static -void i2400m_tx_prep_header(struct sk_buff *skb) -{ - struct i2400m_pl_data_hdr *pl_hdr; - skb_pull(skb, eth_hlen); - pl_hdr = skb_push(skb, sizeof(*pl_hdr)); - pl_hdr->reserved = 0; -} - - - -/* - * cleanup resources acquired during i2400m_net_wake_tx() - * - * this is called by __i2400m_dev_stop and means we have to make sure - * the workqueue is flushed from any pending work. - */ -void i2400m_net_wake_stop(struct i2400m *i2400m) -{ - struct device *dev = i2400m_dev(i2400m); - struct sk_buff *wake_tx_skb; - unsigned long flags; - - d_fnstart(3, dev, "(i2400m %p) ", i2400m); - /* - * see i2400m_hard_start_xmit(), references are taken there and - * here we release them if the packet was still pending. - */ - cancel_work_sync(&i2400m->wake_tx_ws); - - spin_lock_irqsave(&i2400m->tx_lock, flags); - wake_tx_skb = i2400m->wake_tx_skb; - i2400m->wake_tx_skb = null; - spin_unlock_irqrestore(&i2400m->tx_lock, flags); - - if (wake_tx_skb) { - i2400m_put(i2400m); - kfree_skb(wake_tx_skb); - } - - d_fnend(3, dev, "(i2400m %p) = void ", i2400m); -} - - -/* - * tx an skb to an idle device - * - * when the device is in basestation-idle mode, we need to wake it up - * and then tx. so we queue a work_struct for doing so. - * - * we need to get an extra ref for the skb (so it is not dropped), as - * well as be careful not to queue more than one request (won't help - * at all). if more than one request comes or there are errors, we - * just drop the packets (see i2400m_hard_start_xmit()). 
- */ -static -int i2400m_net_wake_tx(struct i2400m *i2400m, struct net_device *net_dev, - struct sk_buff *skb) -{ - int result; - struct device *dev = i2400m_dev(i2400m); - unsigned long flags; - - d_fnstart(3, dev, "(skb %p net_dev %p) ", skb, net_dev); - if (net_ratelimit()) { - d_printf(3, dev, "wake&nettx: " - "skb %p sending %d bytes to radio ", - skb, skb->len); - d_dump(4, dev, skb->data, skb->len); - } - /* we hold a ref count for i2400m and skb, so when - * stopping() the device, we need to cancel that work - * and if pending, release those resources. */ - result = 0; - spin_lock_irqsave(&i2400m->tx_lock, flags); - if (!i2400m->wake_tx_skb) { - netif_stop_queue(net_dev); - i2400m_get(i2400m); - i2400m->wake_tx_skb = skb_get(skb); /* transfer ref count */ - i2400m_tx_prep_header(skb); - result = schedule_work(&i2400m->wake_tx_ws); - warn_on(result == 0); - } - spin_unlock_irqrestore(&i2400m->tx_lock, flags); - if (result == 0) { - /* yes, this happens even if we stopped the - * queue -- blame the queue disciplines that - * queue without looking -- i guess there is a reason - * for that. */ - if (net_ratelimit()) - d_printf(1, dev, "nettx: device exiting idle, " - "dropping skb %p, queue running %d ", - skb, netif_queue_stopped(net_dev)); - result = -ebusy; - } - d_fnend(3, dev, "(skb %p net_dev %p) = %d ", skb, net_dev, result); - return result; -} - - -/* - * transmit a packet to the base station on behalf of the network stack. - * - * returns: 0 if ok, < 0 errno code on error. - * - * we need to pull the ethernet header and add the hardware header, - * which is currently set to all zeroes and reserved. 
- */ -static -int i2400m_net_tx(struct i2400m *i2400m, struct net_device *net_dev, - struct sk_buff *skb) -{ - int result; - struct device *dev = i2400m_dev(i2400m); - - d_fnstart(3, dev, "(i2400m %p net_dev %p skb %p) ", - i2400m, net_dev, skb); - /* fixme: check eth hdr, only ipv4 is routed by the device as of now */ - netif_trans_update(net_dev); - i2400m_tx_prep_header(skb); - d_printf(3, dev, "nettx: skb %p sending %d bytes to radio ", - skb, skb->len); - d_dump(4, dev, skb->data, skb->len); - result = i2400m_tx(i2400m, skb->data, skb->len, i2400m_pt_data); - d_fnend(3, dev, "(i2400m %p net_dev %p skb %p) = %d ", - i2400m, net_dev, skb, result); - return result; -} - - -/* - * transmit a packet to the base station on behalf of the network stack - * - * - * returns: netdev_tx_ok (always, even in case of error) - * - * in case of error, we just drop it. reasons: - * - * - we add a hw header to each skb, and if the network stack - * retries, we have no way to know if that skb has it or not. - * - * - network protocols have their own drop-recovery mechanisms - * - * - there is not much else we can do - * - * if the device is idle, we need to wake it up; that is an operation - * that will sleep. see i2400m_net_wake_tx() for details. 
- */ -static -netdev_tx_t i2400m_hard_start_xmit(struct sk_buff *skb, - struct net_device *net_dev) -{ - struct i2400m *i2400m = net_dev_to_i2400m(net_dev); - struct device *dev = i2400m_dev(i2400m); - int result = -1; - - d_fnstart(3, dev, "(skb %p net_dev %p) ", skb, net_dev); - - if (skb_cow_head(skb, 0)) - goto drop; - - if (i2400m->state == i2400m_ss_idle) - result = i2400m_net_wake_tx(i2400m, net_dev, skb); - else - result = i2400m_net_tx(i2400m, net_dev, skb); - if (result < 0) { -drop: - net_dev->stats.tx_dropped++; - } else { - net_dev->stats.tx_packets++; - net_dev->stats.tx_bytes += skb->len; - } - dev_kfree_skb(skb); - d_fnend(3, dev, "(skb %p net_dev %p) = %d ", skb, net_dev, result); - return netdev_tx_ok; -} - - -static -void i2400m_tx_timeout(struct net_device *net_dev, unsigned int txqueue) -{ - /* - * we might want to kick the device - * - * there is not much we can do though, as the device requires - * that we send the data aggregated. by the time we receive - * this, there might be data pending to be sent or not... - */ - net_dev->stats.tx_errors++; -} - - -/* - * create a fake ethernet header - * - * for emulating an ethernet device, every received ip header has to - * be prefixed with an ethernet header. fake it with the given - * protocol. - */ -static -void i2400m_rx_fake_eth_header(struct net_device *net_dev, - void *_eth_hdr, __be16 protocol) -{ - struct i2400m *i2400m = net_dev_to_i2400m(net_dev); - struct ethhdr *eth_hdr = _eth_hdr; - - memcpy(eth_hdr->h_dest, net_dev->dev_addr, sizeof(eth_hdr->h_dest)); - memcpy(eth_hdr->h_source, i2400m->src_mac_addr, - sizeof(eth_hdr->h_source)); - eth_hdr->h_proto = protocol; -} - - -/* - * i2400m_net_rx - pass a network packet to the stack - * - * @i2400m: device instance - * @skb_rx: the skb where the buffer pointed to by @buf is - * @i: 1 if payload is the only one - * @buf: pointer to the buffer containing the data - * @len: buffer's length - * - * this is only used now for the v1.3 firmware. 
it will be deprecated - * in >= 2.6.31. - * - * note that due to firmware limitations, we don't have space to add - * an ethernet header, so we need to copy each packet. firmware - * versions >= v1.4 fix this [see i2400m_net_erx()]. - * - * we just clone the skb and set it up so that its skb->data pointer - * points to "buf" and its length. - * - * note that if the payload is the last (or the only one) in a - * multi-payload message, we don't clone the skb but just reuse it. - * - * this function is normally run from a thread context. however, we - * still use netif_rx() instead of netif_receive_skb() as was - * recommended in the mailing list. reason is in some stress tests - * when sending/receiving a lot of data we seem to hit a softlock in - * the kernel's tcp implementation [around tcp_delay_timer()]. using - * netif_rx() took care of the issue. - * - * this is, of course, still open to do more research on why running - * with netif_receive_skb() hits this softlock. fixme. - * - * fixme: currently we don't make any effort at distinguishing if what - * we got was an ipv4 or ipv6 header, to set up the protocol field - * correctly. 
- */ -void i2400m_net_rx(struct i2400m *i2400m, struct sk_buff *skb_rx, - unsigned i, const void *buf, int buf_len) -{ - struct net_device *net_dev = i2400m->wimax_dev.net_dev; - struct device *dev = i2400m_dev(i2400m); - struct sk_buff *skb; - - d_fnstart(2, dev, "(i2400m %p buf %p buf_len %d) ", - i2400m, buf, buf_len); - if (i) { - skb = skb_get(skb_rx); - d_printf(2, dev, "rx: reusing first payload skb %p ", skb); - skb_pull(skb, buf - (void *) skb->data); - skb_trim(skb, (void *) skb_end_pointer(skb) - buf); - } else { - /* yes, this is bad -- a lot of overhead -- see - * comments at the top of the file */ - skb = __netdev_alloc_skb(net_dev, buf_len, gfp_kernel); - if (skb == null) { - dev_err(dev, "netrx: no memory to realloc skb "); - net_dev->stats.rx_dropped++; - goto error_skb_realloc; - } - skb_put_data(skb, buf, buf_len); - } - i2400m_rx_fake_eth_header(i2400m->wimax_dev.net_dev, - skb->data - eth_hlen, - cpu_to_be16(eth_p_ip)); - skb_set_mac_header(skb, -eth_hlen); - skb->dev = i2400m->wimax_dev.net_dev; - skb->protocol = htons(eth_p_ip); - net_dev->stats.rx_packets++; - net_dev->stats.rx_bytes += buf_len; - d_printf(3, dev, "netrx: receiving %d bytes to network stack ", - buf_len); - d_dump(4, dev, buf, buf_len); - netif_rx_ni(skb); /* see notes in function header */ -error_skb_realloc: - d_fnend(2, dev, "(i2400m %p buf %p buf_len %d) = void ", - i2400m, buf, buf_len); -} - - -/* - * i2400m_net_erx - pass a network packet to the stack (extended version) - * - * @i2400m: device descriptor - * @skb: the skb where the packet is - the skb should be set to point - * at the ip packet; this function will add ethernet headers if - * needed. - * @cs: packet type - * - * this is only used now for firmware >= v1.4. note it is quite - * similar to i2400m_net_rx() (used only for v1.3 firmware). - * - * this function is normally run from a thread context. however, we - * still use netif_rx() instead of netif_receive_skb() as was - * recommended in the mailing list. 
reason is in some stress tests - * when sending/receiving a lot of data we seem to hit a softlock in - * the kernel's tcp implementation [around tcp_delay_timer()]. using - * netif_rx() took care of the issue. - * - * this is, of course, still open to do more research on why running - * with netif_receive_skb() hits this softlock. fixme. - */ -void i2400m_net_erx(struct i2400m *i2400m, struct sk_buff *skb, - enum i2400m_cs cs) -{ - struct net_device *net_dev = i2400m->wimax_dev.net_dev; - struct device *dev = i2400m_dev(i2400m); - - d_fnstart(2, dev, "(i2400m %p skb %p [%u] cs %d) ", - i2400m, skb, skb->len, cs); - switch (cs) { - case i2400m_cs_ipv4_0: - case i2400m_cs_ipv4: - i2400m_rx_fake_eth_header(i2400m->wimax_dev.net_dev, - skb->data - eth_hlen, - cpu_to_be16(eth_p_ip)); - skb_set_mac_header(skb, -eth_hlen); - skb->dev = i2400m->wimax_dev.net_dev; - skb->protocol = htons(eth_p_ip); - net_dev->stats.rx_packets++; - net_dev->stats.rx_bytes += skb->len; - break; - default: - dev_err(dev, "erx: bug? cs type %u unsupported ", cs); - goto error; - - } - d_printf(3, dev, "erx: receiving %d bytes to the network stack ", - skb->len); - d_dump(4, dev, skb->data, skb->len); - netif_rx_ni(skb); /* see notes in function header */ -error: - d_fnend(2, dev, "(i2400m %p skb %p [%u] cs %d) = void ", - i2400m, skb, skb->len, cs); -} - -static const struct net_device_ops i2400m_netdev_ops = { - .ndo_open = i2400m_open, - .ndo_stop = i2400m_stop, - .ndo_start_xmit = i2400m_hard_start_xmit, - .ndo_tx_timeout = i2400m_tx_timeout, -}; - -static void i2400m_get_drvinfo(struct net_device *net_dev, - struct ethtool_drvinfo *info) -{ - struct i2400m *i2400m = net_dev_to_i2400m(net_dev); - - strscpy(info->driver, kbuild_modname, sizeof(info->driver)); - strscpy(info->fw_version, i2400m->fw_name ? 
: "", - sizeof(info->fw_version)); - if (net_dev->dev.parent) - strscpy(info->bus_info, dev_name(net_dev->dev.parent), - sizeof(info->bus_info)); -} - -static const struct ethtool_ops i2400m_ethtool_ops = { - .get_drvinfo = i2400m_get_drvinfo, - .get_link = ethtool_op_get_link, -}; - -/* - * i2400m_netdev_setup - set up @net_dev's i2400m private data - * - * called by alloc_netdev() - */ -void i2400m_netdev_setup(struct net_device *net_dev) -{ - d_fnstart(3, null, "(net_dev %p) ", net_dev); - ether_setup(net_dev); - net_dev->mtu = i2400m_max_mtu; - net_dev->min_mtu = 0; - net_dev->max_mtu = i2400m_max_mtu; - net_dev->tx_queue_len = i2400m_tx_qlen; - net_dev->features = - netif_f_vlan_challenged - | netif_f_highdma; - net_dev->flags = - iff_noarp /* i2400m is a pure ip device */ - & (~iff_broadcast /* i2400m is p2p */ - & ~iff_multicast); - net_dev->watchdog_timeo = i2400m_tx_timeout; - net_dev->netdev_ops = &i2400m_netdev_ops; - net_dev->ethtool_ops = &i2400m_ethtool_ops; - d_fnend(3, null, "(net_dev %p) = void ", net_dev); -} -export_symbol_gpl(i2400m_netdev_setup); - diff --git a/drivers/staging/wimax/i2400m/op-rfkill.c b/drivers/staging/wimax/i2400m/op-rfkill.c --- a/drivers/staging/wimax/i2400m/op-rfkill.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-only -/* - * intel wireless wimax connection 2400m - * implement backend for the wimax stack rfkill support - * - * copyright (c) 2007-2008 intel corporation <linux-wimax@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - * - * the wimax kernel stack integrates into rf-kill and keeps the - * switches' status. we just need to: - * - * - report changes in the hw rf kill switch [with - * wimax_rfkill_{sw,hw}_report(), which happens when we detect those - * indications coming through hardware reports]. we also do it on - * initialization to let the stack know the initial hw state. 
- * - * - implement indications from the stack to change the sw rf kill - * switch (coming from sysfs, the wimax stack or user space). - */ -#include "i2400m.h" -#include "linux-wimax-i2400m.h" -#include <linux/slab.h> - - - -#define d_submodule rfkill -#include "debug-levels.h" - -/* - * return true if the i2400m radio is in the requested wimax_rf_state state - * - */ -static -int i2400m_radio_is(struct i2400m *i2400m, enum wimax_rf_state state) -{ - if (state == wimax_rf_off) - return i2400m->state == i2400m_ss_rf_off - || i2400m->state == i2400m_ss_rf_shutdown; - else if (state == wimax_rf_on) - /* state == wimax_rf_on */ - return i2400m->state != i2400m_ss_rf_off - && i2400m->state != i2400m_ss_rf_shutdown; - else { - bug(); - return -einval; /* shut gcc warnings on certain arches */ - } -} - - -/* - * wimax stack operation: implement sw rfkill toggling - * - * @wimax_dev: device descriptor - * @skb: skb where the message has been received; skb->data is - * expected to point to the message payload. - * @genl_info: passed by the generic netlink layer - * - * generic netlink will call this function when a message is sent from - * userspace to change the software rf-kill switch status. - * - * this function will set the device's software rf-kill switch state to - * match what is requested. - * - * note: the i2400m has a strict state machine; we can only set the - * rf-kill switch when it is on, the hw rf-kill is on and the - * device is initialized. so we ignore errors steaming from not - * being in the right state (-eilseq). 
- */ -int i2400m_op_rfkill_sw_toggle(struct wimax_dev *wimax_dev, - enum wimax_rf_state state) -{ - int result; - struct i2400m *i2400m = wimax_dev_to_i2400m(wimax_dev); - struct device *dev = i2400m_dev(i2400m); - struct sk_buff *ack_skb; - struct { - struct i2400m_l3l4_hdr hdr; - struct i2400m_tlv_rf_operation sw_rf; - } __packed *cmd; - char strerr[32]; - - d_fnstart(4, dev, "(wimax_dev %p state %d) ", wimax_dev, state); - - result = -enomem; - cmd = kzalloc(sizeof(*cmd), gfp_kernel); - if (cmd == null) - goto error_alloc; - cmd->hdr.type = cpu_to_le16(i2400m_mt_cmd_rf_control); - cmd->hdr.length = cpu_to_le16(sizeof(cmd->sw_rf)); - cmd->hdr.version = cpu_to_le16(i2400m_l3l4_version); - cmd->sw_rf.hdr.type = cpu_to_le16(i2400m_tlv_rf_operation); - cmd->sw_rf.hdr.length = cpu_to_le16(sizeof(cmd->sw_rf.status)); - switch (state) { - case wimax_rf_off: /* rfkill on, radio off */ - cmd->sw_rf.status = cpu_to_le32(2); - break; - case wimax_rf_on: /* rfkill off, radio on */ - cmd->sw_rf.status = cpu_to_le32(1); - break; - default: - bug(); - } - - ack_skb = i2400m_msg_to_dev(i2400m, cmd, sizeof(*cmd)); - result = ptr_err(ack_skb); - if (is_err(ack_skb)) { - dev_err(dev, "failed to issue 'rf control' command: %d ", - result); - goto error_msg_to_dev; - } - result = i2400m_msg_check_status(wimax_msg_data(ack_skb), - strerr, sizeof(strerr)); - if (result < 0) { - dev_err(dev, "'rf control' (0x%04x) command failed: %d - %s ", - i2400m_mt_cmd_rf_control, result, strerr); - goto error_cmd; - } - - /* now we wait for the state to change to radio_off or radio_on */ - result = wait_event_timeout( - i2400m->state_wq, i2400m_radio_is(i2400m, state), - 5 * hz); - if (result == 0) - result = -etimedout; - if (result < 0) - dev_err(dev, "error waiting for device to toggle rf state: " - "%d ", result); - result = 0; -error_cmd: - kfree_skb(ack_skb); -error_msg_to_dev: -error_alloc: - d_fnend(4, dev, "(wimax_dev %p state %d) = %d ", - wimax_dev, state, result); - kfree(cmd); - return 
result; -} - - -/* - * inform the wimax stack of changes in the rf kill switches reported - * by the device - * - * @i2400m: device descriptor - * @rfss: tlv for rf switches status; already validated - * - * note: the reports on rf switch status cannot be trusted - * or used until the device is in a state of radio_off - * or greater. - */ -void i2400m_report_tlv_rf_switches_status( - struct i2400m *i2400m, - const struct i2400m_tlv_rf_switches_status *rfss) -{ - struct device *dev = i2400m_dev(i2400m); - enum i2400m_rf_switch_status hw, sw; - enum wimax_st wimax_state; - - sw = rfss->sw_rf_switch; - hw = rfss->hw_rf_switch; - - d_fnstart(3, dev, "(i2400m %p rfss %p [hw %u sw %u]) ", - i2400m, rfss, hw, sw); - /* we only process rf switch events when the device has been - * fully initialized */ - wimax_state = wimax_state_get(&i2400m->wimax_dev); - if (wimax_state < wimax_st_radio_off) { - d_printf(3, dev, "ignoring rf switches report, state %u ", - wimax_state); - goto out; - } - switch (sw) { - case i2400m_rf_switch_on: /* rf kill disabled (radio on) */ - wimax_report_rfkill_sw(&i2400m->wimax_dev, wimax_rf_on); - break; - case i2400m_rf_switch_off: /* rf kill enabled (radio off) */ - wimax_report_rfkill_sw(&i2400m->wimax_dev, wimax_rf_off); - break; - default: - dev_err(dev, "hw bug? unknown rf sw state 0x%x ", sw); - } - - switch (hw) { - case i2400m_rf_switch_on: /* rf kill disabled (radio on) */ - wimax_report_rfkill_hw(&i2400m->wimax_dev, wimax_rf_on); - break; - case i2400m_rf_switch_off: /* rf kill enabled (radio off) */ - wimax_report_rfkill_hw(&i2400m->wimax_dev, wimax_rf_off); - break; - default: - dev_err(dev, "hw bug? 
unknown rf hw state 0x%x ", hw); - } -out: - d_fnend(3, dev, "(i2400m %p rfss %p [hw %u sw %u]) = void ", - i2400m, rfss, hw, sw); -} diff --git a/drivers/staging/wimax/i2400m/rx.c b/drivers/staging/wimax/i2400m/rx.c --- a/drivers/staging/wimax/i2400m/rx.c +++ /dev/null -/* - * intel wireless wimax connection 2400m - * handle incoming traffic and deliver it to the control or data planes - * - * - * copyright (c) 2007-2008 intel corporation. all rights reserved. - * - * redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * neither the name of intel corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * this software is provided by the copyright holders and contributors - * "as is" and any express or implied warranties, including, but not - * limited to, the implied warranties of merchantability and fitness for - * a particular purpose are disclaimed. in no event shall the copyright - * owner or contributors be liable for any direct, indirect, incidental, - * special, exemplary, or consequential damages (including, but not - * limited to, procurement of substitute goods or services; loss of use, - * data, or profits; or business interruption) however caused and on any - * theory of liability, whether in contract, strict liability, or tort - * (including negligence or otherwise) arising in any way out of the use - * of this software, even if advised of the possibility of such damage. 
- * - * - * intel corporation <linux-wimax@intel.com> - * yanir lubetkin <yanirx.lubetkin@intel.com> - * - initial implementation - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - * - use skb_clone(), break up processing in chunks - * - split transport/device specific - * - make buffer size dynamic to exert less memory pressure - * - rx reorder support - * - * this handles the rx path. - * - * we receive an rx message from the bus-specific driver, which - * contains one or more payloads that have potentially different - * destinataries (data or control paths). - * - * so we just take that payload from the transport specific code in - * the form of an skb, break it up in chunks (a cloned skb each in the - * case of network packets) and pass it to netdev or to the - * command/ack handler (and from there to the wimax stack). - * - * protocol format - * - * the format of the buffer is: - * - * header (struct i2400m_msg_hdr) - * payload descriptor 0 (struct i2400m_pld) - * payload descriptor 1 - * ... - * payload descriptor n - * payload 0 (raw bytes) - * payload 1 - * ... - * payload n - * - * see tx.c for a deeper description on alignment requirements and - * other fun facts of it. - * - * data packets - * - * in firmwares <= v1.3, data packets have no header for rx, but they - * do for tx (currently unused). - * - * in firmware >= 1.4, rx packets have an extended header (16 - * bytes). this header conveys information for management of host - * reordering of packets (the device offloads storage of the packets - * for reordering to the host). read below for more information. - * - * the header is used as dummy space to emulate an ethernet header and - * thus be able to act as an ethernet device without having to reallocate. 
- * - * data rx reordering - * - * starting in firmware v1.4, the device can deliver packets for - * delivery with special reordering information; this allows it to - * more effectively do packet management when some frames were lost in - * the radio traffic. - * - * thus, for rx packets that come out of order, the device gives the - * driver enough information to queue them properly and then at some - * point, the signal to deliver the whole (or part) of the queued - * packets to the networking stack. there are 16 such queues. - * - * this only happens when a packet comes in with the "need reorder" - * flag set in the rx header. when such bit is set, the following - * operations might be indicated: - * - * - reset queue: send all queued packets to the os - * - * - queue: queue a packet - * - * - update ws: update the queue's window start and deliver queued - * packets that meet the criteria - * - * - queue & update ws: queue a packet, update the window start and - * deliver queued packets that meet the criteria - * - * (delivery criteria: the packet's [normalized] sequence number is - * lower than the new [normalized] window start). - * - * see the i2400m_roq_*() functions for details. 
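The reorder machinery described above hinges on one formula: a sequence number is normalized against the queue's window start modulo 2048, and a queued packet is released once its normalized value drops below the new normalized window start. A standalone userspace sketch of that arithmetic (the helper names and `SN_MODULUS` constant are illustrative, not the driver's):

```c
#include <assert.h>

enum { SN_MODULUS = 2048 };	/* sequence numbers wrap at 2048 */

/* nsn = (sn - ws) % 2048, forced positive: C's % takes the sign of
 * the dividend, so a negative remainder is shifted up by the modulus */
static unsigned roq_nsn(unsigned ws, unsigned sn)
{
	int r = ((int)sn - (int)ws) % SN_MODULUS;

	if (r < 0)
		r += SN_MODULUS;
	return (unsigned)r;
}

/* delivery criterion: release a queued packet when its normalized
 * sequence number is lower than the new normalized window start */
static int roq_should_deliver(unsigned ws, unsigned sn, unsigned new_ws)
{
	return roq_nsn(ws, sn) < roq_nsn(ws, new_ws);
}
```

Normalization keeps the comparison meaningful across the wrap point: with ws = 2040, sequence number 3 normalizes to 11 and so still sorts after 2045 (which normalizes to 5).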
- * - * roadmap - * - * i2400m_rx - * i2400m_rx_msg_hdr_check - * i2400m_rx_pl_descr_check - * i2400m_rx_payload - * i2400m_net_rx - * i2400m_rx_edata - * i2400m_net_erx - * i2400m_roq_reset - * i2400m_net_erx - * i2400m_roq_queue - * __i2400m_roq_queue - * i2400m_roq_update_ws - * __i2400m_roq_update_ws - * i2400m_net_erx - * i2400m_roq_queue_update_ws - * __i2400m_roq_queue - * __i2400m_roq_update_ws - * i2400m_net_erx - * i2400m_rx_ctl - * i2400m_msg_size_check - * i2400m_report_hook_work [in a workqueue] - * i2400m_report_hook - * wimax_msg_to_user - * i2400m_rx_ctl_ack - * wimax_msg_to_user_alloc - * i2400m_rx_trace - * i2400m_msg_size_check - * wimax_msg - */ -#include <linux/slab.h> -#include <linux/kernel.h> -#include <linux/if_arp.h> -#include <linux/netdevice.h> -#include <linux/workqueue.h> -#include <linux/export.h> -#include <linux/moduleparam.h> -#include "i2400m.h" - - -#define d_submodule rx -#include "debug-levels.h" - -static int i2400m_rx_reorder_disabled; /* 0 (rx reorder enabled) by default */ -module_param_named(rx_reorder_disabled, i2400m_rx_reorder_disabled, int, 0644); -module_parm_desc(rx_reorder_disabled, - "if true, rx reordering will be disabled."); - -struct i2400m_report_hook_args { - struct sk_buff *skb_rx; - const struct i2400m_l3l4_hdr *l3l4_hdr; - size_t size; - struct list_head list_node; -}; - - -/* - * execute i2400m_report_hook in a workqueue - * - * goes over the list of queued reports in i2400m->rx_reports and - * processes them. - * - * note: refcounts on i2400m are not needed because we flush the - * workqueue this runs on (i2400m->work_queue) before destroying - * i2400m. 
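The drain loop that i2400m_report_hook_work() implements (splice the shared report list to a private one under the lock, process the private copy lock-free, repeat until the shared list stays empty) can be sketched in userspace; a pthread mutex stands in for the kernel spinlock, and all names here are illustrative:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

struct report { struct report *next; int id; };

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct report *pending_reports;	/* shared queue, newest first */

/* splice the shared list out while holding the lock, then walk the
 * private copy without it; loop in case more reports were queued
 * while we were busy processing */
static int drain_reports(void)
{
	int processed = 0;

	for (;;) {
		struct report *list;

		pthread_mutex_lock(&list_lock);
		list = pending_reports;
		pending_reports = NULL;
		pthread_mutex_unlock(&list_lock);
		if (list == NULL)
			break;
		while (list != NULL) {
			processed++;	/* stand-in for report handling */
			list = list->next;
		}
	}
	return processed;
}
```

The point of the splice is that the lock is held only for a pointer swap, never across the (potentially slow) per-report processing.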
- */ -void i2400m_report_hook_work(struct work_struct *ws) -{ - struct i2400m *i2400m = container_of(ws, struct i2400m, rx_report_ws); - struct device *dev = i2400m_dev(i2400m); - struct i2400m_report_hook_args *args, *args_next; - list_head(list); - unsigned long flags; - - while (1) { - spin_lock_irqsave(&i2400m->rx_lock, flags); - list_splice_init(&i2400m->rx_reports, &list); - spin_unlock_irqrestore(&i2400m->rx_lock, flags); - if (list_empty(&list)) - break; - else - d_printf(1, dev, "processing queued reports "); - list_for_each_entry_safe(args, args_next, &list, list_node) { - d_printf(2, dev, "processing queued report %p ", args); - i2400m_report_hook(i2400m, args->l3l4_hdr, args->size); - kfree_skb(args->skb_rx); - list_del(&args->list_node); - kfree(args); - } - } -} - - -/* - * flush the list of queued reports - */ -static -void i2400m_report_hook_flush(struct i2400m *i2400m) -{ - struct device *dev = i2400m_dev(i2400m); - struct i2400m_report_hook_args *args, *args_next; - list_head(list); - unsigned long flags; - - d_printf(1, dev, "flushing queued reports "); - spin_lock_irqsave(&i2400m->rx_lock, flags); - list_splice_init(&i2400m->rx_reports, &list); - spin_unlock_irqrestore(&i2400m->rx_lock, flags); - list_for_each_entry_safe(args, args_next, &list, list_node) { - d_printf(2, dev, "flushing queued report %p ", args); - kfree_skb(args->skb_rx); - list_del(&args->list_node); - kfree(args); - } -} - - -/* - * queue a report for later processing - * - * @i2400m: device descriptor - * @skb_rx: skb that contains the payload (for reference counting) - * @l3l4_hdr: pointer to the control - * @size: size of the message - */ -static -void i2400m_report_hook_queue(struct i2400m *i2400m, struct sk_buff *skb_rx, - const void *l3l4_hdr, size_t size) -{ - struct device *dev = i2400m_dev(i2400m); - unsigned long flags; - struct i2400m_report_hook_args *args; - - args = kzalloc(sizeof(*args), gfp_noio); - if (args) { - args->skb_rx = skb_get(skb_rx); - args->l3l4_hdr 
= l3l4_hdr; - args->size = size; - spin_lock_irqsave(&i2400m->rx_lock, flags); - list_add_tail(&args->list_node, &i2400m->rx_reports); - spin_unlock_irqrestore(&i2400m->rx_lock, flags); - d_printf(2, dev, "queued report %p ", args); - rmb(); /* see i2400m->ready's documentation */ - if (likely(i2400m->ready)) /* only send if up */ - queue_work(i2400m->work_queue, &i2400m->rx_report_ws); - } else { - if (printk_ratelimit()) - dev_err(dev, "%s:%u: can't allocate %zu b ", - __func__, __line__, sizeof(*args)); - } -} - - -/* - * process an ack to a command - * - * @i2400m: device descriptor - * @payload: pointer to message - * @size: size of the message - * - * pass the acknowledgment (in an skb) to the thread that is waiting - * for it in i2400m->msg_completion. - * - * we need to coordinate properly with the thread waiting for the - * ack. check if it is waiting or if it is gone. we lose the spinlock - * to avoid allocating on atomic contexts (yeah, could use gfp_atomic, - * but this is not so speed critical). - */ -static -void i2400m_rx_ctl_ack(struct i2400m *i2400m, - const void *payload, size_t size) -{ - struct device *dev = i2400m_dev(i2400m); - struct wimax_dev *wimax_dev = &i2400m->wimax_dev; - unsigned long flags; - struct sk_buff *ack_skb; - - /* anyone waiting for an answer? */ - spin_lock_irqsave(&i2400m->rx_lock, flags); - if (i2400m->ack_skb != err_ptr(-einprogress)) { - dev_err(dev, "huh? reply to command with no waiters "); - goto error_no_waiter; - } - spin_unlock_irqrestore(&i2400m->rx_lock, flags); - - ack_skb = wimax_msg_alloc(wimax_dev, null, payload, size, gfp_kernel); - - /* check waiter didn't time out waiting for the answer... */ - spin_lock_irqsave(&i2400m->rx_lock, flags); - if (i2400m->ack_skb != err_ptr(-einprogress)) { - d_printf(1, dev, "huh? 
waiter for command reply cancelled "); - goto error_waiter_cancelled; - } - if (is_err(ack_skb)) - dev_err(dev, "cmd/get/set ack: cannot allocate skb "); - i2400m->ack_skb = ack_skb; - spin_unlock_irqrestore(&i2400m->rx_lock, flags); - complete(&i2400m->msg_completion); - return; - -error_waiter_cancelled: - if (!is_err(ack_skb)) - kfree_skb(ack_skb); -error_no_waiter: - spin_unlock_irqrestore(&i2400m->rx_lock, flags); -} - - -/* - * receive and process a control payload - * - * @i2400m: device descriptor - * @skb_rx: skb that contains the payload (for reference counting) - * @payload: pointer to message - * @size: size of the message - * - * there are two types of control rx messages: reports (asynchronous, - * like your every day interrupts) and 'acks' (reponses to a command, - * get or set request). - * - * if it is a report, we run hooks on it (to extract information for - * things we need to do in the driver) and then pass it over to the - * wimax stack to send it to user space. - * - * note: report processing is done in a workqueue specific to the - * generic driver, to avoid deadlocks in the system. - * - * if it is not a report, it is an ack to a previously executed - * command, set or get, so wake up whoever is waiting for it from - * i2400m_msg_to_dev(). i2400m_rx_ctl_ack() takes care of that. - * - * note that the sizes we pass to other functions from here are the - * sizes of the _l3l4_hdr + payload, not full buffer sizes, as we have - * verified in _msg_size_check() that they are congruent. - * - * for reports: we can't clone the original skb where the data is - * because we need to send this up via netlink; netlink has to add - * headers and we can't overwrite what's preceding the payload...as - * it is another message. so we just dup them. 
- */ -static -void i2400m_rx_ctl(struct i2400m *i2400m, struct sk_buff *skb_rx, - const void *payload, size_t size) -{ - int result; - struct device *dev = i2400m_dev(i2400m); - const struct i2400m_l3l4_hdr *l3l4_hdr = payload; - unsigned msg_type; - - result = i2400m_msg_size_check(i2400m, l3l4_hdr, size); - if (result < 0) { - dev_err(dev, "hw bug? device sent a bad message: %d ", - result); - goto error_check; - } - msg_type = le16_to_cpu(l3l4_hdr->type); - d_printf(1, dev, "%s 0x%04x: %zu bytes ", - msg_type & i2400m_mt_report_mask ? "report" : "cmd/set/get", - msg_type, size); - d_dump(2, dev, l3l4_hdr, size); - if (msg_type & i2400m_mt_report_mask) { - /* - * process each report - * - * - has to be ran serialized as well - * - * - the handling might force the execution of - * commands. that might cause reentrancy issues with - * bus-specific subdrivers and workqueues, so the we - * run it in a separate workqueue. - * - * - when the driver is not yet ready to handle them, - * they are queued and at some point the queue is - * restarted [note: we can't queue skbs directly, as - * this might be a piece of a skb, not the whole - * thing, and this is cheaper than cloning the - * skb]. - * - * note we don't do refcounting for the device - * structure; this is because before destroying - * 'i2400m', we make sure to flush the - * i2400m->work_queue, so there are no issues. 
- */ - i2400m_report_hook_queue(i2400m, skb_rx, l3l4_hdr, size); - if (unlikely(i2400m->trace_msg_from_user)) - wimax_msg(&i2400m->wimax_dev, "echo", - l3l4_hdr, size, gfp_kernel); - result = wimax_msg(&i2400m->wimax_dev, null, l3l4_hdr, size, - gfp_kernel); - if (result < 0) - dev_err(dev, "error sending report to userspace: %d ", - result); - } else /* an ack to a cmd, get or set */ - i2400m_rx_ctl_ack(i2400m, payload, size); -error_check: - return; -} - - -/* - * receive and send up a trace - * - * @i2400m: device descriptor - * @skb_rx: skb that contains the trace (for reference counting) - * @payload: pointer to trace message inside the skb - * @size: size of the message - * - * the i2400m might produce trace information (diagnostics) and we - * send them through a different kernel-to-user pipe (to avoid - * clogging it). - * - * as in i2400m_rx_ctl(), we can't clone the original skb where the - * data is because we need to send this up via netlink; netlink has to - * add headers and we can't overwrite what's preceding the - * payload...as it is another message. so we just dup them. - */ -static -void i2400m_rx_trace(struct i2400m *i2400m, - const void *payload, size_t size) -{ - int result; - struct device *dev = i2400m_dev(i2400m); - struct wimax_dev *wimax_dev = &i2400m->wimax_dev; - const struct i2400m_l3l4_hdr *l3l4_hdr = payload; - unsigned msg_type; - - result = i2400m_msg_size_check(i2400m, l3l4_hdr, size); - if (result < 0) { - dev_err(dev, "hw bug? device sent a bad trace message: %d ", - result); - goto error_check; - } - msg_type = le16_to_cpu(l3l4_hdr->type); - d_printf(1, dev, "trace %s 0x%04x: %zu bytes ", - msg_type & i2400m_mt_report_mask ? 
"report" : "cmd/set/get", - msg_type, size); - d_dump(2, dev, l3l4_hdr, size); - result = wimax_msg(wimax_dev, "trace", l3l4_hdr, size, gfp_kernel); - if (result < 0) - dev_err(dev, "error sending trace to userspace: %d ", - result); -error_check: - return; -} - - -/* - * reorder queue data stored on skb->cb while the skb is queued in the - * reorder queues. - */ -struct i2400m_roq_data { - unsigned sn; /* serial number for the skb */ - enum i2400m_cs cs; /* packet type for the skb */ -}; - - -/* - * reorder queue - * - * @ws: window start; sequence number where the current window start - * is for this queue - * @queue: the skb queue itself - * @log: circular ring buffer used to log information about the - * reorder process in this queue that can be displayed in case of - * error to help diagnose it. - * - * this is the head for a list of skbs. in the skb->cb member of the - * skb when queued here contains a 'struct i2400m_roq_data' were we - * store the sequence number (sn) and the cs (packet type) coming from - * the rx payload header from the device. - */ -struct i2400m_roq { - unsigned ws; - struct sk_buff_head queue; - struct i2400m_roq_log *log; -}; - - -static -void __i2400m_roq_init(struct i2400m_roq *roq) -{ - roq->ws = 0; - skb_queue_head_init(&roq->queue); -} - - -static -unsigned __i2400m_roq_index(struct i2400m *i2400m, struct i2400m_roq *roq) -{ - return ((unsigned long) roq - (unsigned long) i2400m->rx_roq) - / sizeof(*roq); -} - - -/* - * normalize a sequence number based on the queue's window start - * - * nsn = (sn - ws) % 2048 - * - * note that if @sn < @roq->ws, we still need a positive number; %'s - * sign is implementation specific, so we normalize it by adding 2048 - * to bring it to be positive. 
- */ -static -unsigned __i2400m_roq_nsn(struct i2400m_roq *roq, unsigned sn) -{ - int r; - r = ((int) sn - (int) roq->ws) % 2048; - if (r < 0) - r += 2048; - return r; -} - - -/* - * circular buffer to keep the last n reorder operations - * - * in case something fails, dump them to try to come up with what - * happened. - */ -enum { - i2400m_roq_log_length = 32, -}; - -struct i2400m_roq_log { - struct i2400m_roq_log_entry { - enum i2400m_ro_type type; - unsigned ws, count, sn, nsn, new_ws; - } entry[i2400m_roq_log_length]; - unsigned in, out; -}; - - -/* print a log entry */ -static -void i2400m_roq_log_entry_print(struct i2400m *i2400m, unsigned index, - unsigned e_index, - struct i2400m_roq_log_entry *e) -{ - struct device *dev = i2400m_dev(i2400m); - - switch(e->type) { - case i2400m_ro_type_reset: - dev_err(dev, "q#%d reset ws %u cnt %u sn %u/%u" - " - new nws %u ", - index, e->ws, e->count, e->sn, e->nsn, e->new_ws); - break; - case i2400m_ro_type_packet: - dev_err(dev, "q#%d queue ws %u cnt %u sn %u/%u ", - index, e->ws, e->count, e->sn, e->nsn); - break; - case i2400m_ro_type_ws: - dev_err(dev, "q#%d update_ws ws %u cnt %u sn %u/%u" - " - new nws %u ", - index, e->ws, e->count, e->sn, e->nsn, e->new_ws); - break; - case i2400m_ro_type_packet_ws: - dev_err(dev, "q#%d queue_update_ws ws %u cnt %u sn %u/%u" - " - new nws %u ", - index, e->ws, e->count, e->sn, e->nsn, e->new_ws); - break; - default: - dev_err(dev, "q#%d bug? 
entry %u - unknown type %u ", - index, e_index, e->type); - break; - } -} - - -static -void i2400m_roq_log_add(struct i2400m *i2400m, - struct i2400m_roq *roq, enum i2400m_ro_type type, - unsigned ws, unsigned count, unsigned sn, - unsigned nsn, unsigned new_ws) -{ - struct i2400m_roq_log_entry *e; - unsigned cnt_idx; - int index = __i2400m_roq_index(i2400m, roq); - - /* if we run out of space, we eat from the end */ - if (roq->log->in - roq->log->out == i2400m_roq_log_length) - roq->log->out++; - cnt_idx = roq->log->in++ % i2400m_roq_log_length; - e = &roq->log->entry[cnt_idx]; - - e->type = type; - e->ws = ws; - e->count = count; - e->sn = sn; - e->nsn = nsn; - e->new_ws = new_ws; - - if (d_test(1)) - i2400m_roq_log_entry_print(i2400m, index, cnt_idx, e); -} - - -/* dump all the entries in the fifo and reinitialize it */ -static -void i2400m_roq_log_dump(struct i2400m *i2400m, struct i2400m_roq *roq) -{ - unsigned cnt, cnt_idx; - struct i2400m_roq_log_entry *e; - int index = __i2400m_roq_index(i2400m, roq); - - bug_on(roq->log->out > roq->log->in); - for (cnt = roq->log->out; cnt < roq->log->in; cnt++) { - cnt_idx = cnt % i2400m_roq_log_length; - e = &roq->log->entry[cnt_idx]; - i2400m_roq_log_entry_print(i2400m, index, cnt_idx, e); - memset(e, 0, sizeof(*e)); - } - roq->log->in = roq->log->out = 0; -} - - -/* - * backbone for the queuing of an skb (by normalized sequence number) - * - * @i2400m: device descriptor - * @roq: reorder queue where to add - * @skb: the skb to add - * @sn: the sequence number of the skb - * @nsn: the normalized sequence number of the skb (pre-computed by the - * caller from the @sn and @roq->ws). - * - * we try first a couple of quick cases: - * - * - the queue is empty - * - the skb would be appended to the queue - * - * these will be the most common operations. - * - * if these fail, then we have to do a sorted insertion in the queue, - * which is the slowest path. 
- * - * we don't have to acquire a reference count as we are going to own it. - */ -static -void __i2400m_roq_queue(struct i2400m *i2400m, struct i2400m_roq *roq, - struct sk_buff *skb, unsigned sn, unsigned nsn) -{ - struct device *dev = i2400m_dev(i2400m); - struct sk_buff *skb_itr; - struct i2400m_roq_data *roq_data_itr, *roq_data; - unsigned nsn_itr; - - d_fnstart(4, dev, "(i2400m %p roq %p skb %p sn %u nsn %u) ", - i2400m, roq, skb, sn, nsn); - - roq_data = (struct i2400m_roq_data *) &skb->cb; - build_bug_on(sizeof(*roq_data) > sizeof(skb->cb)); - roq_data->sn = sn; - d_printf(3, dev, "erx: roq %p [ws %u] nsn %d sn %u ", - roq, roq->ws, nsn, roq_data->sn); - - /* queues will be empty on not-so-bad environments, so try - * that first */ - if (skb_queue_empty(&roq->queue)) { - d_printf(2, dev, "erx: roq %p - first one ", roq); - __skb_queue_head(&roq->queue, skb); - goto out; - } - /* now try append, as most of the operations will be that */ - skb_itr = skb_peek_tail(&roq->queue); - roq_data_itr = (struct i2400m_roq_data *) &skb_itr->cb; - nsn_itr = __i2400m_roq_nsn(roq, roq_data_itr->sn); - /* nsn bounds assumed correct (checked when it was queued) */ - if (nsn >= nsn_itr) { - d_printf(2, dev, "erx: roq %p - appended after %p (nsn %d sn %u) ", - roq, skb_itr, nsn_itr, roq_data_itr->sn); - __skb_queue_tail(&roq->queue, skb); - goto out; - } - /* none of the fast path options worked. iterate to find the - * right spot where to insert the packet; we know the queue is - * not empty, so we are not the first ones; we also know we - * are not going to be the last ones. the list is sorted, so - * we have to insert before the first entry with an nsn_itr - * greater than our nsn. 
*/ - skb_queue_walk(&roq->queue, skb_itr) { - roq_data_itr = (struct i2400m_roq_data *) &skb_itr->cb; - nsn_itr = __i2400m_roq_nsn(roq, roq_data_itr->sn); - /* nsn bounds assumed correct (checked when it was queued) */ - if (nsn_itr > nsn) { - d_printf(2, dev, "erx: roq %p - queued before %p " - "(nsn %d sn %u) ", roq, skb_itr, nsn_itr, - roq_data_itr->sn); - __skb_queue_before(&roq->queue, skb_itr, skb); - goto out; - } - } - /* if we get here, that is very bad -- print info to help - * diagnose and crash it */ - dev_err(dev, "sw bug? failed to insert packet "); - dev_err(dev, "erx: roq %p [ws %u] skb %p nsn %d sn %u ", - roq, roq->ws, skb, nsn, roq_data->sn); - skb_queue_walk(&roq->queue, skb_itr) { - roq_data_itr = (struct i2400m_roq_data *) &skb_itr->cb; - nsn_itr = __i2400m_roq_nsn(roq, roq_data_itr->sn); - /* nsn bounds assumed correct (checked when it was queued) */ - dev_err(dev, "erx: roq %p skb_itr %p nsn %d sn %u ", - roq, skb_itr, nsn_itr, roq_data_itr->sn); - } - bug(); -out: - d_fnend(4, dev, "(i2400m %p roq %p skb %p sn %u nsn %d) = void ", - i2400m, roq, skb, sn, nsn); -} - - -/* - * backbone for the update window start operation - * - * @i2400m: device descriptor - * @roq: reorder queue - * @sn: new sequence number - * - * updates the window start of a queue; when doing so, it must deliver - * to the networking stack all the queued skb's whose normalized - * sequence number is lower than the new normalized window start. - */ -static -unsigned __i2400m_roq_update_ws(struct i2400m *i2400m, struct i2400m_roq *roq, - unsigned sn) -{ - struct device *dev = i2400m_dev(i2400m); - struct sk_buff *skb_itr, *tmp_itr; - struct i2400m_roq_data *roq_data_itr; - unsigned new_nws, nsn_itr; - - new_nws = __i2400m_roq_nsn(roq, sn); - /* - * for type 2(update_window_start) rx messages, there is no - * need to check if the normalized sequence number is greater 1023. - * simply insert and deliver all packets to the host up to the - * window start. 
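The three insertion paths in __i2400m_roq_queue() (empty queue, tail append, sorted walk) all maintain one invariant: entries stay ordered by normalized sequence number. An array-backed sketch of that sorted insertion, with an illustrative 16-entry bound standing in for the driver's skb list:

```c
#include <assert.h>
#include <string.h>

#define ROQ_MAX 16

struct roq {
	unsigned ws;		/* window start */
	unsigned sn[ROQ_MAX];	/* queued sequence numbers, sorted by nsn */
	unsigned len;
};

static unsigned nsn(const struct roq *q, unsigned sn)
{
	int r = ((int)sn - (int)q->ws) % 2048;

	return r < 0 ? (unsigned)(r + 2048) : (unsigned)r;
}

/* walk back from the tail (the common case is an append, so this is
 * usually zero steps) and insert before the first entry whose
 * normalized sequence number exceeds ours */
static void roq_insert(struct roq *q, unsigned sn_new)
{
	unsigned i = q->len;

	assert(q->len < ROQ_MAX);
	while (i > 0 && nsn(q, q->sn[i - 1]) > nsn(q, sn_new))
		i--;
	memmove(&q->sn[i + 1], &q->sn[i], (q->len - i) * sizeof(q->sn[0]));
	q->sn[i] = sn_new;
	q->len++;
}
```

With ws = 2040, inserting raw sequence numbers 3, 2041, 2046 yields the order 2041, 2046, 3: their normalized values are 1, 6 and 11, so the post-wrap number 3 correctly sorts last.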
- */ - skb_queue_walk_safe(&roq->queue, skb_itr, tmp_itr) { - roq_data_itr = (struct i2400m_roq_data *) &skb_itr->cb; - nsn_itr = __i2400m_roq_nsn(roq, roq_data_itr->sn); - /* nsn bounds assumed correct (checked when it was queued) */ - if (nsn_itr < new_nws) { - d_printf(2, dev, "erx: roq %p - release skb %p " - "(nsn %u/%u new nws %u) ", - roq, skb_itr, nsn_itr, roq_data_itr->sn, - new_nws); - __skb_unlink(skb_itr, &roq->queue); - i2400m_net_erx(i2400m, skb_itr, roq_data_itr->cs); - } - else - break; /* rest of packets all nsn_itr > nws */ - } - roq->ws = sn; - return new_nws; -} - - -/* - * reset a queue - * - * @i2400m: device descriptor - * @cin: queue index - * - * deliver all the packets and reset the window-start to zero. name is - * kind of misleading. - */ -static -void i2400m_roq_reset(struct i2400m *i2400m, struct i2400m_roq *roq) -{ - struct device *dev = i2400m_dev(i2400m); - struct sk_buff *skb_itr, *tmp_itr; - struct i2400m_roq_data *roq_data_itr; - - d_fnstart(2, dev, "(i2400m %p roq %p) ", i2400m, roq); - i2400m_roq_log_add(i2400m, roq, i2400m_ro_type_reset, - roq->ws, skb_queue_len(&roq->queue), - ~0, ~0, 0); - skb_queue_walk_safe(&roq->queue, skb_itr, tmp_itr) { - roq_data_itr = (struct i2400m_roq_data *) &skb_itr->cb; - d_printf(2, dev, "erx: roq %p - release skb %p (sn %u) ", - roq, skb_itr, roq_data_itr->sn); - __skb_unlink(skb_itr, &roq->queue); - i2400m_net_erx(i2400m, skb_itr, roq_data_itr->cs); - } - roq->ws = 0; - d_fnend(2, dev, "(i2400m %p roq %p) = void ", i2400m, roq); -} - - -/* - * queue a packet - * - * @i2400m: device descriptor - * @cin: queue index - * @skb: containing the packet data - * @fbn: first block number of the packet in @skb - * @lbn: last block number of the packet in @skb - * - * the hardware is asking the driver to queue a packet for later - * delivery to the networking stack. 
- */ -static -void i2400m_roq_queue(struct i2400m *i2400m, struct i2400m_roq *roq, - struct sk_buff *skb, unsigned lbn) -{ - struct device *dev = i2400m_dev(i2400m); - unsigned nsn, len; - - d_fnstart(2, dev, "(i2400m %p roq %p skb %p lbn %u) = void ", - i2400m, roq, skb, lbn); - len = skb_queue_len(&roq->queue); - nsn = __i2400m_roq_nsn(roq, lbn); - if (unlikely(nsn >= 1024)) { - dev_err(dev, "sw bug? queue nsn %d (lbn %u ws %u) ", - nsn, lbn, roq->ws); - i2400m_roq_log_dump(i2400m, roq); - i2400m_reset(i2400m, i2400m_rt_warm); - } else { - __i2400m_roq_queue(i2400m, roq, skb, lbn, nsn); - i2400m_roq_log_add(i2400m, roq, i2400m_ro_type_packet, - roq->ws, len, lbn, nsn, ~0); - } - d_fnend(2, dev, "(i2400m %p roq %p skb %p lbn %u) = void ", - i2400m, roq, skb, lbn); -} - - -/* - * update the window start in a reorder queue and deliver all skbs - * with a lower window start - * - * @i2400m: device descriptor - * @roq: reorder queue - * @sn: new sequence number - */ -static -void i2400m_roq_update_ws(struct i2400m *i2400m, struct i2400m_roq *roq, - unsigned sn) -{ - struct device *dev = i2400m_dev(i2400m); - unsigned old_ws, nsn, len; - - d_fnstart(2, dev, "(i2400m %p roq %p sn %u) ", i2400m, roq, sn); - old_ws = roq->ws; - len = skb_queue_len(&roq->queue); - nsn = __i2400m_roq_update_ws(i2400m, roq, sn); - i2400m_roq_log_add(i2400m, roq, i2400m_ro_type_ws, - old_ws, len, sn, nsn, roq->ws); - d_fnstart(2, dev, "(i2400m %p roq %p sn %u) = void ", i2400m, roq, sn); -} - - -/* - * queue a packet and update the window start - * - * @i2400m: device descriptor - * @cin: queue index - * @skb: containing the packet data - * @fbn: first block number of the packet in @skb - * @sn: last block number of the packet in @skb - * - * note that unlike i2400m_roq_update_ws(), which sets the new window - * start to @sn, in here we'll set it to @sn + 1. 
- */ -static -void i2400m_roq_queue_update_ws(struct i2400m *i2400m, struct i2400m_roq *roq, - struct sk_buff *skb, unsigned sn) -{ - struct device *dev = i2400m_dev(i2400m); - unsigned nsn, old_ws, len; - - d_fnstart(2, dev, "(i2400m %p roq %p skb %p sn %u) ", - i2400m, roq, skb, sn); - len = skb_queue_len(&roq->queue); - nsn = __i2400m_roq_nsn(roq, sn); - /* - * for type 3(queue_update_window_start) rx messages, there is no - * need to check if the normalized sequence number is greater 1023. - * simply insert and deliver all packets to the host up to the - * window start. - */ - old_ws = roq->ws; - /* if the queue is empty, don't bother as we'd queue - * it and immediately unqueue it -- just deliver it. - */ - if (len == 0) { - struct i2400m_roq_data *roq_data; - roq_data = (struct i2400m_roq_data *) &skb->cb; - i2400m_net_erx(i2400m, skb, roq_data->cs); - } else - __i2400m_roq_queue(i2400m, roq, skb, sn, nsn); - - __i2400m_roq_update_ws(i2400m, roq, sn + 1); - i2400m_roq_log_add(i2400m, roq, i2400m_ro_type_packet_ws, - old_ws, len, sn, nsn, roq->ws); - - d_fnend(2, dev, "(i2400m %p roq %p skb %p sn %u) = void ", - i2400m, roq, skb, sn); -} - - -/* - * this routine destroys the memory allocated for rx_roq, when no - * other thread is accessing it. access to rx_roq is refcounted by - * rx_roq_refcount, hence memory allocated must be destroyed when - * rx_roq_refcount becomes zero. this routine gets executed when - * rx_roq_refcount becomes zero. 
- */ -static void i2400m_rx_roq_destroy(struct kref *ref) -{ - unsigned itr; - struct i2400m *i2400m - = container_of(ref, struct i2400m, rx_roq_refcount); - for (itr = 0; itr < i2400m_ro_cin + 1; itr++) - __skb_queue_purge(&i2400m->rx_roq[itr].queue); - kfree(i2400m->rx_roq[0].log); - kfree(i2400m->rx_roq); - i2400m->rx_roq = null; -} - -/* - * receive and send up an extended data packet - * - * @i2400m: device descriptor - * @skb_rx: skb that contains the extended data packet - * @single_last: 1 if the payload is the only one or the last one of - * the skb. - * @payload: pointer to the packet's data inside the skb - * @size: size of the payload - * - * starting in v1.4 of the i2400m's firmware, the device can send data - * packets to the host in an extended format that; this incudes a 16 - * byte header (struct i2400m_pl_edata_hdr). using this header's space - * we can fake ethernet headers for ethernet device emulation without - * having to copy packets around. - * - * this function handles said path. - * - * - * receive and send up an extended data packet that requires no reordering - * - * @i2400m: device descriptor - * @skb_rx: skb that contains the extended data packet - * @single_last: 1 if the payload is the only one or the last one of - * the skb. - * @payload: pointer to the packet's data (past the actual extended - * data payload header). - * @size: size of the payload - * - * pass over to the networking stack a data packet that might have - * reordering requirements. - * - * this needs to the decide if the skb in which the packet is - * contained can be reused or if it needs to be cloned. then it has to - * be trimmed in the edges so that the beginning is the space for eth - * header and then pass it to i2400m_net_erx() for the stack - * - * assumes the caller has verified the sanity of the payload (size, - * etc) already. 
- */ -static -void i2400m_rx_edata(struct i2400m *i2400m, struct sk_buff *skb_rx, - unsigned single_last, const void *payload, size_t size) -{ - struct device *dev = i2400m_dev(i2400m); - const struct i2400m_pl_edata_hdr *hdr = payload; - struct net_device *net_dev = i2400m->wimax_dev.net_dev; - struct sk_buff *skb; - enum i2400m_cs cs; - u32 reorder; - unsigned ro_needed, ro_type, ro_cin, ro_sn; - struct i2400m_roq *roq; - struct i2400m_roq_data *roq_data; - unsigned long flags; - - build_bug_on(eth_hlen > sizeof(*hdr)); - - d_fnstart(2, dev, "(i2400m %p skb_rx %p single %u payload %p " - "size %zu) ", i2400m, skb_rx, single_last, payload, size); - if (size < sizeof(*hdr)) { - dev_err(dev, "erx: hw bug? message with short header (%zu " - "vs %zu bytes expected) ", size, sizeof(*hdr)); - goto error; - } - - if (single_last) { - skb = skb_get(skb_rx); - d_printf(3, dev, "erx: skb %p reusing ", skb); - } else { - skb = skb_clone(skb_rx, gfp_kernel); - if (skb == null) { - dev_err(dev, "erx: no memory to clone skb "); - net_dev->stats.rx_dropped++; - goto error_skb_clone; - } - d_printf(3, dev, "erx: skb %p cloned from %p ", skb, skb_rx); - } - /* now we have to pull and trim so that the skb points to the - * beginning of the ip packet; the netdev part will add the - * ethernet header as needed - we know there is enough space - * because we checked in i2400m_rx_edata(). 
*/ - skb_pull(skb, payload + sizeof(*hdr) - (void *) skb->data); - skb_trim(skb, (void *) skb_end_pointer(skb) - payload - sizeof(*hdr)); - - reorder = le32_to_cpu(hdr->reorder); - ro_needed = reorder & i2400m_ro_needed; - cs = hdr->cs; - if (ro_needed) { - ro_type = (reorder >> i2400m_ro_type_shift) & i2400m_ro_type; - ro_cin = (reorder >> i2400m_ro_cin_shift) & i2400m_ro_cin; - ro_sn = (reorder >> i2400m_ro_sn_shift) & i2400m_ro_sn; - - spin_lock_irqsave(&i2400m->rx_lock, flags); - if (i2400m->rx_roq == null) { - kfree_skb(skb); /* rx_roq is already destroyed */ - spin_unlock_irqrestore(&i2400m->rx_lock, flags); - goto error; - } - roq = &i2400m->rx_roq[ro_cin]; - kref_get(&i2400m->rx_roq_refcount); - spin_unlock_irqrestore(&i2400m->rx_lock, flags); - - roq_data = (struct i2400m_roq_data *) &skb->cb; - roq_data->sn = ro_sn; - roq_data->cs = cs; - d_printf(2, dev, "erx: reorder needed: " - "type %u cin %u [ws %u] sn %u/%u len %zub ", - ro_type, ro_cin, roq->ws, ro_sn, - __i2400m_roq_nsn(roq, ro_sn), size); - d_dump(2, dev, payload, size); - switch(ro_type) { - case i2400m_ro_type_reset: - i2400m_roq_reset(i2400m, roq); - kfree_skb(skb); /* no data here */ - break; - case i2400m_ro_type_packet: - i2400m_roq_queue(i2400m, roq, skb, ro_sn); - break; - case i2400m_ro_type_ws: - i2400m_roq_update_ws(i2400m, roq, ro_sn); - kfree_skb(skb); /* no data here */ - break; - case i2400m_ro_type_packet_ws: - i2400m_roq_queue_update_ws(i2400m, roq, skb, ro_sn); - break; - default: - dev_err(dev, "hw bug? 
unknown reorder type %u ", ro_type); - } - - spin_lock_irqsave(&i2400m->rx_lock, flags); - kref_put(&i2400m->rx_roq_refcount, i2400m_rx_roq_destroy); - spin_unlock_irqrestore(&i2400m->rx_lock, flags); - } - else - i2400m_net_erx(i2400m, skb, cs); -error_skb_clone: -error: - d_fnend(2, dev, "(i2400m %p skb_rx %p single %u payload %p " - "size %zu) = void ", i2400m, skb_rx, single_last, payload, size); -} - - -/* - * act on a received payload - * - * @i2400m: device instance - * @skb_rx: skb where the transaction was received - * @single_last: 1 this is the only payload or the last one (so the - * skb can be reused instead of cloned). - * @pld: payload descriptor - * @payload: payload data - * - * upon reception of a payload, look at its guts in the payload - * descriptor and decide what to do with it. if it is a single payload - * skb or if the last skb is a data packet, the skb will be referenced - * and modified (so it doesn't have to be cloned). - */ -static -void i2400m_rx_payload(struct i2400m *i2400m, struct sk_buff *skb_rx, - unsigned single_last, const struct i2400m_pld *pld, - const void *payload) -{ - struct device *dev = i2400m_dev(i2400m); - size_t pl_size = i2400m_pld_size(pld); - enum i2400m_pt pl_type = i2400m_pld_type(pld); - - d_printf(7, dev, "rx: received payload type %u, %zu bytes ", - pl_type, pl_size); - d_dump(8, dev, payload, pl_size); - - switch (pl_type) { - case i2400m_pt_data: - d_printf(3, dev, "rx: data payload %zu bytes ", pl_size); - i2400m_net_rx(i2400m, skb_rx, single_last, payload, pl_size); - break; - case i2400m_pt_ctrl: - i2400m_rx_ctl(i2400m, skb_rx, payload, pl_size); - break; - case i2400m_pt_trace: - i2400m_rx_trace(i2400m, payload, pl_size); - break; - case i2400m_pt_edata: - d_printf(3, dev, "erx: data payload %zu bytes ", pl_size); - i2400m_rx_edata(i2400m, skb_rx, single_last, payload, pl_size); - break; - default: /* anything else shouldn't come to the host */ - if (printk_ratelimit()) - dev_err(dev, "rx: hw bug? 
unexpected payload type %u ", - pl_type); - } -} - - -/* - * check a received transaction's message header - * - * @i2400m: device descriptor - * @msg_hdr: message header - * @buf_size: size of the received buffer - * - * check that the declarations done by a rx buffer message header are - * sane and consistent with the amount of data that was received. - */ -static -int i2400m_rx_msg_hdr_check(struct i2400m *i2400m, - const struct i2400m_msg_hdr *msg_hdr, - size_t buf_size) -{ - int result = -eio; - struct device *dev = i2400m_dev(i2400m); - if (buf_size < sizeof(*msg_hdr)) { - dev_err(dev, "rx: hw bug? message with short header (%zu " - "vs %zu bytes expected) ", buf_size, sizeof(*msg_hdr)); - goto error; - } - if (msg_hdr->barker != cpu_to_le32(i2400m_d2h_msg_barker)) { - dev_err(dev, "rx: hw bug? message received with unknown " - "barker 0x%08x (buf_size %zu bytes) ", - le32_to_cpu(msg_hdr->barker), buf_size); - goto error; - } - if (msg_hdr->num_pls == 0) { - dev_err(dev, "rx: hw bug? zero payload packets in message "); - goto error; - } - if (le16_to_cpu(msg_hdr->num_pls) > i2400m_max_pls_in_msg) { - dev_err(dev, "rx: hw bug? message contains more payload " - "than maximum; ignoring. "); - goto error; - } - result = 0; -error: - return result; -} - - -/* - * check a payload descriptor against the received data - * - * @i2400m: device descriptor - * @pld: payload descriptor - * @pl_itr: offset (in bytes) in the received buffer the payload is - * located - * @buf_size: size of the received buffer - * - * given a payload descriptor (part of a rx buffer), check it is sane - * and that the data it declares fits in the buffer. 
- */ -static -int i2400m_rx_pl_descr_check(struct i2400m *i2400m, - const struct i2400m_pld *pld, - size_t pl_itr, size_t buf_size) -{ - int result = -eio; - struct device *dev = i2400m_dev(i2400m); - size_t pl_size = i2400m_pld_size(pld); - enum i2400m_pt pl_type = i2400m_pld_type(pld); - - if (pl_size > i2400m->bus_pl_size_max) { - dev_err(dev, "rx: hw bug? payload @%zu: size %zu is " - "bigger than maximum %zu; ignoring message ", - pl_itr, pl_size, i2400m->bus_pl_size_max); - goto error; - } - if (pl_itr + pl_size > buf_size) { /* enough? */ - dev_err(dev, "rx: hw bug? payload @%zu: size %zu " - "goes beyond the received buffer " - "size (%zu bytes); ignoring message ", - pl_itr, pl_size, buf_size); - goto error; - } - if (pl_type >= i2400m_pt_illegal) { - dev_err(dev, "rx: hw bug? illegal payload type %u; " - "ignoring message ", pl_type); - goto error; - } - result = 0; -error: - return result; -} - - -/** - * i2400m_rx - receive a buffer of data from the device - * - * @i2400m: device descriptor - * @skb: skbuff where the data has been received - * - * parse in a buffer of data that contains an rx message sent from the - * device. see the file header for the format. run all checks on the - * buffer header, then run over each payload's descriptors, verify - * their consistency and act on each payload's contents. if - * everything is successful, update the device's statistics. - * - * note: you need to set the skb to contain only the length of the - * received buffer; for that, use skb_trim(skb, received_size). - * - * returns: - * - * 0 if ok, < 0 errno on error - * - * if ok, this function owns now the skb and the caller doesn't have - * to run kfree_skb() on it. however, on error, the caller still owns - * the skb and it is responsible for releasing it. 
- */ -int i2400m_rx(struct i2400m *i2400m, struct sk_buff *skb) -{ - int i, result; - struct device *dev = i2400m_dev(i2400m); - const struct i2400m_msg_hdr *msg_hdr; - size_t pl_itr, pl_size; - unsigned long flags; - unsigned num_pls, single_last, skb_len; - - skb_len = skb->len; - d_fnstart(4, dev, "(i2400m %p skb %p [size %u]) ", - i2400m, skb, skb_len); - msg_hdr = (void *) skb->data; - result = i2400m_rx_msg_hdr_check(i2400m, msg_hdr, skb_len); - if (result < 0) - goto error_msg_hdr_check; - result = -eio; - num_pls = le16_to_cpu(msg_hdr->num_pls); - /* check payload descriptor(s) */ - pl_itr = struct_size(msg_hdr, pld, num_pls); - pl_itr = align(pl_itr, i2400m_pl_align); - if (pl_itr > skb_len) { /* got all the payload descriptors? */ - dev_err(dev, "rx: hw bug? message too short (%u bytes) for " - "%u payload descriptors (%zu each, total %zu) ", - skb_len, num_pls, sizeof(msg_hdr->pld[0]), pl_itr); - goto error_pl_descr_short; - } - /* walk each payload payload--check we really got it */ - for (i = 0; i < num_pls; i++) { - /* work around old gcc warnings */ - pl_size = i2400m_pld_size(&msg_hdr->pld[i]); - result = i2400m_rx_pl_descr_check(i2400m, &msg_hdr->pld[i], - pl_itr, skb_len); - if (result < 0) - goto error_pl_descr_check; - single_last = num_pls == 1 || i == num_pls - 1; - i2400m_rx_payload(i2400m, skb, single_last, &msg_hdr->pld[i], - skb->data + pl_itr); - pl_itr += align(pl_size, i2400m_pl_align); - cond_resched(); /* don't monopolize */ - } - kfree_skb(skb); - /* update device statistics */ - spin_lock_irqsave(&i2400m->rx_lock, flags); - i2400m->rx_pl_num += i; - if (i > i2400m->rx_pl_max) - i2400m->rx_pl_max = i; - if (i < i2400m->rx_pl_min) - i2400m->rx_pl_min = i; - i2400m->rx_num++; - i2400m->rx_size_acc += skb_len; - if (skb_len < i2400m->rx_size_min) - i2400m->rx_size_min = skb_len; - if (skb_len > i2400m->rx_size_max) - i2400m->rx_size_max = skb_len; - spin_unlock_irqrestore(&i2400m->rx_lock, flags); -error_pl_descr_check: 
-error_pl_descr_short: -error_msg_hdr_check: - d_fnend(4, dev, "(i2400m %p skb %p [size %u]) = %d ", - i2400m, skb, skb_len, result); - return result; -} -export_symbol_gpl(i2400m_rx); - - -void i2400m_unknown_barker(struct i2400m *i2400m, - const void *buf, size_t size) -{ - struct device *dev = i2400m_dev(i2400m); - char prefix[64]; - const __le32 *barker = buf; - dev_err(dev, "rx: hw bug? unknown barker %08x, " - "dropping %zu bytes ", le32_to_cpu(*barker), size); - snprintf(prefix, sizeof(prefix), "%s %s: ", - dev_driver_string(dev), dev_name(dev)); - if (size > 64) { - print_hex_dump(kern_err, prefix, dump_prefix_offset, - 8, 4, buf, 64, 0); - printk(kern_err "%s... (only first 64 bytes " - "dumped) ", prefix); - } else - print_hex_dump(kern_err, prefix, dump_prefix_offset, - 8, 4, buf, size, 0); -} -export_symbol(i2400m_unknown_barker); - - -/* - * initialize the rx queue and infrastructure - * - * this sets up all the rx reordering infrastructures, which will not - * be used if reordering is not enabled or if the firmware does not - * support it. the device is told to do reordering in - * i2400m_dev_initialize(), where it also looks at the value of the - * i2400m->rx_reorder switch before taking a decission. - * - * note we allocate the roq queues in one chunk and the actual logging - * support for it (logging) in another one and then we setup the - * pointers from the first to the last. - */ -int i2400m_rx_setup(struct i2400m *i2400m) -{ - int result = 0; - - i2400m->rx_reorder = i2400m_rx_reorder_disabled? 
0 : 1; - if (i2400m->rx_reorder) { - unsigned itr; - struct i2400m_roq_log *rd; - - result = -enomem; - - i2400m->rx_roq = kcalloc(i2400m_ro_cin + 1, - sizeof(i2400m->rx_roq[0]), gfp_kernel); - if (i2400m->rx_roq == null) - goto error_roq_alloc; - - rd = kcalloc(i2400m_ro_cin + 1, sizeof(*i2400m->rx_roq[0].log), - gfp_kernel); - if (rd == null) { - result = -enomem; - goto error_roq_log_alloc; - } - - for(itr = 0; itr < i2400m_ro_cin + 1; itr++) { - __i2400m_roq_init(&i2400m->rx_roq[itr]); - i2400m->rx_roq[itr].log = &rd[itr]; - } - kref_init(&i2400m->rx_roq_refcount); - } - return 0; - -error_roq_log_alloc: - kfree(i2400m->rx_roq); -error_roq_alloc: - return result; -} - - -/* tear down the rx queue and infrastructure */ -void i2400m_rx_release(struct i2400m *i2400m) -{ - unsigned long flags; - - if (i2400m->rx_reorder) { - spin_lock_irqsave(&i2400m->rx_lock, flags); - kref_put(&i2400m->rx_roq_refcount, i2400m_rx_roq_destroy); - spin_unlock_irqrestore(&i2400m->rx_lock, flags); - } - /* at this point, nothing can be received... */ - i2400m_report_hook_flush(i2400m); -} diff --git a/drivers/staging/wimax/i2400m/sysfs.c b/drivers/staging/wimax/i2400m/sysfs.c --- a/drivers/staging/wimax/i2400m/sysfs.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-only -/* - * intel wireless wimax connection 2400m - * sysfs interfaces to show driver and device information - * - * copyright (c) 2007 intel corporation <linux-wimax@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - */ - -#include <linux/netdevice.h> -#include <linux/etherdevice.h> -#include <linux/spinlock.h> -#include <linux/device.h> -#include "i2400m.h" - - -#define d_submodule sysfs -#include "debug-levels.h" - - -/* - * set the idle timeout (msecs) - * - * fixme: eventually this should be a common wimax stack method, but - * would like to wait to see how other devices manage it. 
- */
-static
-ssize_t i2400m_idle_timeout_store(struct device *dev,
-				  struct device_attribute *attr,
-				  const char *buf, size_t size)
-{
-	ssize_t result;
-	struct i2400m *i2400m = net_dev_to_i2400m(to_net_dev(dev));
-	unsigned val;
-
-	result = -EINVAL;
-	if (sscanf(buf, "%u\n", &val) != 1)
-		goto error_no_unsigned;
-	if (val != 0 && (val < 100 || val > 300000 || val % 100 != 0)) {
-		dev_err(dev, "idle_timeout: %u: invalid msecs specification; "
-			"valid values are 0, 100-300000 in 100 increments\n",
-			val);
-		goto error_bad_value;
-	}
-	result = i2400m_set_idle_timeout(i2400m, val);
-	if (result >= 0)
-		result = size;
-error_no_unsigned:
-error_bad_value:
-	return result;
-}
-
-static
-DEVICE_ATTR_WO(i2400m_idle_timeout);
-
-static
-struct attribute *i2400m_dev_attrs[] = {
-	&dev_attr_i2400m_idle_timeout.attr,
-	NULL,
-};
-
-struct attribute_group i2400m_dev_attr_group = {
-	.name = NULL,		/* we want them in the same directory */
-	.attrs = i2400m_dev_attrs,
-};
diff --git a/drivers/staging/wimax/i2400m/tx.c b/drivers/staging/wimax/i2400m/tx.c
--- a/drivers/staging/wimax/i2400m/tx.c
+++ /dev/null
-/*
- * Intel Wireless WiMAX Connection 2400m
- * Generic (non-bus specific) TX handling
- *
- *
- * Copyright (C) 2007-2008 Intel Corporation. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- *   * Redistributions of source code must retain the above copyright
- *     notice, this list of conditions and the following disclaimer.
- *   * Redistributions in binary form must reproduce the above copyright
- *     notice, this list of conditions and the following disclaimer in
- *     the documentation and/or other materials provided with the
- *     distribution.
- *   * Neither the name of Intel Corporation nor the names of its
- *     contributors may be used to endorse or promote products derived
- *     from this software without specific prior written permission.

- * - * this software is provided by the copyright holders and contributors - * "as is" and any express or implied warranties, including, but not - * limited to, the implied warranties of merchantability and fitness for - * a particular purpose are disclaimed. in no event shall the copyright - * owner or contributors be liable for any direct, indirect, incidental, - * special, exemplary, or consequential damages (including, but not - * limited to, procurement of substitute goods or services; loss of use, - * data, or profits; or business interruption) however caused and on any - * theory of liability, whether in contract, strict liability, or tort - * (including negligence or otherwise) arising in any way out of the use - * of this software, even if advised of the possibility of such damage. - * - * - * intel corporation <linux-wimax@intel.com> - * yanir lubetkin <yanirx.lubetkin@intel.com> - * - initial implementation - * - * intel corporation <linux-wimax@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - * - rewritten to use a single fifo to lower the memory allocation - * pressure and optimize cache hits when copying to the queue, as - * well as splitting out bus-specific code. - * - * - * implements data transmission to the device; this is done through a - * software fifo, as data/control frames can be coalesced (while the - * device is reading the previous tx transaction, others accumulate). - * - * a fifo is used because at the end it is resource-cheaper that trying - * to implement scatter/gather over usb. as well, most traffic is going - * to be download (vs upload). - * - * the format for sending/receiving data to/from the i2400m is - * described in detail in rx.c:protocol format. in here we implement - * the transmission of that. this is split between a bus-independent - * part that just prepares everything and a bus-specific part that - * does the actual transmission over the bus to the device (in the - * bus-specific driver). 
- * - * - * the general format of a device-host transaction is msg-hdr, pld1, - * pld2...pldn, pl1, pl2,...pln, padding. - * - * because we need the send payload descriptors and then payloads and - * because it is kind of expensive to do scatterlists in usb (one urb - * per node), it becomes cheaper to append all the data to a fifo - * (copying to a fifo potentially in cache is cheaper). - * - * then the bus-specific code takes the parts of that fifo that are - * written and passes them to the device. - * - * so the concepts to keep in mind there are: - * - * we use a fifo to queue the data in a linear buffer. we first append - * a msg-hdr, space for i2400m_tx_pld_max payload descriptors and then - * go appending payloads until we run out of space or of payload - * descriptors. then we append padding to make the whole transaction a - * multiple of i2400m->bus_tx_block_size (as defined by the bus layer). - * - * - a tx message: a combination of a message header, payload - * descriptors and payloads. - * - * open: it is marked as active (i2400m->tx_msg is valid) and we - * can keep adding payloads to it. - * - * closed: we are not appending more payloads to this tx message - * (exhausted space in the queue, too many payloads or - * whichever). we have appended padding so the whole message - * length is aligned to i2400m->bus_tx_block_size (as set by the - * bus/transport layer). - * - * - most of the time we keep a tx message open to which we append - * payloads. - * - * - if we are going to append and there is no more space (we are at - * the end of the fifo), we close the message, mark the rest of the - * fifo space unusable (skip_tail), create a new message at the - * beginning of the fifo (if there is space) and append the message - * there. - * - * this is because we need to give linear tx messages to the bus - * engine. so we don't write a message to the remaining fifo space - * until the tail and continue at the head of it. 
- * - * - we overload one of the fields in the message header to use it as - * 'size' of the tx message, so we can iterate over them. it also - * contains a flag that indicates if we have to skip it or not. - * when we send the buffer, we update that to its real on-the-wire - * value. - * - * - the msg-hdr pld1...pld2 stuff has to be a size multiple of 16. - * - * it follows that if msg-hdr says we have n messages, the whole - * header + descriptors is 16 + 4*n; for those to be a multiple of - * 16, it follows that n can be 4, 8, 12, ... (32, 48, 64, 80... - * bytes). - * - * so if we have only 1 payload, we have to submit a header that in - * all truth has space for 4. - * - * the implication is that we reserve space for 12 (64 bytes); but - * if we fill up only (eg) 2, our header becomes 32 bytes only. so - * the tx engine has to shift those 32 bytes of msg header and 2 - * payloads and padding so that right after it the payloads start - * and the tx engine has to know about that. - * - * it is cheaper to move the header up than the whole payloads down. - * - * we do this in i2400m_tx_close(). see 'i2400m_msg_hdr->offset'. - * - * - each payload has to be size-padded to 16 bytes; before appending - * it, we just do it. - * - * - the whole message has to be padded to i2400m->bus_tx_block_size; - * we do this at close time. thus, when reserving space for the - * payload, we always make sure there is also free space for this - * padding that sooner or later will happen. - * - * when we append a message, we tell the bus specific code to kick in - * txs. it will tx (in parallel) until the buffer is exhausted--hence - * the lockin we do. the tx code will only send a tx message at the - * time (which remember, might contain more than one payload). of - * course, when the bus-specific driver attempts to tx a message that - * is still open, it gets closed first. - * - * gee, this is messy; well a picture. 
in the example below we have a - * partially full fifo, with a closed message ready to be delivered - * (with a moved message header to make sure it is size-aligned to - * 16), tail room that was unusable (and thus is marked with a message - * header that says 'skip this') and at the head of the buffer, an - * incomplete message with a couple of payloads. - * - * n ___________________________________________________ - * | | - * | tail room | - * | | - * | msg_hdr to skip (size |= 0x80000) | - * |---------------------------------------------------|------- - * | | /|\ - * | | | - * | tx message padding | | - * | | | - * | | | - * |- - - - - - - - - - - - - - - - - - - - - - - - - -| | - * | | | - * | payload 1 | | - * | | n * tx_block_size - * | | | - * |- - - - - - - - - - - - - - - - - - - - - - - - - -| | - * | | | - * | payload 1 | | - * | | | - * | | | - * |- - - - - - - - - - - - - - - - - - - - - - - - - -|- -|- - - - - * | padding 3 /|\ | | /|\ - * | padding 2 | | | | - * | pld 1 32 bytes (2 * 16) | | | - * | pld 0 | | | | - * | moved msg_hdr \|/ | \|/ | - * |- - - - - - - - - - - - - - - - - - - - - - - - - -|- - - | - * | | _pld_size - * | unused | | - * | | | - * |- - - - - - - - - - - - - - - - - - - - - - - - - -| | - * | msg_hdr (size x) [this message is closed] | \|/ - * |===================================================|========== <=== out - * | | - * | | - * | | - * | free rooom | - * | | - * | | - * | | - * | | - * | | - * | | - * | | - * | | - * | | - * |===================================================|========== <=== in - * | | - * | | - * | | - * | | - * | payload 1 | - * | | - * | | - * |- - - - - - - - - - - - - - - - - - - - - - - - - -| - * | | - * | payload 0 | - * | | - * | | - * |- - - - - - - - - - - - - - - - - - - - - - - - - -| - * | pld 11 /|\ | - * | ... 
| | - * | pld 1 64 bytes (2 * 16) | - * | pld 0 | | - * | msg_hdr (size x) \|/ [message is open] | - * 0 --------------------------------------------------- - * - * - * roadmap - * - * i2400m_tx_setup() called by i2400m_setup - * i2400m_tx_release() called by i2400m_release() - * - * i2400m_tx() called to send data or control frames - * i2400m_tx_fifo_push() allocates append-space in the fifo - * i2400m_tx_new() opens a new message in the fifo - * i2400m_tx_fits() checks if a new payload fits in the message - * i2400m_tx_close() closes an open message in the fifo - * i2400m_tx_skip_tail() marks unusable fifo tail space - * i2400m->bus_tx_kick() - * - * now i2400m->bus_tx_kick() is the the bus-specific driver backend - * implementation; that would do: - * - * i2400m->bus_tx_kick() - * i2400m_tx_msg_get() gets first message ready to go - * ...sends it... - * i2400m_tx_msg_sent() ack the message is sent; repeat from - * _tx_msg_get() until it returns null - * (fifo empty). - */ -#include <linux/netdevice.h> -#include <linux/slab.h> -#include <linux/export.h> -#include "i2400m.h" - - -#define d_submodule tx -#include "debug-levels.h" - -enum { - /** - * tx buffer size - * - * doc says maximum transaction is 16kib. if we had 16kib en - * route and 16kib being queued, it boils down to needing - * 32kib. - * 32kib is insufficient for 1400 mtu, hence increasing - * tx buffer size to 64kib. - */ - i2400m_tx_buf_size = 65536, - /** - * message header and payload descriptors have to be 16 - * aligned (16 + 4 * n = 16 * m). if we take that average sent - * packets are mtu size (~1400-~1500) it follows that we could - * fit at most 10-11 payloads in one transaction. to meet the - * alignment requirement, that means we need to leave space - * for 12 (64 bytes). to simplify, we leave space for that. if - * at the end there are less, we pad up to the nearest - * multiple of 16. 
- */
-	/*
-	 * According to Intel Wimax i3200, i5x50 and i6x50 specification
-	 * documents, the maximum number of payloads per message can be
-	 * up to 60. Increasing the number of payloads to 60 per message
-	 * helps to accommodate smaller payloads in a single transaction.
-	 */
-	I2400M_TX_PLD_MAX = 60,
-	I2400M_TX_PLD_SIZE = sizeof(struct i2400m_msg_hdr)
-			     + I2400M_TX_PLD_MAX * sizeof(struct i2400m_pld),
-	I2400M_TX_SKIP = 0x80000000,
-	/*
-	 * According to Intel Wimax i3200, i5x50 and i6x50 specification
-	 * documents, the maximum size of each message can be up to 16KiB.
-	 */
-	I2400M_TX_MSG_SIZE = 16384,
-};
-
-#define TAIL_FULL ((void *)~(unsigned long)NULL)
-
-/*
- * Calculate how much tail room is available
- *
- * Note the trick here. This path is only called for Case A (see
- * i2400m_tx_fifo_push() below), where we have:
- *
- *       Case A
- * N  ___________
- *   | tail room |
- *   |           |
- *   |<-  IN   ->|
- *   |           |
- *   |   data    |
- *   |           |
- *   |<-  OUT  ->|
- *   |           |
- *   | head room |
- * 0  -----------
- *
- * When calculating the tail_room, tx_in might get to be zero if
- * i2400m->tx_in is right at the end of the buffer (really full
- * buffer) if there is no head room. In this case, tail_room would be
- * I2400M_TX_BUF_SIZE, although it is actually zero. Hence the final
- * mod (%) operation. However, when doing this kind of optimization,
- * i2400m->tx_in being zero would fail, so we treat it as a special
- * case.
- */ -static inline -size_t __i2400m_tx_tail_room(struct i2400m *i2400m) -{ - size_t tail_room; - size_t tx_in; - - if (unlikely(i2400m->tx_in == 0)) - return i2400m_tx_buf_size; - tx_in = i2400m->tx_in % i2400m_tx_buf_size; - tail_room = i2400m_tx_buf_size - tx_in; - tail_room %= i2400m_tx_buf_size; - return tail_room; -} - - -/* - * allocate @size bytes in the tx fifo, return a pointer to it - * - * @i2400m: device descriptor - * @size: size of the buffer we need to allocate - * @padding: ensure that there is at least this many bytes of free - * contiguous space in the fifo. this is needed because later on - * we might need to add padding. - * @try_head: specify either to allocate head room or tail room space - * in the tx fifo. this boolean is required to avoids a system hang - * due to an infinite loop caused by i2400m_tx_fifo_push(). - * the caller must always try to allocate tail room space first by - * calling this routine with try_head = 0. in case if there - * is not enough tail room space but there is enough head room space, - * (i2400m_tx_fifo_push() returns tail_full) try to allocate head - * room space, by calling this routine again with try_head = 1. - * - * returns: - * - * pointer to the allocated space. null if there is no - * space. tail_full if there is no space at the tail but there is at - * the head (case b below). - * - * these are the two basic cases we need to keep an eye for -- it is - * much better explained in linux/kernel/kfifo.c, but this code - * basically does the same. no rocket science here. - * - * case a case b - * n ___________ ___________ - * | tail room | | data | - * | | | | - * |<- in ->| |<- out ->| - * | | | | - * | data | | room | - * | | | | - * |<- out ->| |<- in ->| - * | | | | - * | head room | | data | - * 0 ----------- ----------- - * - * we allocate only *contiguous* space. - * - * we can allocate only from 'room'. 
in case b, it is simple; in case - * a, we only try from the tail room; if it is not enough, we just - * fail and return tail_full and let the caller figure out if it wants to - * skip the tail room and try to allocate from the head. - * - * there is a corner case, wherein i2400m_tx_new() can get into - * an infinite loop calling i2400m_tx_fifo_push(). - * in certain situations, tx_in would have reached the top of the tx fifo - * and i2400m_tx_tail_room() returns 0, as described below: - * - * n ___________ tail room is zero - * |<- in ->| - * | | - * | | - * | | - * | data | - * |<- out ->| - * | | - * | | - * | head room | - * 0 ----------- - * during such a time, when tail room is zero in the tx fifo and there - * is a request to add a payload to the tx fifo, which calls: - * i2400m_tx() - * ->calls i2400m_tx_close() - * ->calls i2400m_tx_skip_tail() - * goto try_new; - * ->calls i2400m_tx_new() - * |----> [try_head:] - * infinite loop | ->calls i2400m_tx_fifo_push() - * | if (tail_room < needed) - * | if (head_room => needed) - * | return tail_full; - * |<---- goto try_head; - * - * i2400m_tx() calls i2400m_tx_close() to close the message, since there - * is no tail room to accommodate the payload, and calls - * i2400m_tx_skip_tail() to skip the tail space. now i2400m_tx() calls - * i2400m_tx_new() to allocate space for a new message header, calling - * i2400m_tx_fifo_push(), which returns tail_full, since there is no tail space - * to accommodate the message header, but there is enough head space. - * i2400m_tx_new() keeps retrying by calling i2400m_tx_fifo_push(), - * ending up in a loop that freezes the system. - * - * this corner case is avoided by using a try_head boolean - * as an argument to i2400m_tx_fifo_push().
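Stripped of the driver specifics, the contiguous-allocation rule and the try_head escape hatch can be sketched as a minimal stand-alone ring buffer. This is only an illustrative model, not the driver's code: `fifo_push`, `fifo_skip_tail` and the 16-byte buffer are hypothetical stand-ins for `i2400m_tx_fifo_push()`, `i2400m_tx_skip_tail()` and the real FIFO.

```c
#include <assert.h>
#include <stddef.h>

#define BUF_SIZE 16	/* toy stand-in for the real fifo size */
#define TAIL_FULL ((void *)~(unsigned long)0)

struct fifo { unsigned char buf[BUF_SIZE]; size_t in, out; };

/* contiguous space between 'in' and the top of the buffer; the final
 * modulo handles the really-full case where in lands on a multiple of
 * BUF_SIZE, and in == 0 is the special case treated separately */
static size_t tail_room(const struct fifo *f)
{
	if (f->in == 0)
		return BUF_SIZE;
	return (BUF_SIZE - f->in % BUF_SIZE) % BUF_SIZE;
}

/* mark the tail as consumed so the next allocation lands at the head */
static void fifo_skip_tail(struct fifo *f)
{
	f->in += tail_room(f);
}

/* allocate 'size' contiguous bytes; with try_head clear we only try
 * the tail, returning TAIL_FULL when the head alone could satisfy the
 * request. the caller must then skip the tail and retry with try_head
 * set -- exactly the convention that breaks the retry loop above */
static void *fifo_push(struct fifo *f, size_t size, int try_head)
{
	size_t room = BUF_SIZE - (f->in - f->out);
	void *ptr;

	if (room < size)
		return NULL;
	if (!try_head && tail_room(f) < size)
		return room - tail_room(f) >= size ? TAIL_FULL : NULL;
	ptr = f->buf + f->in % BUF_SIZE;
	f->in += size;
	return ptr;
}
```

As in the driver, the caller is responsible for calling `fifo_skip_tail()` before retrying with `try_head` set; `fifo_push()` itself never wraps an allocation.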
- * - * note: - * - * assumes i2400m->tx_lock is taken, and we use that as a barrier - * - * the indexes keep increasing and we reset them to zero when we - * pop data off the queue - */ -static -void *i2400m_tx_fifo_push(struct i2400m *i2400m, size_t size, - size_t padding, bool try_head) -{ - struct device *dev = i2400m_dev(i2400m); - size_t room, tail_room, needed_size; - void *ptr; - - needed_size = size + padding; - room = i2400m_tx_buf_size - (i2400m->tx_in - i2400m->tx_out); - if (room < needed_size) { /* this takes care of case b */ - d_printf(2, dev, "fifo push %zu/%zu: no space ", - size, padding); - return null; - } - /* is there space at the tail? */ - tail_room = __i2400m_tx_tail_room(i2400m); - if (!try_head && tail_room < needed_size) { - /* - * if the tail room space is not enough to push the message - * in the tx fifo, then there are two possibilities: - * 1. there is enough head room space to accommodate - * this message in the tx fifo. - * 2. there is not enough space in the head room and - * in tail room of the tx fifo to accommodate the message. - * in the case (1), return tail_full so that the caller - * can figure out, if the caller wants to push the message - * into the head room space. - * in the case (2), return null, indicating that the tx fifo - * cannot accommodate the message. 
- */ - if (room - tail_room >= needed_size) { - d_printf(2, dev, "fifo push %zu/%zu: tail full ", - size, padding); - return tail_full; /* there might be head space */ - } else { - d_printf(2, dev, "fifo push %zu/%zu: no head space ", - size, padding); - return null; /* there is no space */ - } - } - ptr = i2400m->tx_buf + i2400m->tx_in % i2400m_tx_buf_size; - d_printf(2, dev, "fifo push %zu/%zu: at @%zu ", size, padding, - i2400m->tx_in % i2400m_tx_buf_size); - i2400m->tx_in += size; - return ptr; -} - - -/* - * mark the tail of the fifo buffer as 'to-skip' - * - * we should never hit the bug_on() because all the sizes we push to - * the fifo are padded to be a multiple of 16 -- the size of *msg - * (i2400m_pl_pad for the payloads, i2400m_tx_pld_size for the - * header). - * - * tail room can get to be zero if a message was opened when there was - * space only for a header. _tx_close() will mark it as to-skip (as it - * will have no payloads) and there will be no more space to flush, so - * nothing has to be done here. this is probably cheaper than ensuring - * in _tx_new() that there is some space for payloads...as we could - * always possibly hit the same problem if the payload wouldn't fit. 
- * - * note: - * - * assumes i2400m->tx_lock is taken, and we use that as a barrier - * - * this path is only taken for case a fifo situations [see - * i2400m_tx_fifo_push()] - */ -static -void i2400m_tx_skip_tail(struct i2400m *i2400m) -{ - struct device *dev = i2400m_dev(i2400m); - size_t tx_in = i2400m->tx_in % i2400m_tx_buf_size; - size_t tail_room = __i2400m_tx_tail_room(i2400m); - struct i2400m_msg_hdr *msg = i2400m->tx_buf + tx_in; - if (unlikely(tail_room == 0)) - return; - bug_on(tail_room < sizeof(*msg)); - msg->size = tail_room | i2400m_tx_skip; - d_printf(2, dev, "skip tail: skipping %zu bytes @%zu ", - tail_room, tx_in); - i2400m->tx_in += tail_room; -} - - -/* - * check if a skb will fit in the tx queue's current active tx - * message (if there are still descriptors left unused). - * - * returns: - * 0 if the message won't fit, 1 if it will. - * - * note: - * - * assumes a tx message is active (i2400m->tx_msg). - * - * assumes i2400m->tx_lock is taken, and we use that as a barrier - */ -static -unsigned i2400m_tx_fits(struct i2400m *i2400m) -{ - struct i2400m_msg_hdr *msg_hdr = i2400m->tx_msg; - return le16_to_cpu(msg_hdr->num_pls) < i2400m_tx_pld_max; - -} - - -/* - * start a new tx message header in the queue. - * - * reserve memory from the base fifo engine and then just initialize - * the message header. - * - * we allocate the biggest tx message header we might need (one that'd - * fit i2400m_tx_pld_max payloads) -- when it is closed it will be - * 'ironed it out' and the unneeded parts removed. - * - * note: - * - * assumes that the previous message is closed (eg: either - * there was none or 'i2400m_tx_close()' was called on it). 
- * - * assumes i2400m->tx_lock is taken, and we use that as a barrier - */ -static -void i2400m_tx_new(struct i2400m *i2400m) -{ - struct device *dev = i2400m_dev(i2400m); - struct i2400m_msg_hdr *tx_msg; - bool try_head = false; - bug_on(i2400m->tx_msg != null); - /* - * in certain situations, tx queue might have enough space to - * accommodate the new message header i2400m_tx_pld_size, but - * might not have enough space to accommodate the payloads. - * adding bus_tx_room_min padding while allocating a new tx message - * increases the possibilities of including at least one payload of the - * size <= bus_tx_room_min. - */ -try_head: - tx_msg = i2400m_tx_fifo_push(i2400m, i2400m_tx_pld_size, - i2400m->bus_tx_room_min, try_head); - if (tx_msg == null) - goto out; - else if (tx_msg == tail_full) { - i2400m_tx_skip_tail(i2400m); - d_printf(2, dev, "new tx message: tail full, trying head "); - try_head = true; - goto try_head; - } - memset(tx_msg, 0, i2400m_tx_pld_size); - tx_msg->size = i2400m_tx_pld_size; -out: - i2400m->tx_msg = tx_msg; - d_printf(2, dev, "new tx message: %p @%zu ", - tx_msg, (void *) tx_msg - i2400m->tx_buf); -} - - -/* - * finalize the current tx message header - * - * sets the message header to be at the proper location depending on - * how many descriptors we have (check documentation at the file's - * header for more info on that). - * - * appends padding bytes to make sure the whole tx message (counting - * from the 'relocated' message header) is aligned to - * tx_block_size. we assume the _append() code has left enough space - * in the fifo for that. if there are no payloads, just pass, as it - * won't be transferred. - * - * the amount of padding bytes depends on how many payloads are in the - * tx message, as the "msg header and payload descriptors" will be - * shifted up in the buffer. 
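The relocation-plus-padding arithmetic described here is compact enough to model on its own. A sketch under stated assumptions: `HDR_MAX` and `TX_BLOCK` are hypothetical stand-ins for `i2400m_tx_pld_size` and `bus_tx_block_size`, and `close_msg` is not a driver function, just the size math.

```c
#include <assert.h>
#include <stddef.h>

#define PLD_ALIGN 16	/* payload/header alignment, as in the driver */
#define HDR_MAX   496	/* hypothetical stand-in for i2400m_tx_pld_size */
#define TX_BLOCK  256	/* hypothetical stand-in for bus_tx_block_size */

#define ALIGN_UP(x, a) (((x) + (a) - 1) / (a) * (a))

/* given the header bytes actually used and the payload bytes appended,
 * compute how far the header slides up toward the payloads (offset)
 * and how much tail padding makes the moved message a whole number of
 * bus blocks (padding) */
static void close_msg(size_t used_hdr, size_t payload_bytes,
		      size_t *offset, size_t *padding)
{
	size_t hdr = ALIGN_UP(used_hdr, PLD_ALIGN);	/* header, 16-aligned */
	size_t moved_size = hdr + payload_bytes;	/* size from moved header */

	*offset = HDR_MAX - hdr;
	*padding = ALIGN_UP(moved_size, TX_BLOCK) - moved_size;
}
```

With a 40-byte used header and 100 payload bytes this yields an offset of 448 and 108 padding bytes; a message that already lands on a block boundary needs none.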
- */ -static -void i2400m_tx_close(struct i2400m *i2400m) -{ - struct device *dev = i2400m_dev(i2400m); - struct i2400m_msg_hdr *tx_msg = i2400m->tx_msg; - struct i2400m_msg_hdr *tx_msg_moved; - size_t aligned_size, padding, hdr_size; - void *pad_buf; - unsigned num_pls; - - if (tx_msg->size & i2400m_tx_skip) /* a skipper? nothing to do */ - goto out; - num_pls = le16_to_cpu(tx_msg->num_pls); - /* we can get this situation when a new message was started - * and there was no space to add payloads before hitting the - tail (and taking padding into consideration). */ - if (num_pls == 0) { - tx_msg->size |= i2400m_tx_skip; - goto out; - } - /* relocate the message header - * - * find the current header size, align it to 16 and if we need - * to move it so the tail is next to the payloads, move it and - * set the offset. - * - * if it moved, this header is good only for transmission; the - * original one (it is kept if we moved) is still used to - * figure out where the next tx message starts (and where the - * offset to the moved header is). - */ - hdr_size = struct_size(tx_msg, pld, le16_to_cpu(tx_msg->num_pls)); - hdr_size = align(hdr_size, i2400m_pl_align); - tx_msg->offset = i2400m_tx_pld_size - hdr_size; - tx_msg_moved = (void *) tx_msg + tx_msg->offset; - memmove(tx_msg_moved, tx_msg, hdr_size); - tx_msg_moved->size -= tx_msg->offset; - /* - * now figure out how much we have to add to the (moved!) - * message so the size is a multiple of i2400m->bus_tx_block_size. - */ - aligned_size = align(tx_msg_moved->size, i2400m->bus_tx_block_size); - padding = aligned_size - tx_msg_moved->size; - if (padding > 0) { - pad_buf = i2400m_tx_fifo_push(i2400m, padding, 0, 0); - if (warn_on(pad_buf == null || pad_buf == tail_full)) { - /* this should not happen -- append should verify - * there is always space left at least to append - * tx_block_size */ - dev_err(dev, - "sw bug! 
possible data leakage from memory the " - "device should not read for padding - " - "size %lu aligned_size %zu tx_buf %p in " - "%zu out %zu ", - (unsigned long) tx_msg_moved->size, - aligned_size, i2400m->tx_buf, i2400m->tx_in, - i2400m->tx_out); - } else - memset(pad_buf, 0xad, padding); - } - tx_msg_moved->padding = cpu_to_le16(padding); - tx_msg_moved->size += padding; - if (tx_msg != tx_msg_moved) - tx_msg->size += padding; -out: - i2400m->tx_msg = null; -} - - -/** - * i2400m_tx - send the data in a buffer to the device - * - * @i2400m: device descriptor - * - * @buf: pointer to the buffer to transmit - * - * @buf_len: buffer size - * - * @pl_type: type of the payload we are sending. - * - * returns: - * 0 if ok, < 0 errno code on error (-enospc, if there is no more - * room for the message in the queue). - * - * appends the buffer to the tx fifo and notifies the bus-specific - * part of the driver that there is new data ready to transmit. - * once this function returns, the buffer has been copied, so it can - * be reused. - * - * the steps followed to append are explained in detail in the file - * header. - * - * whenever we write to a message, we increase msg->size, so it - * reflects exactly how big the message is. this is needed so that if - * we concatenate two messages before they can be sent, the code that - * sends the messages can find the boundaries (and it will replace the - * size with the real barker before sending). - * - * note: - * - * cold and warm reset payloads need to be sent as a single - * payload, so we handle that. 
- */ -int i2400m_tx(struct i2400m *i2400m, const void *buf, size_t buf_len, - enum i2400m_pt pl_type) -{ - int result = -enospc; - struct device *dev = i2400m_dev(i2400m); - unsigned long flags; - size_t padded_len; - void *ptr; - bool try_head = false; - unsigned is_singleton = pl_type == i2400m_pt_reset_warm - || pl_type == i2400m_pt_reset_cold; - - d_fnstart(3, dev, "(i2400m %p skb %p [%zu bytes] pt %u) ", - i2400m, buf, buf_len, pl_type); - padded_len = align(buf_len, i2400m_pl_align); - d_printf(5, dev, "padded_len %zd buf_len %zd ", padded_len, buf_len); - /* if there is no current tx message, create one; if the - * current one is out of payload slots or we have a singleton, - * close it and start a new one */ - spin_lock_irqsave(&i2400m->tx_lock, flags); - /* if tx_buf is null, device is shutdown */ - if (i2400m->tx_buf == null) { - result = -eshutdown; - goto error_tx_new; - } -try_new: - if (unlikely(i2400m->tx_msg == null)) - i2400m_tx_new(i2400m); - else if (unlikely(!i2400m_tx_fits(i2400m) - || (is_singleton && i2400m->tx_msg->num_pls != 0))) { - d_printf(2, dev, "closing tx message (fits %u singleton " - "%u num_pls %u) ", i2400m_tx_fits(i2400m), - is_singleton, i2400m->tx_msg->num_pls); - i2400m_tx_close(i2400m); - i2400m_tx_new(i2400m); - } - if (i2400m->tx_msg == null) - goto error_tx_new; - /* - * check if this skb will fit in the tx queue's current active - * tx message. the total message size must not exceed the maximum - * size of each message i2400m_tx_msg_size. if it exceeds, - * close the current message and push this skb into the new message. 
- */ - if (i2400m->tx_msg->size + padded_len > i2400m_tx_msg_size) { - d_printf(2, dev, "tx: message too big, going new "); - i2400m_tx_close(i2400m); - i2400m_tx_new(i2400m); - } - if (i2400m->tx_msg == null) - goto error_tx_new; - /* so we have a current message header; now append space for - * the message -- if there is not enough, try the head */ - ptr = i2400m_tx_fifo_push(i2400m, padded_len, - i2400m->bus_tx_block_size, try_head); - if (ptr == tail_full) { /* tail is full, try head */ - d_printf(2, dev, "pl append: tail full "); - i2400m_tx_close(i2400m); - i2400m_tx_skip_tail(i2400m); - try_head = true; - goto try_new; - } else if (ptr == null) { /* all full */ - result = -enospc; - d_printf(2, dev, "pl append: all full "); - } else { /* got space, copy it, set padding */ - struct i2400m_msg_hdr *tx_msg = i2400m->tx_msg; - unsigned num_pls = le16_to_cpu(tx_msg->num_pls); - memcpy(ptr, buf, buf_len); - memset(ptr + buf_len, 0xad, padded_len - buf_len); - i2400m_pld_set(&tx_msg->pld[num_pls], buf_len, pl_type); - d_printf(3, dev, "pld 0x%08x (type 0x%1x len 0x%04zx ", - le32_to_cpu(tx_msg->pld[num_pls].val), - pl_type, buf_len); - tx_msg->num_pls = cpu_to_le16(num_pls + 1); - tx_msg->size += padded_len; - d_printf(2, dev, "tx: appended %zu b (up to %u b) pl #%u ", - padded_len, tx_msg->size, num_pls+1); - d_printf(2, dev, - "tx: appended hdr @%zu %zu b pl #%u @%zu %zu/%zu b ", - (void *)tx_msg - i2400m->tx_buf, (size_t)tx_msg->size, - num_pls+1, ptr - i2400m->tx_buf, buf_len, padded_len); - result = 0; - if (is_singleton) - i2400m_tx_close(i2400m); - } -error_tx_new: - spin_unlock_irqrestore(&i2400m->tx_lock, flags); - /* kick in most cases, except when the tx subsys is down, as - * it might free space */ - if (likely(result != -eshutdown)) - i2400m->bus_tx_kick(i2400m); - d_fnend(3, dev, "(i2400m %p skb %p [%zu bytes] pt %u) = %d ", - i2400m, buf, buf_len, pl_type, result); - return result; -} -export_symbol_gpl(i2400m_tx); - - -/** - * i2400m_tx_msg_get - 
get the first tx message in the fifo to start sending it - * - * @i2400m: device descriptor - * @bus_size: where to place the size of the tx message - * - * called by the bus-specific driver to get the first tx message in - * the fifo that is ready for transmission. - * - * it sets the state in @i2400m to indicate the bus-specific driver is - * transferring that message (i2400m->tx_msg_size). - * - * once the transfer is completed, call i2400m_tx_msg_sent(). - * - * notes: - * - * the size of the tx message to be transmitted might be smaller than - * that of the tx message in the fifo (in case the header was - * shorter). hence, we copy it in @bus_size, for the bus layer to - * use. we keep the message's size in i2400m->tx_msg_size so that - * when the bus later is done transferring we know how much to - * advance the fifo. - * - * we collect statistics here as all the data is available and we - * assume it is going to work [see i2400m_tx_msg_sent()]. - */ -struct i2400m_msg_hdr *i2400m_tx_msg_get(struct i2400m *i2400m, - size_t *bus_size) -{ - struct device *dev = i2400m_dev(i2400m); - struct i2400m_msg_hdr *tx_msg, *tx_msg_moved; - unsigned long flags, pls; - - d_fnstart(3, dev, "(i2400m %p bus_size %p) ", i2400m, bus_size); - spin_lock_irqsave(&i2400m->tx_lock, flags); - tx_msg_moved = null; - if (i2400m->tx_buf == null) - goto out_unlock; -skip: - tx_msg_moved = null; - if (i2400m->tx_in == i2400m->tx_out) { /* empty fifo? */ - i2400m->tx_in = 0; - i2400m->tx_out = 0; - d_printf(2, dev, "tx: fifo empty: resetting "); - goto out_unlock; - } - tx_msg = i2400m->tx_buf + i2400m->tx_out % i2400m_tx_buf_size; - if (tx_msg->size & i2400m_tx_skip) { /* skip? */ - d_printf(2, dev, "tx: skip: msg @%zu (%zu b) ", - i2400m->tx_out % i2400m_tx_buf_size, - (size_t) tx_msg->size & ~i2400m_tx_skip); - i2400m->tx_out += tx_msg->size & ~i2400m_tx_skip; - goto skip; - } - - if (tx_msg->num_pls == 0) { /* no payloads?
*/ - if (tx_msg == i2400m->tx_msg) { /* open, we are done */ - d_printf(2, dev, - "tx: fifo empty: open msg w/o payloads @%zu ", - (void *) tx_msg - i2400m->tx_buf); - tx_msg = null; - goto out_unlock; - } else { /* closed, skip it */ - d_printf(2, dev, - "tx: skip msg w/o payloads @%zu (%zu b) ", - (void *) tx_msg - i2400m->tx_buf, - (size_t) tx_msg->size); - i2400m->tx_out += tx_msg->size & ~i2400m_tx_skip; - goto skip; - } - } - if (tx_msg == i2400m->tx_msg) /* open msg? */ - i2400m_tx_close(i2400m); - - /* now we have a valid tx message (with payloads) to tx */ - tx_msg_moved = (void *) tx_msg + tx_msg->offset; - i2400m->tx_msg_size = tx_msg->size; - *bus_size = tx_msg_moved->size; - d_printf(2, dev, "tx: pid %d msg hdr at @%zu offset +@%zu " - "size %zu bus_size %zu ", - current->pid, (void *) tx_msg - i2400m->tx_buf, - (size_t) tx_msg->offset, (size_t) tx_msg->size, - (size_t) tx_msg_moved->size); - tx_msg_moved->barker = cpu_to_le32(i2400m_h2d_preview_barker); - tx_msg_moved->sequence = cpu_to_le32(i2400m->tx_sequence++); - - pls = le16_to_cpu(tx_msg_moved->num_pls); - i2400m->tx_pl_num += pls; /* update stats */ - if (pls > i2400m->tx_pl_max) - i2400m->tx_pl_max = pls; - if (pls < i2400m->tx_pl_min) - i2400m->tx_pl_min = pls; - i2400m->tx_num++; - i2400m->tx_size_acc += *bus_size; - if (*bus_size < i2400m->tx_size_min) - i2400m->tx_size_min = *bus_size; - if (*bus_size > i2400m->tx_size_max) - i2400m->tx_size_max = *bus_size; -out_unlock: - spin_unlock_irqrestore(&i2400m->tx_lock, flags); - d_fnstart(3, dev, "(i2400m %p bus_size %p [%zu]) = %p ", - i2400m, bus_size, *bus_size, tx_msg_moved); - return tx_msg_moved; -} -export_symbol_gpl(i2400m_tx_msg_get); - - -/** - * i2400m_tx_msg_sent - indicate the transmission of a tx message - * - * @i2400m: device descriptor - * - * called by the bus-specific driver when a message has been sent; - * this pops it from the fifo; and as there is space, start the queue - * in case it was stopped. 
- * - * should be called even if the message send failed and we are - * dropping this tx message. - */ -void i2400m_tx_msg_sent(struct i2400m *i2400m) -{ - unsigned n; - unsigned long flags; - struct device *dev = i2400m_dev(i2400m); - - d_fnstart(3, dev, "(i2400m %p) ", i2400m); - spin_lock_irqsave(&i2400m->tx_lock, flags); - if (i2400m->tx_buf == null) - goto out_unlock; - i2400m->tx_out += i2400m->tx_msg_size; - d_printf(2, dev, "tx: sent %zu b ", (size_t) i2400m->tx_msg_size); - i2400m->tx_msg_size = 0; - bug_on(i2400m->tx_out > i2400m->tx_in); - /* level them fifo markers off */ - n = i2400m->tx_out / i2400m_tx_buf_size; - i2400m->tx_out %= i2400m_tx_buf_size; - i2400m->tx_in -= n * i2400m_tx_buf_size; -out_unlock: - spin_unlock_irqrestore(&i2400m->tx_lock, flags); - d_fnend(3, dev, "(i2400m %p) = void ", i2400m); -} -export_symbol_gpl(i2400m_tx_msg_sent); - - -/** - * i2400m_tx_setup - initialize the tx queue and infrastructure - * - * @i2400m: device descriptor - * - * make sure we reset the tx sequence to zero, as when this function - * is called, the firmware has just been restarted. same rationale - * for tx_in, tx_out, tx_msg_size and tx_msg. we reset them since - * the memory for the tx queue is reallocated. - */ -int i2400m_tx_setup(struct i2400m *i2400m) -{ - int result = 0; - void *tx_buf; - unsigned long flags; - - /* do this here only once -- can't do on - * i2400m_hard_start_xmit() as we'll cause race conditions if - * the ws was scheduled on another cpu */ - init_work(&i2400m->wake_tx_ws, i2400m_wake_tx_work); - - tx_buf = kmalloc(i2400m_tx_buf_size, gfp_atomic); - if (tx_buf == null) { - result = -enomem; - goto error_kmalloc; - } - - /* - * fail the build if we can't fit at least two maximum size messages - * on the tx fifo [one being delivered while one is constructed].
- */ - build_bug_on(2 * i2400m_tx_msg_size > i2400m_tx_buf_size); - spin_lock_irqsave(&i2400m->tx_lock, flags); - i2400m->tx_sequence = 0; - i2400m->tx_in = 0; - i2400m->tx_out = 0; - i2400m->tx_msg_size = 0; - i2400m->tx_msg = null; - i2400m->tx_buf = tx_buf; - spin_unlock_irqrestore(&i2400m->tx_lock, flags); - /* huh? the bus layer has to define this... */ - bug_on(i2400m->bus_tx_block_size == 0); -error_kmalloc: - return result; - -} - - -/* - * i2400m_tx_release - tear down the tx queue and infrastructure - */ -void i2400m_tx_release(struct i2400m *i2400m) -{ - unsigned long flags; - spin_lock_irqsave(&i2400m->tx_lock, flags); - kfree(i2400m->tx_buf); - i2400m->tx_buf = null; - spin_unlock_irqrestore(&i2400m->tx_lock, flags); -} diff --git a/drivers/staging/wimax/i2400m/usb-debug-levels.h b/drivers/staging/wimax/i2400m/usb-debug-levels.h --- a/drivers/staging/wimax/i2400m/usb-debug-levels.h +++ /dev/null -/* spdx-license-identifier: gpl-2.0-only */ -/* - * intel wireless wimax connection 2400m - * debug levels control file for the i2400m-usb module - * - * copyright (c) 2007-2008 intel corporation <linux-wimax@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - */ -#ifndef __debug_levels__h__ -#define __debug_levels__h__ - -/* maximum compile and run time debug level for all submodules */ -#define d_modulename i2400m_usb -#define d_master config_wimax_i2400m_debug_level - -#include "../linux-wimax-debug.h" - -/* list of all the enabled modules */ -enum d_module { - d_submodule_declare(usb), - d_submodule_declare(fw), - d_submodule_declare(notif), - d_submodule_declare(rx), - d_submodule_declare(tx), -}; - - -#endif /* #ifndef __debug_levels__h__ */ diff --git a/drivers/staging/wimax/i2400m/usb-fw.c b/drivers/staging/wimax/i2400m/usb-fw.c --- a/drivers/staging/wimax/i2400m/usb-fw.c +++ /dev/null -/* - * intel wireless wimax connection 2400m - * firmware uploader's usb specifics - * - * - * copyright (c) 2007-2008 intel corporation. 
all rights reserved. - * - * redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * neither the name of intel corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * this software is provided by the copyright holders and contributors - * "as is" and any express or implied warranties, including, but not - * limited to, the implied warranties of merchantability and fitness for - * a particular purpose are disclaimed. in no event shall the copyright - * owner or contributors be liable for any direct, indirect, incidental, - * special, exemplary, or consequential damages (including, but not - * limited to, procurement of substitute goods or services; loss of use, - * data, or profits; or business interruption) however caused and on any - * theory of liability, whether in contract, strict liability, or tort - * (including negligence or otherwise) arising in any way out of the use - * of this software, even if advised of the possibility of such damage. - * - * - * intel corporation <linux-wimax@intel.com> - * yanir lubetkin <yanirx.lubetkin@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - * - initial implementation - * - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - * - bus generic/specific split - * - * the procedure - * - * see fw.c for the generic description of this procedure. - * - * this file implements only the usb specifics. 
it boils down to how - * to send a command and waiting for an acknowledgement from the - * device. - * - * this code (and process) is single threaded. it assumes it is the - * only thread poking around (guaranteed by fw.c). - * - * command execution - * - * a write urb is posted with the buffer to the bulk output endpoint. - * - * ack reception - * - * we just post a urb to the notification endpoint and wait for - * data. we repeat until we get all the data we expect (as indicated - * by the call from the bus generic code). - * - * the data is not read from the bulk in endpoint for boot mode. - * - * roadmap - * - * i2400mu_bus_bm_cmd_send - * i2400m_bm_cmd_prepare... - * i2400mu_tx_bulk_out - * - * i2400mu_bus_bm_wait_for_ack - * i2400m_notif_submit - */ -#include <linux/usb.h> -#include <linux/gfp.h> -#include "i2400m-usb.h" - - -#define d_submodule fw -#include "usb-debug-levels.h" - - -/* - * synchronous write to the device - * - * takes care of updating edc counts and thus, handle device errors. - */ -static -ssize_t i2400mu_tx_bulk_out(struct i2400mu *i2400mu, void *buf, size_t buf_size) -{ - int result; - struct device *dev = &i2400mu->usb_iface->dev; - int len; - struct usb_endpoint_descriptor *epd; - int pipe, do_autopm = 1; - - result = usb_autopm_get_interface(i2400mu->usb_iface); - if (result < 0) { - dev_err(dev, "bm-cmd: can't get autopm: %d ", result); - do_autopm = 0; - } - epd = usb_get_epd(i2400mu->usb_iface, i2400mu->endpoint_cfg.bulk_out); - pipe = usb_sndbulkpipe(i2400mu->usb_dev, epd->bendpointaddress); -retry: - result = usb_bulk_msg(i2400mu->usb_dev, pipe, buf, buf_size, &len, 200); - switch (result) { - case 0: - if (len != buf_size) { - dev_err(dev, "bm-cmd: short write (%u b vs %zu " - "expected) ", len, buf_size); - result = -eio; - break; - } - result = len; - break; - case -epipe: - /* - * stall -- maybe the device is choking with our - * requests. clear it and give it some time. 
if they - * happen too often, it might be another symptom, so we - * reset. - * - * no error handling for usb_clear_halt(); if it - * works, the retry works; if it fails, this switch - * does the error handling for us. - */ - if (edc_inc(&i2400mu->urb_edc, - 10 * edc_max_errors, edc_error_timeframe)) { - dev_err(dev, "bm-cmd: too many stalls in " - "urb; resetting device "); - usb_queue_reset_device(i2400mu->usb_iface); - } else { - usb_clear_halt(i2400mu->usb_dev, pipe); - msleep(10); /* give the device some time */ - goto retry; - } - fallthrough; - case -einval: /* while removing driver */ - case -enodev: /* dev disconnect ... */ - case -enoent: /* just ignore it */ - case -eshutdown: /* and exit */ - case -econnreset: - result = -eshutdown; - break; - case -etimedout: /* bah... */ - break; - default: /* any other? */ - if (edc_inc(&i2400mu->urb_edc, - edc_max_errors, edc_error_timeframe)) { - dev_err(dev, "bm-cmd: maximum errors in " - "urb exceeded; resetting device "); - usb_queue_reset_device(i2400mu->usb_iface); - result = -enodev; - break; - } - dev_err(dev, "bm-cmd: urb error %d, retrying ", - result); - goto retry; - } - if (do_autopm) - usb_autopm_put_interface(i2400mu->usb_iface); - return result; -} - - -/* - * send a boot-mode command over the bulk-out pipe - * - * command can be a raw command, which requires no preparation (and - * which might not even be following the command format). checks that - * the right amount of data was transferred. - * - * to satisfy usb requirements (no onstack, vmalloc or in data segment - * buffers), we copy the command to i2400m->bm_cmd_buf and send it from - * there. - * - * @flags: pass thru from i2400m_bm_cmd() - * @return: cmd_size if ok, < 0 errno code on error.
- */ -ssize_t i2400mu_bus_bm_cmd_send(struct i2400m *i2400m, - const struct i2400m_bootrom_header *_cmd, - size_t cmd_size, int flags) -{ - ssize_t result; - struct device *dev = i2400m_dev(i2400m); - struct i2400mu *i2400mu = container_of(i2400m, struct i2400mu, i2400m); - int opcode = _cmd == null ? -1 : i2400m_brh_get_opcode(_cmd); - struct i2400m_bootrom_header *cmd; - size_t cmd_size_a = align(cmd_size, 16); /* usb restriction */ - - d_fnstart(8, dev, "(i2400m %p cmd %p size %zu) ", - i2400m, _cmd, cmd_size); - result = -e2big; - if (cmd_size > i2400m_bm_cmd_buf_size) - goto error_too_big; - if (_cmd != i2400m->bm_cmd_buf) - memmove(i2400m->bm_cmd_buf, _cmd, cmd_size); - cmd = i2400m->bm_cmd_buf; - if (cmd_size_a > cmd_size) /* zero pad space */ - memset(i2400m->bm_cmd_buf + cmd_size, 0, cmd_size_a - cmd_size); - if ((flags & i2400m_bm_cmd_raw) == 0) { - if (warn_on(i2400m_brh_get_response_required(cmd) == 0)) - dev_warn(dev, "sw bug: response_required == 0 "); - i2400m_bm_cmd_prepare(cmd); - } - result = i2400mu_tx_bulk_out(i2400mu, i2400m->bm_cmd_buf, cmd_size); - if (result < 0) { - dev_err(dev, "boot-mode cmd %d: cannot send: %zd ", - opcode, result); - goto error_cmd_send; - } - if (result != cmd_size) { /* all was transferred? 
*/ - dev_err(dev, "boot-mode cmd %d: incomplete transfer " - "(%zd vs %zu submitted) ", opcode, result, cmd_size); - result = -eio; - goto error_cmd_size; - } -error_cmd_size: -error_cmd_send: -error_too_big: - d_fnend(8, dev, "(i2400m %p cmd %p size %zu) = %zd ", - i2400m, _cmd, cmd_size, result); - return result; -} - - -static -void __i2400mu_bm_notif_cb(struct urb *urb) -{ - complete(urb->context); -} - - -/* - * submit a read to the notification endpoint - * - * @i2400m: device descriptor - * @urb: urb to use - * @completion: completion variable to complete when done - * - * data is always read to i2400m->bm_ack_buf - */ -static -int i2400mu_notif_submit(struct i2400mu *i2400mu, struct urb *urb, - struct completion *completion) -{ - struct i2400m *i2400m = &i2400mu->i2400m; - struct usb_endpoint_descriptor *epd; - int pipe; - - epd = usb_get_epd(i2400mu->usb_iface, - i2400mu->endpoint_cfg.notification); - pipe = usb_rcvintpipe(i2400mu->usb_dev, epd->bendpointaddress); - usb_fill_int_urb(urb, i2400mu->usb_dev, pipe, - i2400m->bm_ack_buf, i2400m_bm_ack_buf_size, - __i2400mu_bm_notif_cb, completion, - epd->binterval); - return usb_submit_urb(urb, gfp_kernel); -} - - -/* - * read an ack from the notification endpoint - * - * @i2400m: - * @_ack: pointer to where to store the read data - * @ack_size: how many bytes we should read - * - * returns: < 0 errno code on error; otherwise, amount of received bytes. - * - * submits a notification read, appends the read data to the given ack - * buffer and then repeats (until @ack_size bytes have been - * received). 
- */
-ssize_t i2400mu_bus_bm_wait_for_ack(struct i2400m *i2400m,
-                                    struct i2400m_bootrom_header *_ack,
-                                    size_t ack_size)
-{
-    ssize_t result = -ENOMEM;
-    struct device *dev = i2400m_dev(i2400m);
-    struct i2400mu *i2400mu = container_of(i2400m, struct i2400mu, i2400m);
-    struct urb notif_urb;
-    void *ack = _ack;
-    size_t offset, len;
-    long val;
-    int do_autopm = 1;
-    DECLARE_COMPLETION_ONSTACK(notif_completion);
-
-    d_fnstart(8, dev, "(i2400m %p ack %p size %zu)\n",
-              i2400m, ack, ack_size);
-    BUG_ON(_ack == i2400m->bm_ack_buf);
-    result = usb_autopm_get_interface(i2400mu->usb_iface);
-    if (result < 0) {
-        dev_err(dev, "BM-ACK: can't get autopm: %d\n", (int) result);
-        do_autopm = 0;
-    }
-    usb_init_urb(&notif_urb);    /* ready notifications */
-    usb_get_urb(&notif_urb);
-    offset = 0;
-    while (offset < ack_size) {
-        init_completion(&notif_completion);
-        result = i2400mu_notif_submit(i2400mu, &notif_urb,
-                                      &notif_completion);
-        if (result < 0)
-            goto error_notif_urb_submit;
-        val = wait_for_completion_interruptible_timeout(
-            &notif_completion, HZ);
-        if (val == 0) {
-            result = -ETIMEDOUT;
-            usb_kill_urb(&notif_urb);    /* timedout */
-            goto error_notif_wait;
-        }
-        if (val == -ERESTARTSYS) {
-            result = -EINTR;    /* interrupted */
-            usb_kill_urb(&notif_urb);
-            goto error_notif_wait;
-        }
-        result = notif_urb.status;    /* how was the ack? */
-        switch (result) {
-        case 0:
-            break;
-        case -EINVAL:    /* while removing driver */
-        case -ENODEV:    /* dev disconnect ... */
-        case -ENOENT:    /* just ignore it */
-        case -ESHUTDOWN:    /* and exit */
-        case -ECONNRESET:
-            result = -ESHUTDOWN;
-            goto error_dev_gone;
-        default:    /* any other? */
-            usb_kill_urb(&notif_urb);    /* timedout */
-            if (edc_inc(&i2400mu->urb_edc,
-                        EDC_MAX_ERRORS, EDC_ERROR_TIMEFRAME))
-                goto error_exceeded;
-            dev_err(dev, "BM-ACK: URB error %d, "
-                "retrying\n", notif_urb.status);
-            continue;    /* retry */
-        }
-        if (notif_urb.actual_length == 0) {
-            d_printf(6, dev, "ZLP received, retrying\n");
-            continue;
-        }
-        /* Got data, append it to the buffer */
-        len = min(ack_size - offset, (size_t) notif_urb.actual_length);
-        memcpy(ack + offset, i2400m->bm_ack_buf, len);
-        offset += len;
-    }
-    result = offset;
-error_notif_urb_submit:
-error_notif_wait:
-error_dev_gone:
-out:
-    if (do_autopm)
-        usb_autopm_put_interface(i2400mu->usb_iface);
-    d_fnend(8, dev, "(i2400m %p ack %p size %zu) = %ld\n",
-            i2400m, ack, ack_size, (long) result);
-    usb_put_urb(&notif_urb);
-    return result;
-
-error_exceeded:
-    dev_err(dev, "BM-ACK: maximum errors in notification URB exceeded; "
-        "resetting device\n");
-    usb_queue_reset_device(i2400mu->usb_iface);
-    goto out;
-}
diff --git a/drivers/staging/wimax/i2400m/usb-notif.c b/drivers/staging/wimax/i2400m/usb-notif.c
--- a/drivers/staging/wimax/i2400m/usb-notif.c
+++ /dev/null
-/*
- * Intel Wireless WiMAX Connection 2400m over USB
- * Notification handling
- *
- *
- * Copyright (C) 2007-2008 Intel Corporation. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- *   * Redistributions of source code must retain the above copyright
- *     notice, this list of conditions and the following disclaimer.
- *   * Redistributions in binary form must reproduce the above copyright
- *     notice, this list of conditions and the following disclaimer in
- *     the documentation and/or other materials provided with the
- *     distribution.
- * * neither the name of intel corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * this software is provided by the copyright holders and contributors - * "as is" and any express or implied warranties, including, but not - * limited to, the implied warranties of merchantability and fitness for - * a particular purpose are disclaimed. in no event shall the copyright - * owner or contributors be liable for any direct, indirect, incidental, - * special, exemplary, or consequential damages (including, but not - * limited to, procurement of substitute goods or services; loss of use, - * data, or profits; or business interruption) however caused and on any - * theory of liability, whether in contract, strict liability, or tort - * (including negligence or otherwise) arising in any way out of the use - * of this software, even if advised of the possibility of such damage. - * - * - * intel corporation <linux-wimax@intel.com> - * yanir lubetkin <yanirx.lubetkin@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - * - initial implementation - * - * - * the notification endpoint is active when the device is not in boot - * mode; in here we just read and get notifications; based on those, - * we act to either reinitialize the device after a reboot or to - * submit a rx request. 
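The dispatch the comment describes (reboot barker means re-bootstrap, a block of zeroes means data is pending on the IN endpoint) is a straightforward buffer classification. A sketch under assumed values: `0x5a5aa5a5` is a placeholder barker for illustration only; the real driver matches several device-specific boot barkers:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

enum notif_action { NOTIF_RX_KICK, NOTIF_REBOOT, NOTIF_UNKNOWN, NOTIF_TOO_SHORT };

/* Placeholder barker value; not the device's actual constant. */
static const uint32_t boot_barker[4] = {
    0x5a5aa5a5, 0x5a5aa5a5, 0x5a5aa5a5, 0x5a5aa5a5
};
static const uint32_t zero_barker[4];    /* all zeroes */

static enum notif_action classify_notif(const void *buf, size_t len)
{
    if (len < sizeof(zero_barker))
        return NOTIF_TOO_SHORT;    /* not a bug, just ignore */
    if (!memcmp(buf, zero_barker, sizeof(zero_barker)))
        return NOTIF_RX_KICK;      /* data pending in the IN endpoint */
    if (!memcmp(buf, boot_barker, sizeof(boot_barker)))
        return NOTIF_REBOOT;       /* device rebooted, re-bootstrap */
    return NOTIF_UNKNOWN;
}
```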
- * - * roadmap - * - * i2400mu_usb_notification_setup() - * - * i2400mu_usb_notification_release() - * - * i2400mu_usb_notification_cb() called when a urb is ready - * i2400mu_notif_grok() - * i2400m_is_boot_barker() - * i2400m_dev_reset_handle() - * i2400mu_rx_kick() - */ -#include <linux/usb.h> -#include <linux/slab.h> -#include "i2400m-usb.h" - - -#define d_submodule notif -#include "usb-debug-levels.h" - - -static const -__le32 i2400m_zero_barker[4] = { 0, 0, 0, 0 }; - - -/* - * process a received notification - * - * in normal operation mode, we can only receive two types of payloads - * on the notification endpoint: - * - * - a reboot barker, we do a bootstrap (the device has reseted). - * - * - a block of zeroes: there is pending data in the in endpoint - */ -static -int i2400mu_notification_grok(struct i2400mu *i2400mu, const void *buf, - size_t buf_len) -{ - int ret; - struct device *dev = &i2400mu->usb_iface->dev; - struct i2400m *i2400m = &i2400mu->i2400m; - - d_fnstart(4, dev, "(i2400m %p buf %p buf_len %zu) ", - i2400mu, buf, buf_len); - ret = -eio; - if (buf_len < sizeof(i2400m_zero_barker)) - /* not a bug, just ignore */ - goto error_bad_size; - ret = 0; - if (!memcmp(i2400m_zero_barker, buf, sizeof(i2400m_zero_barker))) { - i2400mu_rx_kick(i2400mu); - goto out; - } - ret = i2400m_is_boot_barker(i2400m, buf, buf_len); - if (unlikely(ret >= 0)) - ret = i2400m_dev_reset_handle(i2400m, "device rebooted"); - else /* unknown or unexpected data in the notif message */ - i2400m_unknown_barker(i2400m, buf, buf_len); -error_bad_size: -out: - d_fnend(4, dev, "(i2400m %p buf %p buf_len %zu) = %d ", - i2400mu, buf, buf_len, ret); - return ret; -} - - -/* - * urb callback for the notification endpoint - * - * @urb: the urb received from the notification endpoint - * - * this function will just process the usb side of the transaction, - * checking everything is fine, pass the processing to - * i2400m_notification_grok() and resubmit the urb. 
- */ -static -void i2400mu_notification_cb(struct urb *urb) -{ - int ret; - struct i2400mu *i2400mu = urb->context; - struct device *dev = &i2400mu->usb_iface->dev; - - d_fnstart(4, dev, "(urb %p status %d actual_length %d) ", - urb, urb->status, urb->actual_length); - ret = urb->status; - switch (ret) { - case 0: - ret = i2400mu_notification_grok(i2400mu, urb->transfer_buffer, - urb->actual_length); - if (ret == -eio && edc_inc(&i2400mu->urb_edc, edc_max_errors, - edc_error_timeframe)) - goto error_exceeded; - if (ret == -enomem) /* uff...power cycle? shutdown? */ - goto error_exceeded; - break; - case -einval: /* while removing driver */ - case -enodev: /* dev disconnect ... */ - case -enoent: /* ditto */ - case -eshutdown: /* urb killed */ - case -econnreset: /* disconnection */ - goto out; /* notify around */ - default: /* some error? */ - if (edc_inc(&i2400mu->urb_edc, - edc_max_errors, edc_error_timeframe)) - goto error_exceeded; - dev_err(dev, "notification: urb error %d, retrying ", - urb->status); - } - usb_mark_last_busy(i2400mu->usb_dev); - ret = usb_submit_urb(i2400mu->notif_urb, gfp_atomic); - switch (ret) { - case 0: - case -einval: /* while removing driver */ - case -enodev: /* dev disconnect ... */ - case -enoent: /* ditto */ - case -eshutdown: /* urb killed */ - case -econnreset: /* disconnection */ - break; /* just ignore */ - default: /* some error? 
*/ - dev_err(dev, "notification: cannot submit urb: %d ", ret); - goto error_submit; - } - d_fnend(4, dev, "(urb %p status %d actual_length %d) = void ", - urb, urb->status, urb->actual_length); - return; - -error_exceeded: - dev_err(dev, "maximum errors in notification urb exceeded; " - "resetting device "); -error_submit: - usb_queue_reset_device(i2400mu->usb_iface); -out: - d_fnend(4, dev, "(urb %p status %d actual_length %d) = void ", - urb, urb->status, urb->actual_length); -} - - -/* - * setup the notification endpoint - * - * @i2400m: device descriptor - * - * this procedure prepares the notification urb and handler for receiving - * unsolicited barkers from the device. - */ -int i2400mu_notification_setup(struct i2400mu *i2400mu) -{ - struct device *dev = &i2400mu->usb_iface->dev; - int usb_pipe, ret = 0; - struct usb_endpoint_descriptor *epd; - char *buf; - - d_fnstart(4, dev, "(i2400m %p) ", i2400mu); - buf = kmalloc(i2400mu_max_notification_len, gfp_kernel | gfp_dma); - if (buf == null) { - ret = -enomem; - goto error_buf_alloc; - } - - i2400mu->notif_urb = usb_alloc_urb(0, gfp_kernel); - if (!i2400mu->notif_urb) { - ret = -enomem; - goto error_alloc_urb; - } - epd = usb_get_epd(i2400mu->usb_iface, - i2400mu->endpoint_cfg.notification); - usb_pipe = usb_rcvintpipe(i2400mu->usb_dev, epd->bendpointaddress); - usb_fill_int_urb(i2400mu->notif_urb, i2400mu->usb_dev, usb_pipe, - buf, i2400mu_max_notification_len, - i2400mu_notification_cb, i2400mu, epd->binterval); - ret = usb_submit_urb(i2400mu->notif_urb, gfp_kernel); - if (ret != 0) { - dev_err(dev, "notification: cannot submit urb: %d ", ret); - goto error_submit; - } - d_fnend(4, dev, "(i2400m %p) = %d ", i2400mu, ret); - return ret; - -error_submit: - usb_free_urb(i2400mu->notif_urb); -error_alloc_urb: - kfree(buf); -error_buf_alloc: - d_fnend(4, dev, "(i2400m %p) = %d ", i2400mu, ret); - return ret; -} - - -/* - * tear down of the notification mechanism - * - * @i2400m: device descriptor - * - * kill 
the interrupt endpoint urb, free any allocated resources. - * - * we need to check if we have done it before as for example, - * _suspend() call this; if after a suspend() we get a _disconnect() - * (as the case is when hibernating), nothing bad happens. - */ -void i2400mu_notification_release(struct i2400mu *i2400mu) -{ - struct device *dev = &i2400mu->usb_iface->dev; - - d_fnstart(4, dev, "(i2400mu %p) ", i2400mu); - if (i2400mu->notif_urb != null) { - usb_kill_urb(i2400mu->notif_urb); - kfree(i2400mu->notif_urb->transfer_buffer); - usb_free_urb(i2400mu->notif_urb); - i2400mu->notif_urb = null; - } - d_fnend(4, dev, "(i2400mu %p) ", i2400mu); -} diff --git a/drivers/staging/wimax/i2400m/usb-rx.c b/drivers/staging/wimax/i2400m/usb-rx.c --- a/drivers/staging/wimax/i2400m/usb-rx.c +++ /dev/null -/* - * intel wireless wimax connection 2400m - * usb rx handling - * - * - * copyright (c) 2007-2008 intel corporation. all rights reserved. - * - * redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * neither the name of intel corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * this software is provided by the copyright holders and contributors - * "as is" and any express or implied warranties, including, but not - * limited to, the implied warranties of merchantability and fitness for - * a particular purpose are disclaimed. 
in no event shall the copyright - * owner or contributors be liable for any direct, indirect, incidental, - * special, exemplary, or consequential damages (including, but not - * limited to, procurement of substitute goods or services; loss of use, - * data, or profits; or business interruption) however caused and on any - * theory of liability, whether in contract, strict liability, or tort - * (including negligence or otherwise) arising in any way out of the use - * of this software, even if advised of the possibility of such damage. - * - * - * intel corporation <linux-wimax@intel.com> - * yanir lubetkin <yanirx.lubetkin@intel.com> - * - initial implementation - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - * - use skb_clone(), break up processing in chunks - * - split transport/device specific - * - make buffer size dynamic to exert less memory pressure - * - * - * this handles the rx path on usb. - * - * when a notification is received that says 'there is rx data ready', - * we call i2400mu_rx_kick(); that wakes up the rx kthread, which - * reads a buffer from usb and passes it to i2400m_rx() in the generic - * handling code. the rx buffer has an specific format that is - * described in rx.c. - * - * we use a kernel thread in a loop because: - * - * - we want to be able to call the usb power management get/put - * functions (blocking) before each transaction. - * - * - we might get a lot of notifications and we don't want to submit - * a zillion reads; by serializing, we are throttling. - * - * - rx data processing can get heavy enough so that it is not - * appropriate for doing it in the usb callback; thus we run it in a - * process context. - * - * we provide a read buffer of an arbitrary size (short of a page); if - * the callback reports -eoverflow, it means it was too small, so we - * just double the size and retry (being careful to append, as - * sometimes the device provided some data). 
every now and then we - * check if the average packet size is smaller than the current packet - * size and if so, we halve it. at the end, the size of the - * preallocated buffer should be following the average received - * transaction size, adapting dynamically to it. - * - * roadmap - * - * i2400mu_rx_kick() called from notif.c when we get a - * 'data ready' notification - * i2400mu_rxd() kernel rx daemon - * i2400mu_rx() receive usb data - * i2400m_rx() send data to generic i2400m rx handling - * - * i2400mu_rx_setup() called from i2400mu_bus_dev_start() - * - * i2400mu_rx_release() called from i2400mu_bus_dev_stop() - */ -#include <linux/workqueue.h> -#include <linux/slab.h> -#include <linux/usb.h> -#include "i2400m-usb.h" - - -#define d_submodule rx -#include "usb-debug-levels.h" - -/* - * dynamic rx size - * - * we can't let the rx_size be a multiple of 512 bytes (the rx - * endpoint's max packet size). on some usb host controllers (we - * haven't been able to fully characterize which), if the device is - * about to send (for example) x bytes and we only post a buffer to - * receive n*512, it will fail to mark that as babble (so that - * i2400mu_rx() [case -eoverflow] can resize the buffer and get the - * rest). - * - * so on growing or shrinking, if it is a multiple of the - * maxpacketsize, we remove some (instead of incresing some, so in a - * buddy allocator we try to waste less space). - * - * note we also need a hook for this on i2400mu_rx() -- when we do the - * first read, we are sure we won't hit this spot because - * i240mm->rx_size has been set properly. however, if we have to - * double because of -eoverflow, when we launch the read to get the - * rest of the data, we *have* to make sure that also is not a - * multiple of the max_pkt_size. 
- */
-
-static
-size_t i2400mu_rx_size_grow(struct i2400mu *i2400mu)
-{
-    struct device *dev = &i2400mu->usb_iface->dev;
-    size_t rx_size;
-    const size_t max_pkt_size = 512;
-
-    rx_size = 2 * i2400mu->rx_size;
-    if (rx_size % max_pkt_size == 0) {
-        rx_size -= 8;
-        d_printf(1, dev,
-             "RX: expected size grew to %zu [adjusted -8] "
-             "from %zu\n",
-             rx_size, i2400mu->rx_size);
-    } else
-        d_printf(1, dev,
-             "RX: expected size grew to %zu from %zu\n",
-             rx_size, i2400mu->rx_size);
-    return rx_size;
-}
-
-
-static
-void i2400mu_rx_size_maybe_shrink(struct i2400mu *i2400mu)
-{
-    const size_t max_pkt_size = 512;
-    struct device *dev = &i2400mu->usb_iface->dev;
-
-    if (unlikely(i2400mu->rx_size_cnt >= 100
-             && i2400mu->rx_size_auto_shrink)) {
-        size_t avg_rx_size =
-            i2400mu->rx_size_acc / i2400mu->rx_size_cnt;
-        size_t new_rx_size = i2400mu->rx_size / 2;
-        if (avg_rx_size < new_rx_size) {
-            if (new_rx_size % max_pkt_size == 0) {
-                new_rx_size -= 8;
-                d_printf(1, dev,
-                     "RX: expected size shrank to %zu "
-                     "[adjusted -8] from %zu\n",
-                     new_rx_size, i2400mu->rx_size);
-            } else
-                d_printf(1, dev,
-                     "RX: expected size shrank to %zu "
-                     "from %zu\n",
-                     new_rx_size, i2400mu->rx_size);
-            i2400mu->rx_size = new_rx_size;
-            i2400mu->rx_size_cnt = 0;
-            i2400mu->rx_size_acc = i2400mu->rx_size;
-        }
-    }
-}
-
-/*
- * Receive a message with payloads from the USB bus into an skb
- *
- * @i2400mu: USB device descriptor
- * @rx_skb: skb where to place the received message
- *
- * Deals with all the USB-specifics of receiving, dynamically
- * increasing the buffer size if so needed. Returns the payload in the
- * skb, ready to process. On a zero-length packet, we retry.
- *
- * On soft USB errors, we retry (until they become too frequent and
- * then are promoted to hard); on hard USB errors, we reset the
- * device. On other errors (skb reallocation), we just drop it and
- * hope for the next invocation to solve it.
- *
- * Returns: pointer to the skb if ok, ERR_PTR on error.
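The grow/shrink rules above, including the `-8` adjustment that keeps the buffer size off a multiple of the 512-byte bulk-IN max packet size (so babble detection keeps working), are easy to check in isolation. A sketch with our own function names, not the driver's:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_PKT_SIZE 512

/* Double the RX buffer size, stepping off any multiple of the
 * bulk-IN max packet size. */
static size_t rx_size_grow(size_t rx_size)
{
    size_t new_size = 2 * rx_size;

    if (new_size % MAX_PKT_SIZE == 0)
        new_size -= 8;
    return new_size;
}

/* Halve the size when the running average says we over-allocated,
 * with the same adjustment; returns the old size when no shrink is
 * warranted. */
static size_t rx_size_maybe_shrink(size_t rx_size, size_t avg_rx_size)
{
    size_t new_size = rx_size / 2;

    if (avg_rx_size >= new_size)
        return rx_size;
    if (new_size % MAX_PKT_SIZE == 0)
        new_size -= 8;
    return new_size;
}
```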
- * note: this function might realloc the skb (if it is too small), - * so always update with the one returned. - * err_ptr() is < 0 on error. - * will return null if it cannot reallocate -- this can be - * considered a transient retryable error. - */ -static -struct sk_buff *i2400mu_rx(struct i2400mu *i2400mu, struct sk_buff *rx_skb) -{ - int result = 0; - struct device *dev = &i2400mu->usb_iface->dev; - int usb_pipe, read_size, rx_size, do_autopm; - struct usb_endpoint_descriptor *epd; - const size_t max_pkt_size = 512; - - d_fnstart(4, dev, "(i2400mu %p) ", i2400mu); - do_autopm = atomic_read(&i2400mu->do_autopm); - result = do_autopm ? - usb_autopm_get_interface(i2400mu->usb_iface) : 0; - if (result < 0) { - dev_err(dev, "rx: can't get autopm: %d ", result); - do_autopm = 0; - } - epd = usb_get_epd(i2400mu->usb_iface, i2400mu->endpoint_cfg.bulk_in); - usb_pipe = usb_rcvbulkpipe(i2400mu->usb_dev, epd->bendpointaddress); -retry: - rx_size = skb_end_pointer(rx_skb) - rx_skb->data - rx_skb->len; - if (unlikely(rx_size % max_pkt_size == 0)) { - rx_size -= 8; - d_printf(1, dev, "rx: rx_size adapted to %d [-8] ", rx_size); - } - result = usb_bulk_msg( - i2400mu->usb_dev, usb_pipe, rx_skb->data + rx_skb->len, - rx_size, &read_size, 200); - usb_mark_last_busy(i2400mu->usb_dev); - switch (result) { - case 0: - if (read_size == 0) - goto retry; /* zlp, just resubmit */ - skb_put(rx_skb, read_size); - break; - case -epipe: - /* - * stall -- maybe the device is choking with our - * requests. clear it and give it some time. if they - * happen to often, it might be another symptom, so we - * reset. - * - * no error handling for usb_clear_halt(0; if it - * works, the retry works; if it fails, this switch - * does the error handling for us. 
- */ - if (edc_inc(&i2400mu->urb_edc, - 10 * edc_max_errors, edc_error_timeframe)) { - dev_err(dev, "bm-cmd: too many stalls in " - "urb; resetting device "); - goto do_reset; - } - usb_clear_halt(i2400mu->usb_dev, usb_pipe); - msleep(10); /* give the device some time */ - goto retry; - case -einval: /* while removing driver */ - case -enodev: /* dev disconnect ... */ - case -enoent: /* just ignore it */ - case -eshutdown: - case -econnreset: - break; - case -eoverflow: { /* too small, reallocate */ - struct sk_buff *new_skb; - rx_size = i2400mu_rx_size_grow(i2400mu); - if (rx_size <= (1 << 16)) /* cap it */ - i2400mu->rx_size = rx_size; - else if (printk_ratelimit()) { - dev_err(dev, "bug? rx_size up to %d ", rx_size); - result = -einval; - goto out; - } - skb_put(rx_skb, read_size); - new_skb = skb_copy_expand(rx_skb, 0, rx_size - rx_skb->len, - gfp_kernel); - if (new_skb == null) { - kfree_skb(rx_skb); - rx_skb = null; - goto out; /* drop it...*/ - } - kfree_skb(rx_skb); - rx_skb = new_skb; - i2400mu->rx_size_cnt = 0; - i2400mu->rx_size_acc = i2400mu->rx_size; - d_printf(1, dev, "rx: size changed to %d, received %d, " - "copied %d, capacity %ld ", - rx_size, read_size, rx_skb->len, - (long) skb_end_offset(new_skb)); - goto retry; - } - /* in most cases, it happens due to the hardware scheduling a - * read when there was no data - unfortunately, we have no way - * to tell this timeout from a usb timeout. so we just ignore - * it. 
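The `edc_inc()` calls sprinkled through these error paths implement an error-density check: soft errors are tolerated until too many land inside one time window, at which point they are promoted to a hard error and the device is reset. A simplified, clock-injected sketch of that idea (field and parameter names are ours):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified error-density counter: returns true (hard error) once
 * more than max_errors soft errors occur within timeframe ticks.
 * The caller supplies the clock, which makes the logic testable. */
struct edc {
    int errorcount;
    long timestart;
};

static bool edc_inc(struct edc *edc, int max_errors, long timeframe, long now)
{
    if (now - edc->timestart > timeframe) {
        edc->errorcount = 1;    /* window expired, start a new one */
        edc->timestart = now;
        return false;
    }
    if (++edc->errorcount > max_errors)
        return true;            /* promote to hard error */
    return false;
}
```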
*/ - case -etimedout: - dev_err(dev, "rx: timeout: %d ", result); - result = 0; - break; - default: /* any error */ - if (edc_inc(&i2400mu->urb_edc, - edc_max_errors, edc_error_timeframe)) - goto error_reset; - dev_err(dev, "rx: error receiving urb: %d, retrying ", result); - goto retry; - } -out: - if (do_autopm) - usb_autopm_put_interface(i2400mu->usb_iface); - d_fnend(4, dev, "(i2400mu %p) = %p ", i2400mu, rx_skb); - return rx_skb; - -error_reset: - dev_err(dev, "rx: maximum errors in urb exceeded; " - "resetting device "); -do_reset: - usb_queue_reset_device(i2400mu->usb_iface); - rx_skb = err_ptr(result); - goto out; -} - - -/* - * kernel thread for usb reception of data - * - * this thread waits for a kick; once kicked, it will allocate an skb - * and receive a single message to it from usb (using - * i2400mu_rx()). once received, it is passed to the generic i2400m rx - * code for processing. - * - * when done processing, it runs some dirty statistics to verify if - * the last 100 messages received were smaller than half of the - * current rx buffer size. in that case, the rx buffer size is - * halved. this will helps lowering the pressure on the memory - * allocator. - * - * hard errors force the thread to exit. - */ -static -int i2400mu_rxd(void *_i2400mu) -{ - int result = 0; - struct i2400mu *i2400mu = _i2400mu; - struct i2400m *i2400m = &i2400mu->i2400m; - struct device *dev = &i2400mu->usb_iface->dev; - struct net_device *net_dev = i2400m->wimax_dev.net_dev; - size_t pending; - int rx_size; - struct sk_buff *rx_skb; - unsigned long flags; - - d_fnstart(4, dev, "(i2400mu %p) ", i2400mu); - spin_lock_irqsave(&i2400m->rx_lock, flags); - bug_on(i2400mu->rx_kthread != null); - i2400mu->rx_kthread = current; - spin_unlock_irqrestore(&i2400m->rx_lock, flags); - while (1) { - d_printf(2, dev, "rx: waiting for messages "); - pending = 0; - wait_event_interruptible( - i2400mu->rx_wq, - (kthread_should_stop() /* check this first! 
*/ - || (pending = atomic_read(&i2400mu->rx_pending_count))) - ); - if (kthread_should_stop()) - break; - if (pending == 0) - continue; - rx_size = i2400mu->rx_size; - d_printf(2, dev, "rx: reading up to %d bytes ", rx_size); - rx_skb = __netdev_alloc_skb(net_dev, rx_size, gfp_kernel); - if (rx_skb == null) { - dev_err(dev, "rx: can't allocate skb [%d bytes] ", - rx_size); - msleep(50); /* give it some time? */ - continue; - } - - /* receive the message with the payloads */ - rx_skb = i2400mu_rx(i2400mu, rx_skb); - result = ptr_err(rx_skb); - if (is_err(rx_skb)) - goto out; - atomic_dec(&i2400mu->rx_pending_count); - if (rx_skb == null || rx_skb->len == 0) { - /* some "ignorable" condition */ - kfree_skb(rx_skb); - continue; - } - - /* deliver the message to the generic i2400m code */ - i2400mu->rx_size_cnt++; - i2400mu->rx_size_acc += rx_skb->len; - result = i2400m_rx(i2400m, rx_skb); - if (result == -eio - && edc_inc(&i2400mu->urb_edc, - edc_max_errors, edc_error_timeframe)) { - goto error_reset; - } - - /* maybe adjust rx buffer size */ - i2400mu_rx_size_maybe_shrink(i2400mu); - } - result = 0; -out: - spin_lock_irqsave(&i2400m->rx_lock, flags); - i2400mu->rx_kthread = null; - spin_unlock_irqrestore(&i2400m->rx_lock, flags); - d_fnend(4, dev, "(i2400mu %p) = %d ", i2400mu, result); - return result; - -error_reset: - dev_err(dev, "rx: maximum errors in received buffer exceeded; " - "resetting device "); - usb_queue_reset_device(i2400mu->usb_iface); - goto out; -} - - -/* - * start reading from the device - * - * @i2400m: device instance - * - * notify the rx thread that there is data pending. 
- */ -void i2400mu_rx_kick(struct i2400mu *i2400mu) -{ - struct i2400m *i2400m = &i2400mu->i2400m; - struct device *dev = &i2400mu->usb_iface->dev; - - d_fnstart(3, dev, "(i2400mu %p) ", i2400m); - atomic_inc(&i2400mu->rx_pending_count); - wake_up_all(&i2400mu->rx_wq); - d_fnend(3, dev, "(i2400m %p) = void ", i2400m); -} - - -int i2400mu_rx_setup(struct i2400mu *i2400mu) -{ - int result = 0; - struct i2400m *i2400m = &i2400mu->i2400m; - struct device *dev = &i2400mu->usb_iface->dev; - struct wimax_dev *wimax_dev = &i2400m->wimax_dev; - struct task_struct *kthread; - - kthread = kthread_run(i2400mu_rxd, i2400mu, "%s-rx", - wimax_dev->name); - /* the kthread function sets i2400mu->rx_thread */ - if (is_err(kthread)) { - result = ptr_err(kthread); - dev_err(dev, "rx: cannot start thread: %d ", result); - } - return result; -} - - -void i2400mu_rx_release(struct i2400mu *i2400mu) -{ - unsigned long flags; - struct i2400m *i2400m = &i2400mu->i2400m; - struct device *dev = i2400m_dev(i2400m); - struct task_struct *kthread; - - spin_lock_irqsave(&i2400m->rx_lock, flags); - kthread = i2400mu->rx_kthread; - i2400mu->rx_kthread = null; - spin_unlock_irqrestore(&i2400m->rx_lock, flags); - if (kthread) - kthread_stop(kthread); - else - d_printf(1, dev, "rx: kthread had already exited "); -} - diff --git a/drivers/staging/wimax/i2400m/usb-tx.c b/drivers/staging/wimax/i2400m/usb-tx.c --- a/drivers/staging/wimax/i2400m/usb-tx.c +++ /dev/null -/* - * intel wireless wimax connection 2400m - * usb specific tx handling - * - * - * copyright (c) 2007-2008 intel corporation. all rights reserved. - * - * redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. 
- * * redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * neither the name of intel corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * this software is provided by the copyright holders and contributors - * "as is" and any express or implied warranties, including, but not - * limited to, the implied warranties of merchantability and fitness for - * a particular purpose are disclaimed. in no event shall the copyright - * owner or contributors be liable for any direct, indirect, incidental, - * special, exemplary, or consequential damages (including, but not - * limited to, procurement of substitute goods or services; loss of use, - * data, or profits; or business interruption) however caused and on any - * theory of liability, whether in contract, strict liability, or tort - * (including negligence or otherwise) arising in any way out of the use - * of this software, even if advised of the possibility of such damage. - * - * - * intel corporation <linux-wimax@intel.com> - * yanir lubetkin <yanirx.lubetkin@intel.com> - * - initial implementation - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - * - split transport/device specific - * - * - * takes the tx messages in the i2400m's driver tx fifo and sends them - * to the device until there are no more. - * - * if we fail sending the message, we just drop it. there isn't much - * we can do at this point. we could also retry, but the usb stack has - * already retried and still failed, so there is not much of a - * point. as well, most of the traffic is network, which has recovery - * methods for dropped packets. 
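The TX policy described here (get the next FIFO message, send it once, acknowledge it whether or not the send worked, repeat) can be reduced to a small drain loop. A sketch with illustrative names; the real driver's FIFO is the i2400m TX engine, not this toy queue:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal TX-FIFO drain mirroring the get/send/sent cycle: take the
 * next message, try to send it once, and advance regardless of the
 * result -- failed messages are simply dropped. */
struct txq {
    const char *msgs[8];
    int head, tail;
};

static const char *tx_msg_get(struct txq *q)
{
    return q->head == q->tail ? NULL : q->msgs[q->head];
}

static void tx_msg_sent(struct txq *q)
{
    q->head++;    /* consumed, success or not */
}

static int fake_send(const char *msg)
{
    (void)msg;
    return -1;    /* always "fails"; the drain loop drops and moves on */
}

/* Returns how many messages were taken off the FIFO. */
static int tx_drain(struct txq *q, int (*send)(const char *))
{
    const char *msg;
    int n = 0;

    while ((msg = tx_msg_get(q)) != NULL) {
        send(msg);        /* errors ignored: no sane retry here */
        tx_msg_sent(q);
        n++;
    }
    return n;
}
```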
- * - * for sending we just obtain a fifo buffer to send, send it to the - * usb bulk out, tell the tx fifo code we have sent it; query for - * another one, etc... until done. - * - * we use a thread so we can call usb_autopm_enable() and - * usb_autopm_disable() for each transaction; this way when the device - * goes idle, it will suspend. it also has less overhead than a - * dedicated workqueue, as it is being used for a single task. - * - * roadmap - * - * i2400mu_tx_setup() - * i2400mu_tx_release() - * - * i2400mu_bus_tx_kick() - called by the tx.c code when there - * is new data in the fifo. - * i2400mu_txd() - * i2400m_tx_msg_get() - * i2400m_tx_msg_sent() - */ -#include "i2400m-usb.h" - - -#define d_submodule tx -#include "usb-debug-levels.h" - - -/* - * get the next tx message in the tx fifo and send it to the device - * - * note that any iteration consumes a message to be sent, no matter if - * it succeeds or fails (we have no real way to retry or complain). - * - * return: 0 if ok, < 0 errno code on hard error. - */ -static -int i2400mu_tx(struct i2400mu *i2400mu, struct i2400m_msg_hdr *tx_msg, - size_t tx_msg_size) -{ - int result = 0; - struct i2400m *i2400m = &i2400mu->i2400m; - struct device *dev = &i2400mu->usb_iface->dev; - int usb_pipe, sent_size, do_autopm; - struct usb_endpoint_descriptor *epd; - - d_fnstart(4, dev, "(i2400mu %p) ", i2400mu); - do_autopm = atomic_read(&i2400mu->do_autopm); - result = do_autopm ? - usb_autopm_get_interface(i2400mu->usb_iface) : 0; - if (result < 0) { - dev_err(dev, "tx: can't get autopm: %d ", result); - do_autopm = 0; - } - epd = usb_get_epd(i2400mu->usb_iface, i2400mu->endpoint_cfg.bulk_out); - usb_pipe = usb_sndbulkpipe(i2400mu->usb_dev, epd->bendpointaddress); -retry: - result = usb_bulk_msg(i2400mu->usb_dev, usb_pipe, - tx_msg, tx_msg_size, &sent_size, 200); - usb_mark_last_busy(i2400mu->usb_dev); - switch (result) { - case 0: - if (sent_size != tx_msg_size) { /* too short? 
drop it */ - dev_err(dev, "tx: short write (%d b vs %zu " - "expected) ", sent_size, tx_msg_size); - result = -eio; - } - break; - case -epipe: - /* - * stall -- maybe the device is choking with our - * requests. clear it and give it some time. if they - * happen to often, it might be another symptom, so we - * reset. - * - * no error handling for usb_clear_halt(0; if it - * works, the retry works; if it fails, this switch - * does the error handling for us. - */ - if (edc_inc(&i2400mu->urb_edc, - 10 * edc_max_errors, edc_error_timeframe)) { - dev_err(dev, "bm-cmd: too many stalls in " - "urb; resetting device "); - usb_queue_reset_device(i2400mu->usb_iface); - } else { - usb_clear_halt(i2400mu->usb_dev, usb_pipe); - msleep(10); /* give the device some time */ - goto retry; - } - fallthrough; - case -einval: /* while removing driver */ - case -enodev: /* dev disconnect ... */ - case -enoent: /* just ignore it */ - case -eshutdown: /* and exit */ - case -econnreset: - result = -eshutdown; - break; - default: /* some error? */ - if (edc_inc(&i2400mu->urb_edc, - edc_max_errors, edc_error_timeframe)) { - dev_err(dev, "tx: maximum errors in urb " - "exceeded; resetting device "); - usb_queue_reset_device(i2400mu->usb_iface); - } else { - dev_err(dev, "tx: cannot send urb; retrying. " - "tx_msg @%zu %zu b [%d sent]: %d ", - (void *) tx_msg - i2400m->tx_buf, - tx_msg_size, sent_size, result); - goto retry; - } - } - if (do_autopm) - usb_autopm_put_interface(i2400mu->usb_iface); - d_fnend(4, dev, "(i2400mu %p) = result ", i2400mu); - return result; -} - - -/* - * get the next tx message in the tx fifo and send it to the device - * - * note we exit the loop if i2400mu_tx() fails; that function only - * fails on hard error (failing to tx a buffer not being one of them, - * see its doc). 
- * - * return: 0 - */ -static -int i2400mu_txd(void *_i2400mu) -{ - struct i2400mu *i2400mu = _i2400mu; - struct i2400m *i2400m = &i2400mu->i2400m; - struct device *dev = &i2400mu->usb_iface->dev; - struct i2400m_msg_hdr *tx_msg; - size_t tx_msg_size; - unsigned long flags; - - d_fnstart(4, dev, "(i2400mu %p) ", i2400mu); - - spin_lock_irqsave(&i2400m->tx_lock, flags); - bug_on(i2400mu->tx_kthread != null); - i2400mu->tx_kthread = current; - spin_unlock_irqrestore(&i2400m->tx_lock, flags); - - while (1) { - d_printf(2, dev, "tx: waiting for messages "); - tx_msg = null; - wait_event_interruptible( - i2400mu->tx_wq, - (kthread_should_stop() /* check this first! */ - || (tx_msg = i2400m_tx_msg_get(i2400m, &tx_msg_size))) - ); - if (kthread_should_stop()) - break; - warn_on(tx_msg == null); /* should not happen...*/ - d_printf(2, dev, "tx: submitting %zu bytes ", tx_msg_size); - d_dump(5, dev, tx_msg, tx_msg_size); - /* yeah, we ignore errors ... not much we can do */ - i2400mu_tx(i2400mu, tx_msg, tx_msg_size); - i2400m_tx_msg_sent(i2400m); /* ack it, advance the fifo */ - } - - spin_lock_irqsave(&i2400m->tx_lock, flags); - i2400mu->tx_kthread = null; - spin_unlock_irqrestore(&i2400m->tx_lock, flags); - - d_fnend(4, dev, "(i2400mu %p) ", i2400mu); - return 0; -} - - -/* - * i2400m tx engine notifies us that there is data in the fifo ready - * for tx - * - * if there is a urb in flight, don't do anything; when it finishes, - * it will see there is data in the fifo and send it. else, just - * submit a write. 
- */ -void i2400mu_bus_tx_kick(struct i2400m *i2400m) -{ - struct i2400mu *i2400mu = container_of(i2400m, struct i2400mu, i2400m); - struct device *dev = &i2400mu->usb_iface->dev; - - d_fnstart(3, dev, "(i2400m %p) = void ", i2400m); - wake_up_all(&i2400mu->tx_wq); - d_fnend(3, dev, "(i2400m %p) = void ", i2400m); -} - - -int i2400mu_tx_setup(struct i2400mu *i2400mu) -{ - int result = 0; - struct i2400m *i2400m = &i2400mu->i2400m; - struct device *dev = &i2400mu->usb_iface->dev; - struct wimax_dev *wimax_dev = &i2400m->wimax_dev; - struct task_struct *kthread; - - kthread = kthread_run(i2400mu_txd, i2400mu, "%s-tx", - wimax_dev->name); - /* the kthread function sets i2400mu->tx_thread */ - if (is_err(kthread)) { - result = ptr_err(kthread); - dev_err(dev, "tx: cannot start thread: %d ", result); - } - return result; -} - -void i2400mu_tx_release(struct i2400mu *i2400mu) -{ - unsigned long flags; - struct i2400m *i2400m = &i2400mu->i2400m; - struct device *dev = i2400m_dev(i2400m); - struct task_struct *kthread; - - spin_lock_irqsave(&i2400m->tx_lock, flags); - kthread = i2400mu->tx_kthread; - i2400mu->tx_kthread = null; - spin_unlock_irqrestore(&i2400m->tx_lock, flags); - if (kthread) - kthread_stop(kthread); - else - d_printf(1, dev, "tx: kthread had already exited "); -} diff --git a/drivers/staging/wimax/i2400m/usb.c b/drivers/staging/wimax/i2400m/usb.c --- a/drivers/staging/wimax/i2400m/usb.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-only -/* - * intel wireless wimax connection 2400m - * linux driver model glue for usb device, reset & fw upload - * - * copyright (c) 2007-2008 intel corporation <linux-wimax@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - * yanir lubetkin <yanirx.lubetkin@intel.com> - * - * see i2400m-usb.h for a general description of this driver. 
- * - * this file implements driver model glue, and hook ups for the - * generic driver to implement the bus-specific functions (device - * communication setup/tear down, firmware upload and resetting). - * - * roadmap - * - * i2400mu_probe() - * alloc_netdev()... - * i2400mu_netdev_setup() - * i2400mu_init() - * i2400m_netdev_setup() - * i2400m_setup()... - * - * i2400mu_disconnect - * i2400m_release() - * free_netdev() - * - * i2400mu_suspend() - * i2400m_cmd_enter_powersave() - * i2400mu_notification_release() - * - * i2400mu_resume() - * i2400mu_notification_setup() - * - * i2400mu_bus_dev_start() called by i2400m_dev_start() [who is - * i2400mu_tx_setup() called by i2400m_setup()] - * i2400mu_rx_setup() - * i2400mu_notification_setup() - * - * i2400mu_bus_dev_stop() called by i2400m_dev_stop() [who is - * i2400mu_notification_release() called by i2400m_release()] - * i2400mu_rx_release() - * i2400mu_tx_release() - * - * i2400mu_bus_reset() called by i2400m_reset - * __i2400mu_reset() - * __i2400mu_send_barker() - * usb_reset_device() - */ -#include "i2400m-usb.h" -#include "linux-wimax-i2400m.h" -#include <linux/debugfs.h> -#include <linux/ethtool.h> -#include <linux/slab.h> -#include <linux/module.h> - - -#define d_submodule usb -#include "usb-debug-levels.h" - -static char i2400mu_debug_params[128]; -module_param_string(debug, i2400mu_debug_params, sizeof(i2400mu_debug_params), - 0644); -module_parm_desc(debug, - "string of space-separated name:value pairs, where names " - "are the different debug submodules and value are the " - "initial debug value to set."); - -/* our firmware file name */ -static const char *i2400mu_bus_fw_names_5x50[] = { -#define i2400mu_fw_file_name_v1_5 "i2400m-fw-usb-1.5.sbcf" - i2400mu_fw_file_name_v1_5, -#define i2400mu_fw_file_name_v1_4 "i2400m-fw-usb-1.4.sbcf" - i2400mu_fw_file_name_v1_4, - null, -}; - - -static const char *i2400mu_bus_fw_names_6050[] = { -#define i6050u_fw_file_name_v1_5 "i6050-fw-usb-1.5.sbcf" - 
i6050u_fw_file_name_v1_5, - null, -}; - - -static -int i2400mu_bus_dev_start(struct i2400m *i2400m) -{ - int result; - struct i2400mu *i2400mu = container_of(i2400m, struct i2400mu, i2400m); - struct device *dev = &i2400mu->usb_iface->dev; - - d_fnstart(3, dev, "(i2400m %p) ", i2400m); - result = i2400mu_tx_setup(i2400mu); - if (result < 0) - goto error_usb_tx_setup; - result = i2400mu_rx_setup(i2400mu); - if (result < 0) - goto error_usb_rx_setup; - result = i2400mu_notification_setup(i2400mu); - if (result < 0) - goto error_notif_setup; - d_fnend(3, dev, "(i2400m %p) = %d ", i2400m, result); - return result; - -error_notif_setup: - i2400mu_rx_release(i2400mu); -error_usb_rx_setup: - i2400mu_tx_release(i2400mu); -error_usb_tx_setup: - d_fnend(3, dev, "(i2400m %p) = void ", i2400m); - return result; -} - - -static -void i2400mu_bus_dev_stop(struct i2400m *i2400m) -{ - struct i2400mu *i2400mu = container_of(i2400m, struct i2400mu, i2400m); - struct device *dev = &i2400mu->usb_iface->dev; - - d_fnstart(3, dev, "(i2400m %p) ", i2400m); - i2400mu_notification_release(i2400mu); - i2400mu_rx_release(i2400mu); - i2400mu_tx_release(i2400mu); - d_fnend(3, dev, "(i2400m %p) = void ", i2400m); -} - - -/* - * sends a barker buffer to the device - * - * this helper will allocate a kmalloced buffer and use it to transmit - * (then free it). reason for this is that other arches cannot use - * stack/vmalloc/text areas for dma transfers. - * - * error recovery here is simpler: anything is considered a hard error - * and will move the reset code to use a last-resort bus-based reset. 
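The error handling described above (for bulk TX and for sending boot barkers alike) retries on transient failures until an "error density counter" decides that too many errors happened within a timeframe, and only then escalates to a full USB device reset. Below is a minimal userspace sketch of that counting scheme; the struct and function names imitate the driver's `edc_inc()`, but the real implementation lives in the driver's headers and differs in detail:

```c
#include <assert.h>
#include <time.h>

/* Simplified model of the driver's "error density counter" (EDC):
 * escalate (e.g. reset the device) only when more than max_errors
 * happen inside one timeframe. This is a userspace sketch, not the
 * kernel code. */
struct edc {
	unsigned errors;         /* errors seen in the current timeframe */
	time_t timeframe_start;  /* when the current timeframe began */
};

/* Returns 1 when the error budget for the timeframe is exhausted. */
static int edc_inc(struct edc *edc, unsigned max_errors,
		   time_t timeframe, time_t now)
{
	if (now - edc->timeframe_start > timeframe) {
		/* Stale timeframe: restart the count */
		edc->timeframe_start = now;
		edc->errors = 0;
	}
	if (++edc->errors > max_errors)
		return 1;	/* too many errors, too fast: escalate */
	return 0;
}
```

Note the driver applies the same gate with different budgets: plain URB errors use the base maximum, while `-EPIPE` stalls get a 10x larger budget before a reset is queued.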
- */ -static -int __i2400mu_send_barker(struct i2400mu *i2400mu, - const __le32 *barker, - size_t barker_size, - unsigned endpoint) -{ - struct usb_endpoint_descriptor *epd = null; - int pipe, actual_len, ret; - struct device *dev = &i2400mu->usb_iface->dev; - void *buffer; - int do_autopm = 1; - - ret = usb_autopm_get_interface(i2400mu->usb_iface); - if (ret < 0) { - dev_err(dev, "reset: can't get autopm: %d ", ret); - do_autopm = 0; - } - ret = -enomem; - buffer = kmalloc(barker_size, gfp_kernel); - if (buffer == null) - goto error_kzalloc; - epd = usb_get_epd(i2400mu->usb_iface, endpoint); - pipe = usb_sndbulkpipe(i2400mu->usb_dev, epd->bendpointaddress); - memcpy(buffer, barker, barker_size); -retry: - ret = usb_bulk_msg(i2400mu->usb_dev, pipe, buffer, barker_size, - &actual_len, 200); - switch (ret) { - case 0: - if (actual_len != barker_size) { /* too short? drop it */ - dev_err(dev, "e: %s: short write (%d b vs %zu " - "expected) ", - __func__, actual_len, barker_size); - ret = -eio; - } - break; - case -epipe: - /* - * stall -- maybe the device is choking with our - * requests. clear it and give it some time. if they - * happen to often, it might be another symptom, so we - * reset. - * - * no error handling for usb_clear_halt(0; if it - * works, the retry works; if it fails, this switch - * does the error handling for us. - */ - if (edc_inc(&i2400mu->urb_edc, - 10 * edc_max_errors, edc_error_timeframe)) { - dev_err(dev, "e: %s: too many stalls in " - "urb; resetting device ", __func__); - usb_queue_reset_device(i2400mu->usb_iface); - /* fallthrough */ - } else { - usb_clear_halt(i2400mu->usb_dev, pipe); - msleep(10); /* give the device some time */ - goto retry; - } - fallthrough; - case -einval: /* while removing driver */ - case -enodev: /* dev disconnect ... */ - case -enoent: /* just ignore it */ - case -eshutdown: /* and exit */ - case -econnreset: - ret = -eshutdown; - break; - default: /* some error? 
*/ - if (edc_inc(&i2400mu->urb_edc, - edc_max_errors, edc_error_timeframe)) { - dev_err(dev, "e: %s: maximum errors in urb " - "exceeded; resetting device ", - __func__); - usb_queue_reset_device(i2400mu->usb_iface); - } else { - dev_warn(dev, "w: %s: cannot send urb: %d ", - __func__, ret); - goto retry; - } - } - kfree(buffer); -error_kzalloc: - if (do_autopm) - usb_autopm_put_interface(i2400mu->usb_iface); - return ret; -} - - -/* - * reset a device at different levels (warm, cold or bus) - * - * @i2400m: device descriptor - * @reset_type: soft, warm or bus reset (i2400m_rt_warm/soft/bus) - * - * warm and cold resets get a usb reset if they fail. - * - * warm reset: - * - * the device will be fully reset internally, but won't be - * disconnected from the usb bus (so no reenumeration will - * happen). firmware upload will be necessary. - * - * the device will send a reboot barker in the notification endpoint - * that will trigger the driver to reinitialize the state - * automatically from notif.c:i2400m_notification_grok() into - * i2400m_dev_bootstrap_delayed(). - * - * cold and bus (usb) reset: - * - * the device will be fully reset internally, disconnected from the - * usb bus an a reenumeration will happen. firmware upload will be - * necessary. thus, we don't do any locking or struct - * reinitialization, as we are going to be fully disconnected and - * reenumerated. - * - * note we need to return -enodev if a warm reset was requested and we - * had to resort to a bus reset. see i2400m_op_reset(), wimax_reset() - * and wimax_dev->op_reset. 
- * - * warning: no driver state saved/fixed - */ -static -int i2400mu_bus_reset(struct i2400m *i2400m, enum i2400m_reset_type rt) -{ - int result; - struct i2400mu *i2400mu = - container_of(i2400m, struct i2400mu, i2400m); - struct device *dev = i2400m_dev(i2400m); - static const __le32 i2400m_warm_boot_barker[4] = { - cpu_to_le32(i2400m_warm_reset_barker), - cpu_to_le32(i2400m_warm_reset_barker), - cpu_to_le32(i2400m_warm_reset_barker), - cpu_to_le32(i2400m_warm_reset_barker), - }; - static const __le32 i2400m_cold_boot_barker[4] = { - cpu_to_le32(i2400m_cold_reset_barker), - cpu_to_le32(i2400m_cold_reset_barker), - cpu_to_le32(i2400m_cold_reset_barker), - cpu_to_le32(i2400m_cold_reset_barker), - }; - - d_fnstart(3, dev, "(i2400m %p rt %u) ", i2400m, rt); - if (rt == i2400m_rt_warm) - result = __i2400mu_send_barker( - i2400mu, i2400m_warm_boot_barker, - sizeof(i2400m_warm_boot_barker), - i2400mu->endpoint_cfg.bulk_out); - else if (rt == i2400m_rt_cold) - result = __i2400mu_send_barker( - i2400mu, i2400m_cold_boot_barker, - sizeof(i2400m_cold_boot_barker), - i2400mu->endpoint_cfg.reset_cold); - else if (rt == i2400m_rt_bus) { - result = usb_reset_device(i2400mu->usb_dev); - switch (result) { - case 0: - case -einval: /* device is gone */ - case -enodev: - case -enoent: - case -eshutdown: - result = 0; - break; /* we assume the device is disconnected */ - default: - dev_err(dev, "usb reset failed (%d), giving up! ", - result); - } - } else { - result = -einval; /* shut gcc up in certain arches */ - bug(); - } - if (result < 0 - && result != -einval /* device is gone */ - && rt != i2400m_rt_bus) { - /* - * things failed -- resort to lower level reset, that - * we queue in another context; the reason for this is - * that the pre and post reset functionality requires - * the i2400m->init_mutex; rt_warm and rt_cold can - * come from areas where i2400m->init_mutex is taken. - */ - dev_err(dev, "%s reset failed (%d); trying usb reset ", - rt == i2400m_rt_warm ? 
"warm" : "cold", result); - usb_queue_reset_device(i2400mu->usb_iface); - result = -enodev; - } - d_fnend(3, dev, "(i2400m %p rt %u) = %d ", i2400m, rt, result); - return result; -} - -static void i2400mu_get_drvinfo(struct net_device *net_dev, - struct ethtool_drvinfo *info) -{ - struct i2400m *i2400m = net_dev_to_i2400m(net_dev); - struct i2400mu *i2400mu = container_of(i2400m, struct i2400mu, i2400m); - struct usb_device *udev = i2400mu->usb_dev; - - strscpy(info->driver, kbuild_modname, sizeof(info->driver)); - strscpy(info->fw_version, i2400m->fw_name ? : "", - sizeof(info->fw_version)); - usb_make_path(udev, info->bus_info, sizeof(info->bus_info)); -} - -static const struct ethtool_ops i2400mu_ethtool_ops = { - .get_drvinfo = i2400mu_get_drvinfo, - .get_link = ethtool_op_get_link, -}; - -static -void i2400mu_netdev_setup(struct net_device *net_dev) -{ - struct i2400m *i2400m = net_dev_to_i2400m(net_dev); - struct i2400mu *i2400mu = container_of(i2400m, struct i2400mu, i2400m); - i2400mu_init(i2400mu); - i2400m_netdev_setup(net_dev); - net_dev->ethtool_ops = &i2400mu_ethtool_ops; -} - - -/* - * debug levels control; see debug.h - */ -struct d_level d_level[] = { - d_submodule_define(usb), - d_submodule_define(fw), - d_submodule_define(notif), - d_submodule_define(rx), - d_submodule_define(tx), -}; -size_t d_level_size = array_size(d_level); - -static -void i2400mu_debugfs_add(struct i2400mu *i2400mu) -{ - struct dentry *dentry = i2400mu->i2400m.wimax_dev.debugfs_dentry; - - dentry = debugfs_create_dir("i2400m-usb", dentry); - i2400mu->debugfs_dentry = dentry; - - d_level_register_debugfs("dl_", usb, dentry); - d_level_register_debugfs("dl_", fw, dentry); - d_level_register_debugfs("dl_", notif, dentry); - d_level_register_debugfs("dl_", rx, dentry); - d_level_register_debugfs("dl_", tx, dentry); - - /* don't touch these if you don't know what you are doing */ - debugfs_create_u8("rx_size_auto_shrink", 0600, dentry, - &i2400mu->rx_size_auto_shrink); - - 
debugfs_create_size_t("rx_size", 0600, dentry, &i2400mu->rx_size); -} - - -static struct device_type i2400mu_type = { - .name = "wimax", -}; - -/* - * probe a i2400m interface and register it - * - * @iface: usb interface to link to - * @id: usb class/subclass/protocol id - * @returns: 0 if ok, < 0 errno code on error. - * - * alloc a net device, initialize the bus-specific details and then - * calls the bus-generic initialization routine. that will register - * the wimax and netdev devices, upload the firmware [using - * _bus_bm_*()], call _bus_dev_start() to finalize the setup of the - * communication with the device and then will start to talk to it to - * finnish setting it up. - */ -static -int i2400mu_probe(struct usb_interface *iface, - const struct usb_device_id *id) -{ - int result; - struct net_device *net_dev; - struct device *dev = &iface->dev; - struct i2400m *i2400m; - struct i2400mu *i2400mu; - struct usb_device *usb_dev = interface_to_usbdev(iface); - - if (iface->cur_altsetting->desc.bnumendpoints < 4) - return -enodev; - - if (usb_dev->speed != usb_speed_high) - dev_err(dev, "device not connected as high speed "); - - /* allocate instance [calls i2400m_netdev_setup() on it]. */ - result = -enomem; - net_dev = alloc_netdev(sizeof(*i2400mu), "wmx%d", net_name_unknown, - i2400mu_netdev_setup); - if (net_dev == null) { - dev_err(dev, "no memory for network device instance "); - goto error_alloc_netdev; - } - set_netdev_dev(net_dev, dev); - set_netdev_devtype(net_dev, &i2400mu_type); - i2400m = net_dev_to_i2400m(net_dev); - i2400mu = container_of(i2400m, struct i2400mu, i2400m); - i2400m->wimax_dev.net_dev = net_dev; - i2400mu->usb_dev = usb_get_dev(usb_dev); - i2400mu->usb_iface = iface; - usb_set_intfdata(iface, i2400mu); - - i2400m->bus_tx_block_size = i2400mu_blk_size; - /* - * room required in the tx queue for usb message to accommodate - * a smallest payload while allocating header space is 16 bytes. 
- * adding this room for the new tx message increases the - * possibilities of including any payload with size <= 16 bytes. - */ - i2400m->bus_tx_room_min = i2400mu_blk_size; - i2400m->bus_pl_size_max = i2400mu_pl_size_max; - i2400m->bus_setup = null; - i2400m->bus_dev_start = i2400mu_bus_dev_start; - i2400m->bus_dev_stop = i2400mu_bus_dev_stop; - i2400m->bus_release = null; - i2400m->bus_tx_kick = i2400mu_bus_tx_kick; - i2400m->bus_reset = i2400mu_bus_reset; - i2400m->bus_bm_retries = i2400m_usb_boot_retries; - i2400m->bus_bm_cmd_send = i2400mu_bus_bm_cmd_send; - i2400m->bus_bm_wait_for_ack = i2400mu_bus_bm_wait_for_ack; - i2400m->bus_bm_mac_addr_impaired = 0; - - switch (id->idproduct) { - case usb_device_id_i6050: - case usb_device_id_i6050_2: - case usb_device_id_i6150: - case usb_device_id_i6150_2: - case usb_device_id_i6150_3: - case usb_device_id_i6250: - i2400mu->i6050 = 1; - break; - default: - break; - } - - if (i2400mu->i6050) { - i2400m->bus_fw_names = i2400mu_bus_fw_names_6050; - i2400mu->endpoint_cfg.bulk_out = 0; - i2400mu->endpoint_cfg.notification = 3; - i2400mu->endpoint_cfg.reset_cold = 2; - i2400mu->endpoint_cfg.bulk_in = 1; - } else { - i2400m->bus_fw_names = i2400mu_bus_fw_names_5x50; - i2400mu->endpoint_cfg.bulk_out = 0; - i2400mu->endpoint_cfg.notification = 1; - i2400mu->endpoint_cfg.reset_cold = 2; - i2400mu->endpoint_cfg.bulk_in = 3; - } -#ifdef config_pm - iface->needs_remote_wakeup = 1; /* autosuspend (15s delay) */ - device_init_wakeup(dev, 1); - pm_runtime_set_autosuspend_delay(&usb_dev->dev, 15000); - usb_enable_autosuspend(usb_dev); -#endif - - result = i2400m_setup(i2400m, i2400m_bri_mac_reinit); - if (result < 0) { - dev_err(dev, "cannot setup device: %d ", result); - goto error_setup; - } - i2400mu_debugfs_add(i2400mu); - return 0; - -error_setup: - usb_set_intfdata(iface, null); - usb_put_dev(i2400mu->usb_dev); - free_netdev(net_dev); -error_alloc_netdev: - return result; -} - - -/* - * disconnect a i2400m from the system. 
- * - * i2400m_stop() has been called before, so al the rx and tx contexts - * have been taken down already. make sure the queue is stopped, - * unregister netdev and i2400m, free and kill. - */ -static -void i2400mu_disconnect(struct usb_interface *iface) -{ - struct i2400mu *i2400mu = usb_get_intfdata(iface); - struct i2400m *i2400m = &i2400mu->i2400m; - struct net_device *net_dev = i2400m->wimax_dev.net_dev; - struct device *dev = &iface->dev; - - d_fnstart(3, dev, "(iface %p i2400m %p) ", iface, i2400m); - - debugfs_remove_recursive(i2400mu->debugfs_dentry); - i2400m_release(i2400m); - usb_set_intfdata(iface, null); - usb_put_dev(i2400mu->usb_dev); - free_netdev(net_dev); - d_fnend(3, dev, "(iface %p i2400m %p) = void ", iface, i2400m); -} - - -/* - * get the device ready for usb port or system standby and hibernation - * - * usb port and system standby are handled the same. - * - * when the system hibernates, the usb device is powered down and then - * up, so we don't really have to do much here, as it will be seen as - * a reconnect. still for simplicity we consider this case the same as - * suspend, so that the device has a chance to do notify the base - * station (if connected). - * - * so at the end, the three cases require common handling. - * - * if at the time of this call the device's firmware is not loaded, - * nothing has to be done. note we can be "loose" about not reading - * i2400m->updown under i2400m->init_mutex. if it happens to change - * inmediately, other parts of the call flow will fail and effectively - * catch it. 
- * - * if the firmware is loaded, we need to: - * - * - tell the device to go into host interface power save mode, wait - * for it to ack - * - * this is quite more interesting than it is; we need to execute a - * command, but this time, we don't want the code in usb-{tx,rx}.c - * to call the usb_autopm_get/put_interface() barriers as it'd - * deadlock, so we need to decrement i2400mu->do_autopm, that acts - * as a poor man's semaphore. ugly, but it works. - * - * as well, the device might refuse going to sleep for whichever - * reason. in this case we just fail. for system suspend/hibernate, - * we *can't* fail. we check pmsg_is_auto to see if the - * suspend call comes from the usb stack or from the system and act - * in consequence. - * - * - stop the notification endpoint polling - */ -static -int i2400mu_suspend(struct usb_interface *iface, pm_message_t pm_msg) -{ - int result = 0; - struct device *dev = &iface->dev; - struct i2400mu *i2400mu = usb_get_intfdata(iface); - unsigned is_autosuspend = 0; - struct i2400m *i2400m = &i2400mu->i2400m; - -#ifdef config_pm - if (pmsg_is_auto(pm_msg)) - is_autosuspend = 1; -#endif - - d_fnstart(3, dev, "(iface %p pm_msg %u) ", iface, pm_msg.event); - rmb(); /* see i2400m->updown's documentation */ - if (i2400m->updown == 0) - goto no_firmware; - if (i2400m->state == i2400m_ss_data_path_connected && is_autosuspend) { - /* ugh -- the device is connected and this suspend - * request is an autosuspend one (not a system standby - * / hibernate). - * - * the only way the device can go to standby is if the - * link with the base station is in idle mode; that - * were the case, we'd be in status - * i2400m_ss_connected_idle. but we are not. 
- * - * if we *tell* him to go power save now, it'll reset - * as a precautionary measure, so if this is an - * autosuspend thing, say no and it'll come back - * later, when the link is idle - */ - result = -ebadf; - d_printf(1, dev, "fw up, link up, not-idle, autosuspend: " - "not entering powersave "); - goto error_not_now; - } - d_printf(1, dev, "fw up: entering powersave "); - atomic_dec(&i2400mu->do_autopm); - result = i2400m_cmd_enter_powersave(i2400m); - atomic_inc(&i2400mu->do_autopm); - if (result < 0 && !is_autosuspend) { - /* system suspend, can't fail */ - dev_err(dev, "failed to suspend, will reset on resume "); - result = 0; - } - if (result < 0) - goto error_enter_powersave; - i2400mu_notification_release(i2400mu); - d_printf(1, dev, "powersave requested "); -error_enter_powersave: -error_not_now: -no_firmware: - d_fnend(3, dev, "(iface %p pm_msg %u) = %d ", - iface, pm_msg.event, result); - return result; -} - - -static -int i2400mu_resume(struct usb_interface *iface) -{ - int ret = 0; - struct device *dev = &iface->dev; - struct i2400mu *i2400mu = usb_get_intfdata(iface); - struct i2400m *i2400m = &i2400mu->i2400m; - - d_fnstart(3, dev, "(iface %p) ", iface); - rmb(); /* see i2400m->updown's documentation */ - if (i2400m->updown == 0) { - d_printf(1, dev, "fw was down, no resume needed "); - goto out; - } - d_printf(1, dev, "fw was up, resuming "); - i2400mu_notification_setup(i2400mu); - /* usb has flow control, so we don't need to give it time to - * come back; otherwise, we'd use something like a get-state - * command... 
*/ -out: - d_fnend(3, dev, "(iface %p) = %d ", iface, ret); - return ret; -} - - -static -int i2400mu_reset_resume(struct usb_interface *iface) -{ - int result; - struct device *dev = &iface->dev; - struct i2400mu *i2400mu = usb_get_intfdata(iface); - struct i2400m *i2400m = &i2400mu->i2400m; - - d_fnstart(3, dev, "(iface %p) ", iface); - result = i2400m_dev_reset_handle(i2400m, "device reset on resume"); - d_fnend(3, dev, "(iface %p) = %d ", iface, result); - return result < 0 ? result : 0; -} - - -/* - * another driver or user space is triggering a reset on the device - * which contains the interface passed as an argument. cease io and - * save any device state you need to restore. - * - * if you need to allocate memory here, use gfp_noio or gfp_atomic, if - * you are in atomic context. - */ -static -int i2400mu_pre_reset(struct usb_interface *iface) -{ - struct i2400mu *i2400mu = usb_get_intfdata(iface); - return i2400m_pre_reset(&i2400mu->i2400m); -} - - -/* - * the reset has completed. restore any saved device state and begin - * using the device again. - * - * if you need to allocate memory here, use gfp_noio or gfp_atomic, if - * you are in atomic context. 
- */
-static
-int i2400mu_post_reset(struct usb_interface *iface)
-{
-	struct i2400mu *i2400mu = usb_get_intfdata(iface);
-	return i2400m_post_reset(&i2400mu->i2400m);
-}
-
-
-static
-struct usb_device_id i2400mu_id_table[] = {
-	{ USB_DEVICE(0x8086, USB_DEVICE_ID_I6050) },
-	{ USB_DEVICE(0x8086, USB_DEVICE_ID_I6050_2) },
-	{ USB_DEVICE(0x8087, USB_DEVICE_ID_I6150) },
-	{ USB_DEVICE(0x8087, USB_DEVICE_ID_I6150_2) },
-	{ USB_DEVICE(0x8087, USB_DEVICE_ID_I6150_3) },
-	{ USB_DEVICE(0x8086, USB_DEVICE_ID_I6250) },
-	{ USB_DEVICE(0x8086, 0x0181) },
-	{ USB_DEVICE(0x8086, 0x1403) },
-	{ USB_DEVICE(0x8086, 0x1405) },
-	{ USB_DEVICE(0x8086, 0x0180) },
-	{ USB_DEVICE(0x8086, 0x0182) },
-	{ USB_DEVICE(0x8086, 0x1406) },
-	{ USB_DEVICE(0x8086, 0x1403) },
-	{ },
-};
-MODULE_DEVICE_TABLE(usb, i2400mu_id_table);
-
-
-static
-struct usb_driver i2400mu_driver = {
-	.name = KBUILD_MODNAME,
-	.suspend = i2400mu_suspend,
-	.resume = i2400mu_resume,
-	.reset_resume = i2400mu_reset_resume,
-	.probe = i2400mu_probe,
-	.disconnect = i2400mu_disconnect,
-	.pre_reset = i2400mu_pre_reset,
-	.post_reset = i2400mu_post_reset,
-	.id_table = i2400mu_id_table,
-	.supports_autosuspend = 1,
-};
-
-static
-int __init i2400mu_driver_init(void)
-{
-	d_parse_params(D_LEVEL, D_LEVEL_SIZE, i2400mu_debug_params,
-		       "i2400m_usb.debug");
-	return usb_register(&i2400mu_driver);
-}
-module_init(i2400mu_driver_init);
-
-
-static
-void __exit i2400mu_driver_exit(void)
-{
-	usb_deregister(&i2400mu_driver);
-}
-module_exit(i2400mu_driver_exit);
-
-MODULE_AUTHOR("Intel Corporation <linux-wimax@intel.com>");
-MODULE_DESCRIPTION("Driver for USB based Intel Wireless WiMAX Connection 2400M "
-		   "(5x50 & 6050)");
-MODULE_LICENSE("GPL");
-MODULE_FIRMWARE(I2400MU_FW_FILE_NAME_v1_5);
-MODULE_FIRMWARE(I6050U_FW_FILE_NAME_v1_5);
diff --git a/drivers/staging/wimax/id-table.c b/drivers/staging/wimax/id-table.c
--- a/drivers/staging/wimax/id-table.c
+++ /dev/null
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Linux WiMAX
- * Mapping of generic netlink family IDs to net devices
- *
- * Copyright (C) 2005-2006 Intel Corporation <linux-wimax@intel.com>
- * Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
- *
- * We assign a single generic netlink family ID to each device (to
- * simplify lookup).
- *
- * We need a way to map family ID to a wimax_dev pointer.
- *
- * The idea is to use a very simple lookup. Using a netlink attribute
- * with (for example) the interface name implies a heavier search over
- * all the network devices; seemed kind of a waste given that we know
- * we are looking for a WiMAX device and that most systems will have
- * just a single WiMAX adapter.
- *
- * We put all the WiMAX devices in the system in a linked list and
- * match the generic netlink family ID against the list.
- *
- * By using a linked list, the case of a single adapter in the system
- * becomes (almost) no overhead, while still working for many more. If
- * it ever goes beyond two, I'll be surprised.
- */
-#include <linux/device.h>
-#include <net/genetlink.h>
-#include <linux/netdevice.h>
-#include <linux/list.h>
-#include "linux-wimax.h"
-#include "wimax-internal.h"
-
-
-#define D_SUBMODULE id_table
-#include "debug-levels.h"
-
-
-static DEFINE_SPINLOCK(wimax_id_table_lock);
-static struct list_head wimax_id_table = LIST_HEAD_INIT(wimax_id_table);
-
-
-/*
- * wimax_id_table_add - add a generic netlink family ID / wimax_dev mapping
- *
- * @wimax_dev: WiMAX device descriptor to associate to the generic
- *     netlink family ID.
- *
- * Look for an empty spot in the ID table; if none found, double the
- * table's size and get the first spot.
- */ -void wimax_id_table_add(struct wimax_dev *wimax_dev) -{ - d_fnstart(3, null, "(wimax_dev %p) ", wimax_dev); - spin_lock(&wimax_id_table_lock); - list_add(&wimax_dev->id_table_node, &wimax_id_table); - spin_unlock(&wimax_id_table_lock); - d_fnend(3, null, "(wimax_dev %p) ", wimax_dev); -} - - -/* - * wimax_get_netdev_by_info - lookup a wimax_dev from the gennetlink info - * - * the generic netlink family id has been filled out in the - * nlmsghdr->nlmsg_type field, so we pull it from there, look it up in - * the mapping table and reference the wimax_dev. - * - * when done, the reference should be dropped with - * 'dev_put(wimax_dev->net_dev)'. - */ -struct wimax_dev *wimax_dev_get_by_genl_info( - struct genl_info *info, int ifindex) -{ - struct wimax_dev *wimax_dev = null; - - d_fnstart(3, null, "(info %p ifindex %d) ", info, ifindex); - spin_lock(&wimax_id_table_lock); - list_for_each_entry(wimax_dev, &wimax_id_table, id_table_node) { - if (wimax_dev->net_dev->ifindex == ifindex) { - dev_hold(wimax_dev->net_dev); - goto found; - } - } - wimax_dev = null; - d_printf(1, null, "wimax: no devices found with ifindex %d ", - ifindex); -found: - spin_unlock(&wimax_id_table_lock); - d_fnend(3, null, "(info %p ifindex %d) = %p ", - info, ifindex, wimax_dev); - return wimax_dev; -} - - -/* - * wimax_id_table_rm - remove a gennetlink familiy id / wimax_dev mapping - * - * @id: family id to remove from the table - */ -void wimax_id_table_rm(struct wimax_dev *wimax_dev) -{ - spin_lock(&wimax_id_table_lock); - list_del_init(&wimax_dev->id_table_node); - spin_unlock(&wimax_id_table_lock); -} - - -/* - * release the gennetlink family id / mapping table - * - * on debug, verify that the table is empty upon removal. we want the - * code always compiled, to ensure it doesn't bit rot. it will be - * compiled out if config_bug is disabled. 
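The lookup the file comment describes is a plain linked-list walk keyed on the interface index. Stripped of `list_head`, the spinlock, and the `dev_hold()` refcounting, the pattern reduces to the sketch below (illustrative names, not the driver's):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace sketch of the wimax_dev_get_by_genl_info() walk: devices
 * sit on a singly linked list and are matched by interface index.
 * The struct and function names here are hypothetical. */
struct fake_dev {
	int ifindex;           /* net_dev->ifindex in the real code */
	struct fake_dev *next; /* list linkage */
};

static struct fake_dev *dev_get_by_ifindex(struct fake_dev *head, int ifindex)
{
	for (struct fake_dev *d = head; d; d = d->next)
		if (d->ifindex == ifindex)
			return d;
	return NULL;	/* not found: caller must handle it */
}
```

As the comment argues, with one (or two) adapters in the system this linear walk is effectively free, so no hash table is warranted.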
- */
-void wimax_id_table_release(void)
-{
-	struct wimax_dev *wimax_dev;
-
-#ifndef CONFIG_BUG
-	return;
-#endif
-	spin_lock(&wimax_id_table_lock);
-	list_for_each_entry(wimax_dev, &wimax_id_table, id_table_node) {
-		pr_err("BUG: %s wimax_dev %p ifindex %d not cleared\n",
-		       __func__, wimax_dev, wimax_dev->net_dev->ifindex);
-		WARN_ON(1);
-	}
-	spin_unlock(&wimax_id_table_lock);
-}
diff --git a/drivers/staging/wimax/linux-wimax-debug.h b/drivers/staging/wimax/linux-wimax-debug.h
--- a/drivers/staging/wimax/linux-wimax-debug.h
+++ /dev/null
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Linux WiMAX
- * Collection of tools to manage debug operations.
- *
- * Copyright (C) 2005-2007 Intel Corporation
- * Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
- *
- * Don't #include this file directly, read on!
- *
- * EXECUTING DEBUGGING ACTIONS OR NOT
- *
- * The main thing this framework provides is decision power to take a
- * debug action (like printing a message) if the current debug level
- * allows it.
- *
- * The decision power is at two levels: at compile-time (what does
- * not make it is compiled out) and at run-time. The run-time
- * selection is done per-submodule (as they are declared by the user
- * of the framework).
- *
- * A call to d_test(L) (L being the target debug level) returns true
- * if the action should be taken because the current debug levels
- * allow it (both compile and run time).
- *
- * It follows that a call to d_test() that can be determined to be
- * always false at compile time will get the code depending on it
- * compiled out by optimization.
- *
- * DEBUG LEVELS
- *
- * It is up to the caller to define how much a debugging level is.
- *
- * Convention sets 0 as "no debug" (so an action marked as debug level 0
- * will always be taken). The increasing debug levels are used for
- * increased verbosity.
- * - * usage - * - * group the code in modules and submodules inside each module [which - * in most cases maps to linux modules and .c files that compose - * those]. - * - * for each module, there is: - * - * - a modulename (single word, legal c identifier) - * - * - a debug-levels.h header file that declares the list of - * submodules and that is included by all .c files that use - * the debugging tools. the file name can be anything. - * - * - some (optional) .c code to manipulate the runtime debug levels - * through debugfs. - * - * the debug-levels.h file would look like: - * - * #ifndef __debug_levels__h__ - * #define __debug_levels__h__ - * - * #define d_modulename modulename - * #define d_master 10 - * - * #include "linux-wimax-debug.h" - * - * enum d_module { - * d_submodule_declare(submodule_1), - * d_submodule_declare(submodule_2), - * ... - * d_submodule_declare(submodule_n) - * }; - * - * #endif - * - * d_master is the maximum compile-time debug level; any debug actions - * above this will be out. d_modulename is the module name (legal c - * identifier), which has to be unique for each module (to avoid - * namespace collisions during linkage). note those #defines need to - * be done before #including debug.h - * - * we declare n different submodules whose debug level can be - * independently controlled during runtime. - * - * in a .c file of the module (and only in one of them), define the - * following code: - * - * struct d_level d_level[] = { - * d_submodule_define(submodule_1), - * d_submodule_define(submodule_2), - * ... - * d_submodule_define(submodule_n), - * }; - * size_t d_level_size = array_size(d_level); - * - * externs for d_level_modulename and d_level_size_modulename are used - * and declared in this file using the d_level and d_level_size macros - * #defined also in this file. 
- * - * to manipulate from user space the levels, create a debugfs dentry - * and then register each submodule with: - * - * d_level_register_debugfs("prefix_", submodule_x, parent); - * - * where prefix_ is a name of your chosing. this will create debugfs - * file with a single numeric value that can be use to tweak it. to - * remove the entires, just use debugfs_remove_recursive() on 'parent'. - * - * note: remember that even if this will show attached to some - * particular instance of a device, the settings are *global*. - * - * on each submodule (for example, .c files), the debug infrastructure - * should be included like this: - * - * #define d_submodule submodule_x // matches one in debug-levels.h - * #include "debug-levels.h" - * - * after #including all your include files. - * - * now you can use the d_*() macros below [d_test(), d_fnstart(), - * d_fnend(), d_printf(), d_dump()]. - * - * if their debug level is greater than d_master, they will be - * compiled out. - * - * if their debug level is lower or equal than d_master but greater - * than the current debug level of their submodule, they'll be - * ignored. - * - * otherwise, the action will be performed. - */ -#ifndef __debug__h__ -#define __debug__h__ - -#include <linux/types.h> -#include <linux/slab.h> - -struct device; - -/* backend stuff */ - -/* - * debug backend: generate a message header from a 'struct device' - * - * @head: buffer where to place the header - * @head_size: length of @head - * @dev: pointer to device used to generate a header from. if null, - * an empty ("") header is generated. 
- */
-static inline
-void __d_head(char *head, size_t head_size,
-	      struct device *dev)
-{
-	if (dev == NULL)
-		head[0] = 0;
-	else if ((unsigned long)dev < 4096) {
-		printk(KERN_ERR "E: corrupt dev %p\n", dev);
-		WARN_ON(1);
-	} else
-		snprintf(head, head_size, "%s %s: ",
-			 dev_driver_string(dev), dev_name(dev));
-}
-
-
-/*
- * debug backend: log some message if debugging is enabled
- *
- * @l: intended debug level
- * @tag: tag to prefix the message with
- * @dev: 'struct device' associated to this message
- * @f: printf-like format and arguments
- *
- * note this is optimized out if it doesn't pass the compile-time
- * check; however, it is *always* compiled. this is useful to make
- * sure the printf-like formats and variables are always checked and
- * they don't get bit rot if you have all the debugging disabled.
- */
-#define _d_printf(l, tag, dev, f, a...)					\
-do {									\
-	char head[64];							\
-	if (!d_test(l))							\
-		break;							\
-	__d_head(head, sizeof(head), dev);				\
-	printk(KERN_ERR "%s%s%s: " f, head, __func__, tag, ##a);	\
-} while (0)
-
-
-/*
- * cpp syntactic sugar to generate a_b like symbol names when one of
- * the arguments is a preprocessor #define.
- */
-#define __d_paste__(varname, modulename) varname##_##modulename
-#define __d_paste(varname, modulename) (__d_paste__(varname, modulename))
-#define _d_submodule_index(_name) (d_submodule_declare(_name))
-
-
-/*
- * store a submodule's runtime debug level and name
- */
-struct d_level {
-	u8 level;
-	const char *name;
-};
-
-
-/*
- * list of available submodules and their debug levels
- *
- * we call them d_level_modulename and d_level_size_modulename; the
- * macros d_level and d_level_size contain the name already for
- * convenience.
- *
- * this array and the size are defined on some .c file that is part of
- * the current module.
- */ -#define d_level __d_paste(d_level, d_modulename) -#define d_level_size __d_paste(d_level_size, d_modulename) - -extern struct d_level d_level[]; -extern size_t d_level_size; - - -/* - * frontend stuff - * - * - * stuff you need to declare prior to using the actual "debug" actions - * (defined below). - */ - -#ifndef d_modulename -#error d_modulename is not defined in your debug-levels.h file -/** - * d_module - name of the current module - * - * #define in your module's debug-levels.h, making sure it is - * unique. this has to be a legal c identifier. - */ -#define d_modulename undefined_modulename -#endif - - -#ifndef d_master -#warning d_master not defined, but debug.h included! [see docs] -/** - * d_master - compile time maximum debug level - * - * #define in your debug-levels.h file to the maximum debug level the - * runtime code will be allowed to have. this allows you to provide a - * main knob. - * - * anything above that level will be optimized out of the compile. - * - * defaults to zero (no debug code compiled in). - * - * maximum one definition per module (at the debug-levels.h file). - */ -#define d_master 0 -#endif - -#ifndef d_submodule -#error d_submodule not defined, but debug.h included! [see docs] -/** - * d_submodule - name of the current submodule - * - * #define in your submodule .c file before #including debug-levels.h - * to the name of the current submodule as previously declared and - * defined with d_submodule_declare() (in your module's - * debug-levels.h) and d_submodule_define(). - * - * this is used to provide runtime-control over the debug levels. - * - * maximum one per .c file! can be shared among different .c files - * (meaning they belong to the same submodule categorization). 
- */ -#define d_submodule undefined_module -#endif - - -/** - * d_submodule_declare - declare a submodule for runtime debug level control - * - * @_name: name of the submodule, restricted to the chars that make up a - * valid c identifier ([a-za-z0-9_]). - * - * declare in the module's debug-levels.h header file as: - * - * enum d_module { - * d_submodule_declare(submodule_1), - * d_submodule_declare(submodule_2), - * d_submodule_declare(submodule_3), - * }; - * - * some corresponding .c file needs to have a matching - * d_submodule_define(). - */ -#define d_submodule_declare(_name) __d_submodule_##_name - - -/** - * d_submodule_define - define a submodule for runtime debug level control - * - * @_name: name of the submodule, restricted to the chars that make up a - * valid c identifier ([a-za-z0-9_]). - * - * use once per module (in some .c file) as: - * - * static - * struct d_level d_level_submodulename[] = { - * d_submodule_define(submodule_1), - * d_submodule_define(submodule_2), - * d_submodule_define(submodule_3), - * }; - * size_t d_level_size_subdmodulename = array_size(d_level_subdmodulename); - * - * matching d_submodule_declare()s have to be present in a - * debug-levels.h header file. - */ -#define d_submodule_define(_name) \ -[__d_submodule_##_name] = { \ - .level = 0, \ - .name = #_name \ -} - - - -/* the actual "debug" operations */ - - -/** - * d_test - returns true if debugging should be enabled - * - * @l: intended debug level (unsigned) - * - * if the master debug switch is enabled and the current settings are - * higher or equal to the requested level, then debugging - * output/actions should be enabled. - * - * note: - * - * this needs to be coded so that it can be evaluated in compile - * time; this is why the ugly bug_on() is placed in there, so the - * d_master evaluation compiles all out if it is compile-time false. 
- */ -#define d_test(l) \ -({ \ - unsigned __l = l; /* type enforcer */ \ - (d_master) >= __l \ - && ({ \ - bug_on(_d_submodule_index(d_submodule) >= d_level_size);\ - d_level[_d_submodule_index(d_submodule)].level >= __l; \ - }); \ -}) - - -/** - * d_fnstart - log message at function start if debugging enabled - * - * @l: intended debug level - * @_dev: 'struct device' pointer, null if none (for context) - * @f: printf-like format and arguments - */ -#define d_fnstart(l, _dev, f, a...) _d_printf(l, " fnstart", _dev, f, ## a) - - -/** - * d_fnend - log message at function end if debugging enabled - * - * @l: intended debug level - * @_dev: 'struct device' pointer, null if none (for context) - * @f: printf-like format and arguments - */ -#define d_fnend(l, _dev, f, a...) _d_printf(l, " fnend", _dev, f, ## a) - - -/** - * d_printf - log message if debugging enabled - * - * @l: intended debug level - * @_dev: 'struct device' pointer, null if none (for context) - * @f: printf-like format and arguments - */ -#define d_printf(l, _dev, f, a...) _d_printf(l, "", _dev, f, ## a) - - -/** - * d_dump - log buffer hex dump if debugging enabled - * - * @l: intended debug level - * @_dev: 'struct device' pointer, null if none (for context) - * @f: printf-like format and arguments - */ -#define d_dump(l, dev, ptr, size) \ -do { \ - char head[64]; \ - if (!d_test(l)) \ - break; \ - __d_head(head, sizeof(head), dev); \ - print_hex_dump(kern_err, head, 0, 16, 1, \ - ((void *) ptr), (size), 0); \ -} while (0) - - -/** - * export a submodule's debug level over debugfs as prefixsubmodule - * - * @prefix: string to prefix the name with - * @submodule: name of submodule (not a string, just the name) - * @dentry: debugfs parent dentry - * - * for removing, just use debugfs_remove_recursive() on the parent. 
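As an editor's aside, the gating rule that d_test() implements (a compile-time cap, then a per-submodule runtime level check) can be sketched as a plain userspace function. Everything here is a made-up stand-in for illustration — `D_MASTER`, the table contents, and the submodule names are not the framework's real configuration, and the real macro additionally lets the compiler elide code when the compile-time comparison is false:

```c
#include <assert.h>

/* Hypothetical stand-ins mirroring the removed header's model:
 * a compile-time master level and a per-submodule runtime table. */
#define D_MASTER 5

struct d_level {
	unsigned char level;
	const char *name;
};

static struct d_level d_level[] = {
	[0] = { .level = 3, .name = "usb" },	/* chatty submodule */
	[1] = { .level = 0, .name = "rfkill" },	/* quiet submodule */
};

/* Nonzero when a message at level l for submodule idx should be
 * emitted: it must pass both the compile-time cap (D_MASTER) and
 * the runtime per-submodule level. */
static int d_test(unsigned idx, unsigned l)
{
	return D_MASTER >= l && d_level[idx].level >= l;
}
```

A level-0 action always passes for an enabled submodule (the "no debug" convention above), while anything above `D_MASTER` never does, regardless of the runtime setting.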
- */
-#define d_level_register_debugfs(prefix, name, parent)		\
-({								\
-	debugfs_create_u8(					\
-		prefix #name, 0600, parent,			\
-		&(d_level[__d_submodule_ ## name].level));	\
-})
-
-
-static inline
-void d_submodule_set(struct d_level *d_level, size_t d_level_size,
-		     const char *submodule, u8 level, const char *tag)
-{
-	struct d_level *itr, *top;
-	int index = -1;
-
-	for (itr = d_level, top = itr + d_level_size; itr < top; itr++) {
-		index++;
-		if (itr->name == NULL) {
-			printk(KERN_ERR "%s: itr->name NULL?? (%p, #%d)\n",
-			       tag, itr, index);
-			continue;
-		}
-		if (!strcmp(itr->name, submodule)) {
-			itr->level = level;
-			return;
-		}
-	}
-	printk(KERN_ERR "%s: unknown submodule %s\n", tag, submodule);
-}
-
-
-/**
- * d_parse_params - parse a string with debug parameters from the
- * command line
- *
- * @d_level: level structure (d_level)
- * @d_level_size: number of items in the level structure
- *     (d_level_size).
- * @_params: string with the parameters; this is a space (not tab!)
- *     separated list of name:value, where value is the debug level
- *     and name is the name of the submodule.
- * @tag: string for error messages (example: module.argname).
- */
-static inline
-void d_parse_params(struct d_level *d_level, size_t d_level_size,
-		    const char *_params, const char *tag)
-{
-	char submodule[130], *params, *params_orig, *token, *colon;
-	unsigned level, tokens;
-
-	if (_params == NULL)
-		return;
-	params_orig = kstrdup(_params, GFP_KERNEL);
-	params = params_orig;
-	while (1) {
-		token = strsep(&params, " ");
-		if (token == NULL)
-			break;
-		if (*token == '\0')	/* eat joint spaces */
-			continue;
-		/* kernel's sscanf %s eats until whitespace, so we
-		 * replace : by \n so it doesn't get eaten later by
-		 * strsep */
-		colon = strchr(token, ':');
-		if (colon != NULL)
-			*colon = '\n';
-		tokens = sscanf(token, "%s\n%u", submodule, &level);
-		if (colon != NULL)
-			*colon = ':';	/* set back, for error messages */
-		if (tokens == 2)
-			d_submodule_set(d_level, d_level_size,
-					submodule, level, tag);
-		else
-			printk(KERN_ERR "%s: can't parse '%s' as a "
-			       "submodule:level (%d tokens)\n",
-			       tag, token, tokens);
-	}
-	kfree(params_orig);
-}
-
-#endif /* #ifndef __debug__h__ */
diff --git a/drivers/staging/wimax/linux-wimax.h b/drivers/staging/wimax/linux-wimax.h
--- a/drivers/staging/wimax/linux-wimax.h
+++ /dev/null
-/*
- * linux wimax
- * api for user space
- *
- *
- * copyright (c) 2007-2008 intel corporation. all rights reserved.
- *
- * redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- *   * redistributions of source code must retain the above copyright
- *     notice, this list of conditions and the following disclaimer.
- *   * redistributions in binary form must reproduce the above copyright
- *     notice, this list of conditions and the following disclaimer in
- *     the documentation and/or other materials provided with the
- *     distribution.
- * * neither the name of intel corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * this software is provided by the copyright holders and contributors - * "as is" and any express or implied warranties, including, but not - * limited to, the implied warranties of merchantability and fitness for - * a particular purpose are disclaimed. in no event shall the copyright - * owner or contributors be liable for any direct, indirect, incidental, - * special, exemplary, or consequential damages (including, but not - * limited to, procurement of substitute goods or services; loss of use, - * data, or profits; or business interruption) however caused and on any - * theory of liability, whether in contract, strict liability, or tort - * (including negligence or otherwise) arising in any way out of the use - * of this software, even if advised of the possibility of such damage. - * - * - * intel corporation <linux-wimax@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - * - initial implementation - * - * - * this file declares the user/kernel protocol that is spoken over - * generic netlink, as well as any type declaration that is to be used - * by kernel and user space. - * - * it is intended for user space to clone it verbatim to use it as a - * primary reference for definitions. - * - * stuff intended for kernel usage as well as full protocol and stack - * documentation is rooted in include/net/wimax.h. - */ - -#ifndef __linux__wimax_h__ -#define __linux__wimax_h__ - -#include <linux/types.h> - -enum { - /** - * version of the interface (unsigned decimal, mmm, max 25.5) - * m - major: change if removing or modifying an existing call. 
- * m - minor: change when adding a new call - */ - wimax_gnl_version = 01, - /* generic netlink attributes */ - wimax_gnl_attr_invalid = 0x00, - wimax_gnl_attr_max = 10, -}; - - -/* - * generic netlink operations - * - * most of these map to an api call; _op_ stands for operation, _rp_ - * for reply and _re_ for report (aka: signal). - */ -enum { - wimax_gnl_op_msg_from_user, /* user to kernel message */ - wimax_gnl_op_msg_to_user, /* kernel to user message */ - wimax_gnl_op_rfkill, /* run wimax_rfkill() */ - wimax_gnl_op_reset, /* run wimax_rfkill() */ - wimax_gnl_re_state_change, /* report: status change */ - wimax_gnl_op_state_get, /* request for current state */ -}; - - -/* message from user / to user */ -enum { - wimax_gnl_msg_ifidx = 1, - wimax_gnl_msg_pipe_name, - wimax_gnl_msg_data, -}; - - -/* - * wimax_rfkill() - * - * the state of the radio (on/off) is mapped to the rfkill subsystem's - * switch state (disabled/enabled). - */ -enum wimax_rf_state { - wimax_rf_off = 0, /* radio is off, rfkill on/enabled */ - wimax_rf_on = 1, /* radio is on, rfkill off/disabled */ - wimax_rf_query = 2, -}; - -/* attributes */ -enum { - wimax_gnl_rfkill_ifidx = 1, - wimax_gnl_rfkill_state, -}; - - -/* attributes for wimax_reset() */ -enum { - wimax_gnl_reset_ifidx = 1, -}; - -/* attributes for wimax_state_get() */ -enum { - wimax_gnl_stget_ifidx = 1, -}; - -/* - * attributes for the report state change - * - * for now we just have the old and new states; new attributes might - * be added later on. - */ -enum { - wimax_gnl_stch_ifidx = 1, - wimax_gnl_stch_state_old, - wimax_gnl_stch_state_new, -}; - - -/** - * enum wimax_st - the different states of a wimax device - * @__wimax_st_null: the device structure has been allocated and zeroed, - * but still wimax_dev_add() hasn't been called. there is no state. 
- * - * @wimax_st_down: the device has been registered with the wimax and - * networking stacks, but it is not initialized (normally that is - * done with 'ifconfig dev up' [or equivalent], which can upload - * firmware and enable communications with the device). - * in this state, the device is powered down and using as less - * power as possible. - * this state is the default after a call to wimax_dev_add(). it - * is ok to have drivers move directly to %wimax_st_uninitialized - * or %wimax_st_radio_off in _probe() after the call to - * wimax_dev_add(). - * it is recommended that the driver leaves this state when - * calling 'ifconfig dev up' and enters it back on 'ifconfig dev - * down'. - * - * @__wimax_st_quiescing: the device is being torn down, so no api - * operations are allowed to proceed except the ones needed to - * complete the device clean up process. - * - * @wimax_st_uninitialized: [optional] communication with the device - * is setup, but the device still requires some configuration - * before being operational. - * some wimax api calls might work. - * - * @wimax_st_radio_off: the device is fully up; radio is off (wether - * by hardware or software switches). - * it is recommended to always leave the device in this state - * after initialization. - * - * @wimax_st_ready: the device is fully up and radio is on. - * - * @wimax_st_scanning: [optional] the device has been instructed to - * scan. in this state, the device cannot be actively connected to - * a network. - * - * @wimax_st_connecting: the device is connecting to a network. this - * state exists because in some devices, the connect process can - * include a number of negotiations between user space, kernel - * space and the device. user space needs to know what the device - * is doing. if the connect sequence in a device is atomic and - * fast, the device can transition directly to connected - * - * @wimax_st_connected: the device is connected to a network. 
- * - * @__wimax_st_invalid: this is an invalid state used to mark the - * maximum numeric value of states. - * - * description: - * - * transitions from one state to another one are atomic and can only - * be caused in kernel space with wimax_state_change(). to read the - * state, use wimax_state_get(). - * - * states starting with __ are internal and shall not be used or - * referred to by drivers or userspace. they look ugly, but that's the - * point -- if any use is made non-internal to the stack, it is easier - * to catch on review. - * - * all api operations [with well defined exceptions] will take the - * device mutex before starting and then check the state. if the state - * is %__wimax_st_null, %wimax_st_down, %wimax_st_uninitialized or - * %__wimax_st_quiescing, it will drop the lock and quit with - * -%einval, -%enomedium, -%enotconn or -%eshutdown. - * - * the order of the definitions is important, so we can do numerical - * comparisons (eg: < %wimax_st_radio_off means the device is not ready - * to operate). - */ -/* - * the allowed state transitions are described in the table below - * (states in rows can go to states in columns where there is an x): - * - * unini radio ready scan connec connec - * null down quiescing tialized off ning ting ted - * null - x - * down - x x x - * quiescing x - - * uninitialized x - x - * radio_off x - x - * ready x x - x x x - * scanning x x x - x x - * connecting x x x x - x - * connected x x x - - * - * this table not available in kernel-doc because the formatting messes it up. 
- */ - enum wimax_st { - __wimax_st_null = 0, - wimax_st_down, - __wimax_st_quiescing, - wimax_st_uninitialized, - wimax_st_radio_off, - wimax_st_ready, - wimax_st_scanning, - wimax_st_connecting, - wimax_st_connected, - __wimax_st_invalid /* always keep last */ -}; - - -#endif /* #ifndef __linux__wimax_h__ */ diff --git a/drivers/staging/wimax/net-wimax.h b/drivers/staging/wimax/net-wimax.h --- a/drivers/staging/wimax/net-wimax.h +++ /dev/null -/* spdx-license-identifier: gpl-2.0-only */ -/* - * linux wimax - * kernel space api for accessing wimax devices - * - * copyright (c) 2007-2008 intel corporation <linux-wimax@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - * - * the wimax stack provides an api for controlling and managing the - * system's wimax devices. this api affects the control plane; the - * data plane is accessed via the network stack (netdev). - * - * parts of the wimax stack api and notifications are exported to - * user space via generic netlink. in user space, libwimax (part of - * the wimax-tools package) provides a shim layer for accessing those - * calls. - * - * the api is standarized for all wimax devices and different drivers - * implement the backend support for it. however, device-specific - * messaging pipes are provided that can be used to issue commands and - * receive notifications in free form. - * - * currently the messaging pipes are the only means of control as it - * is not known (due to the lack of more devices in the market) what - * will be a good abstraction layer. expect this to change as more - * devices show in the market. this api is designed to be growable in - * order to address this problem. - * - * usage - * - * embed a 'struct wimax_dev' at the beginning of the device's - * private structure, initialize and register it. for details, see - * 'struct wimax_dev's documentation. - * - * once this is done, wimax-tools's libwimaxll can be used to - * communicate with the driver from user space. 
you user space - * application does not have to forcibily use libwimaxll and can talk - * the generic netlink protocol directly if desired. - * - * remember this is a very low level api that will to provide all of - * wimax features. other daemons and services running in user space - * are the expected clients of it. they offer a higher level api that - * applications should use (an example of this is the intel's wimax - * network service for the i2400m). - * - * design - * - * although not set on final stone, this very basic interface is - * mostly completed. remember this is meant to grow as new common - * operations are decided upon. new operations will be added to the - * interface, intent being on keeping backwards compatibility as much - * as possible. - * - * this layer implements a set of calls to control a wimax device, - * exposing a frontend to the rest of the kernel and user space (via - * generic netlink) and a backend implementation in the driver through - * function pointers. - * - * wimax devices have a state, and a kernel-only api allows the - * drivers to manipulate that state. state transitions are atomic, and - * only some of them are allowed (see 'enum wimax_st'). - * - * most api calls will set the state automatically; in most cases - * drivers have to only report state changes due to external - * conditions. - * - * all api operations are 'atomic', serialized through a mutex in the - * 'struct wimax_dev'. - * - * exporting to user space through generic netlink - * - * the api is exported to user space using generic netlink (other - * methods can be added as needed). - * - * there is a generic netlink family named "wimax", where interfaces - * supporting the wimax interface receive commands and broadcast their - * signals over a multicast group named "msg". - * - * mapping to the source/destination interface is done by an interface - * index attribute. 
- * - * for user-to-kernel traffic (commands) we use a function call - * marshalling mechanism, where a message x with attributes a, b, c - * sent from user space to kernel space means executing the wimax api - * call wimax_x(a, b, c), sending the results back as a message. - * - * kernel-to-user (notifications or signals) communication is sent - * over multicast groups. this allows to have multiple applications - * monitoring them. - * - * each command/signal gets assigned it's own attribute policy. this - * way the validator will verify that all the attributes in there are - * only the ones that should be for each command/signal. thing of an - * attribute mapping to a type+argumentname for each command/signal. - * - * if we had a single policy for *all* commands/signals, after running - * the validator we'd have to check "does this attribute belong in - * here"? for each one. it can be done manually, but it's just easier - * to have the validator do that job with multiple policies. as well, - * it makes it easier to later expand each command/signal signature - * without affecting others and keeping the namespace more or less - * sane. not that it is too complicated, but it makes it even easier. - * - * no state information is maintained in the kernel for each user - * space connection (the connection is stateless). - * - * testing for the interface and versioning - * - * if network interface x is a wimax device, there will be a generic - * netlink family named "wimax x" and the device will present a - * "wimax" directory in it's network sysfs directory - * (/sys/class/net/device/wimax) [used by hal]. - * - * the inexistence of any of these means the device does not support - * this wimax api. - * - * by querying the generic netlink controller, versioning information - * and the multicast groups available can be found. 
applications using - * the interface can either rely on that or use the generic netlink - * controller to figure out which generic netlink commands/signals are - * supported. - * - * note: this versioning is a last resort to avoid hard - * incompatibilities. it is the intention of the design of this - * stack not to introduce backward incompatible changes. - * - * the version code has to fit in one byte (restrictions imposed by - * generic netlink); we use 'version / 10' for the major version and - * 'version % 10' for the minor. this gives 9 minors for each major - * and 25 majors. - * - * the version change protocol is as follow: - * - * - major versions: needs to be increased if an existing message/api - * call is changed or removed. doesn't need to be changed if a new - * message is added. - * - * - minor version: needs to be increased if new messages/api calls are - * being added or some other consideration that doesn't impact the - * user-kernel interface too much (like some kind of bug fix) and - * that is kind of left up in the air to common sense. - * - * user space code should not try to work if the major version it was - * compiled for differs from what the kernel offers. as well, if the - * minor version of the kernel interface is lower than the one user - * space is expecting (the one it was compiled for), the kernel - * might be missing api calls; user space shall be ready to handle - * said condition. use the generic netlink controller operations to - * find which ones are supported and which not. - * - * libwimaxll:wimaxll_open() takes care of checking versions. - * - * the operations: - * - * each operation is defined in its on file (drivers/net/wimax/op-*.c) - * for clarity. the parts needed for an operation are: - * - * - a function pointer in 'struct wimax_dev': optional, as the - * operation might be implemented by the stack and not by the - * driver. 
- * - * all function pointers are named wimax_dev->op_*(), and drivers - * must implement them except where noted otherwise. - * - * - when exported to user space, a 'struct nla_policy' to define the - * attributes of the generic netlink command and a 'struct genl_ops' - * to define the operation. - * - * all the declarations for the operation codes (wimax_gnl_op_<name>) - * and generic netlink attributes (wimax_gnl_<name>_*) are declared in - * include/linux/wimax.h; this file is intended to be cloned by user - * space to gain access to those declarations. - * - * a few caveats to remember: - * - * - need to define attribute numbers starting in 1; otherwise it - * fails. - * - * - the 'struct genl_family' requires a maximum attribute id; when - * defining the 'struct nla_policy' for each message, it has to have - * an array size of wimax_gnl_attr_max+1. - * - * the op_*() function pointers will not be called if the wimax_dev is - * in a state <= %wimax_st_uninitialized. the exception is: - * - * - op_reset: can be called at any time after wimax_dev_add() has - * been called. - * - * the pipe interface: - * - * this interface is kept intentionally simple. the driver can send - * and receive free-form messages to/from user space through a - * pipe. see drivers/net/wimax/op-msg.c for details. - * - * the kernel-to-user messages are sent with - * wimax_msg(). user-to-kernel messages are delivered via - * wimax_dev->op_msg_from_user(). - * - * rfkill: - * - * rfkill support is built into the wimax_dev layer; the driver just - * needs to call wimax_report_rfkill_{hw,sw}() to inform of changes in - * the hardware or software rf kill switches. when the stack wants to - * turn the radio off, it will call wimax_dev->op_rfkill_sw_toggle(), - * which the driver implements. - * - * user space can set the software rf kill switch by calling - * wimax_rfkill(). 
- * - * the code for now only supports devices that don't require polling; - * if the device needs to be polled, create a self-rearming delayed - * work struct for polling or look into adding polled support to the - * wimax stack. - * - * when initializing the hardware (_probe), after calling - * wimax_dev_add(), query the device for it's rf kill switches status - * and feed it back to the wimax stack using - * wimax_report_rfkill_{hw,sw}(). if any switch is missing, always - * report it as on. - * - * note: the wimax stack uses an inverted terminology to that of the - * rfkill subsystem: - * - * - on: radio is on, rfkill is disabled or off. - * - off: radio is off, rfkill is enabled or on. - * - * miscellaneous ops: - * - * wimax_reset() can be used to reset the device to power on state; by - * default it issues a warm reset that maintains the same device - * node. if that is not possible, it falls back to a cold reset - * (device reconnect). the driver implements the backend to this - * through wimax_dev->op_reset(). - */ - -#ifndef __net__wimax_h__ -#define __net__wimax_h__ - -#include "linux-wimax.h" -#include <net/genetlink.h> -#include <linux/netdevice.h> - -struct net_device; -struct genl_info; -struct wimax_dev; - -/** - * struct wimax_dev - generic wimax device - * - * @net_dev: [fill] pointer to the &struct net_device this wimax - * device implements. - * - * @op_msg_from_user: [fill] driver-specific operation to - * handle a raw message from user space to the driver. the - * driver can send messages to user space using with - * wimax_msg_to_user(). - * - * @op_rfkill_sw_toggle: [fill] driver-specific operation to act on - * userspace (or any other agent) requesting the wimax device to - * change the rf kill software switch (wimax_rf_on or - * wimax_rf_off). - * if such hardware support is not present, it is assumed the - * radio cannot be switched off and it is always on (and the stack - * will error out when trying to switch it off). 
in such case, - * this function pointer can be left as null. - * - * @op_reset: [fill] driver specific operation to reset the - * device. - * this operation should always attempt first a warm reset that - * does not disconnect the device from the bus and return 0. - * if that fails, it should resort to some sort of cold or bus - * reset (even if it implies a bus disconnection and device - * disappearance). in that case, -enodev should be returned to - * indicate the device is gone. - * this operation has to be synchronous, and return only when the - * reset is complete. in case of having had to resort to bus/cold - * reset implying a device disconnection, the call is allowed to - * return immediately. - * note: wimax_dev->mutex is not locked when this op is being - * called; however, wimax_dev->mutex_reset is locked to ensure - * serialization of calls to wimax_reset(). - * see wimax_reset()'s documentation. - * - * @name: [fill] a way to identify this device. we need to register a - * name with many subsystems (rfkill, workqueue creation, etc). - * we can't use the network device name as that - * might change and in some instances we don't know it yet (until - * we don't call register_netdev()). so we generate an unique one - * using the driver name and device bus id, place it here and use - * it across the board. recommended naming: - * drivername-busname:busid (dev->bus->name, dev->bus_id). - * - * @id_table_node: [private] link to the list of wimax devices kept by - * id-table.c. protected by it's own spinlock. - * - * @mutex: [private] serializes all concurrent access and execution of - * operations. - * - * @mutex_reset: [private] serializes reset operations. needs to be a - * different mutex because as part of the reset operation, the - * driver has to call back into the stack to do things such as - * state change, that require wimax_dev->mutex. - * - * @state: [private] current state of the wimax device. 
- * - * @rfkill: [private] integration into the rf-kill infrastructure. - * - * @rf_sw: [private] state of the software radio switch (off/on) - * - * @rf_hw: [private] state of the hardware radio switch (off/on) - * - * @debugfs_dentry: [private] used to hook up a debugfs entry. this - * shows up in the debugfs root as wimax\:devicename. - * - * description: - * this structure defines a common interface to access all wimax - * devices from different vendors and provides a common api as well as - * a free-form device-specific messaging channel. - * - * usage: - * 1. embed a &struct wimax_dev at *the beginning* the network - * device structure so that netdev_priv() points to it. - * - * 2. memset() it to zero - * - * 3. initialize with wimax_dev_init(). this will leave the wimax - * device in the %__wimax_st_null state. - * - * 4. fill all the fields marked with [fill]; once called - * wimax_dev_add(), those fields cannot be modified. - * - * 5. call wimax_dev_add() *after* registering the network - * device. this will leave the wimax device in the %wimax_st_down - * state. - * protect the driver's net_device->open() against succeeding if - * the wimax device state is lower than %wimax_st_down. - * - * 6. select when the device is going to be turned on/initialized; - * for example, it could be initialized on 'ifconfig up' (when the - * netdev op 'open()' is called on the driver). - * - * when the device is initialized (at 'ifconfig up' time, or right - * after calling wimax_dev_add() from _probe(), make sure the - * following steps are taken - * - * a. move the device to %wimax_st_uninitialized. this is needed so - * some api calls that shouldn't work until the device is ready - * can be blocked. - * - * b. initialize the device. make sure to turn the sw radio switch - * off and move the device to state %wimax_st_radio_off when - * done. when just initialized, a device should be left in radio - * off state until user space devices to turn it on. - * - * c. 
query the device for the state of the hardware rfkill switch - * and call wimax_rfkill_report_hw() and wimax_rfkill_report_sw() - * as needed. see below. - * - * wimax_dev_rm() undoes before unregistering the network device. once - * wimax_dev_add() is called, the driver can get called on the - * wimax_dev->op_* function pointers - * - * concurrency: - * - * the stack provides a mutex for each device that will disallow api - * calls happening concurrently; thus, op calls into the driver - * through the wimax_dev->op*() function pointers will always be - * serialized and *never* concurrent. - * - * for locking, take wimax_dev->mutex is taken; (most) operations in - * the api have to check for wimax_dev_is_ready() to return 0 before - * continuing (this is done internally). - * - * reference counting: - * - * the wimax device is reference counted by the associated network - * device. the only operation that can be used to reference the device - * is wimax_dev_get_by_genl_info(), and the reference it acquires has - * to be released with dev_put(wimax_dev->net_dev). - * - * rfkill: - * - * at startup, both hw and sw radio switchess are assumed to be off. - * - * at initialization time [after calling wimax_dev_add()], have the - * driver query the device for the status of the software and hardware - * rf kill switches and call wimax_report_rfkill_hw() and - * wimax_rfkill_report_sw() to indicate their state. if any is - * missing, just call it to indicate it is on (radio always on). - * - * whenever the driver detects a change in the state of the rf kill - * switches, it should call wimax_report_rfkill_hw() or - * wimax_report_rfkill_sw() to report it to the stack. 
- */ -struct wimax_dev { - struct net_device *net_dev; - struct list_head id_table_node; - struct mutex mutex; /* protects all members and api calls */ - struct mutex mutex_reset; - enum wimax_st state; - - int (*op_msg_from_user)(struct wimax_dev *wimax_dev, - const char *, - const void *, size_t, - const struct genl_info *info); - int (*op_rfkill_sw_toggle)(struct wimax_dev *wimax_dev, - enum wimax_rf_state); - int (*op_reset)(struct wimax_dev *wimax_dev); - - struct rfkill *rfkill; - unsigned int rf_hw; - unsigned int rf_sw; - char name[32]; - - struct dentry *debugfs_dentry; -}; - - - -/* - * wimax stack public api for device drivers - * ----------------------------------------- - * - * these functions are not exported to user space. - */ -void wimax_dev_init(struct wimax_dev *); -int wimax_dev_add(struct wimax_dev *, struct net_device *); -void wimax_dev_rm(struct wimax_dev *); - -static inline -struct wimax_dev *net_dev_to_wimax(struct net_device *net_dev) -{ - return netdev_priv(net_dev); -} - -static inline -struct device *wimax_dev_to_dev(struct wimax_dev *wimax_dev) -{ - return wimax_dev->net_dev->dev.parent; -} - -void wimax_state_change(struct wimax_dev *, enum wimax_st); -enum wimax_st wimax_state_get(struct wimax_dev *); - -/* - * radio switch state reporting. - * - * enum wimax_rf_state is declared in linux/wimax.h so the exports - * to user space can use it. - */ -void wimax_report_rfkill_hw(struct wimax_dev *, enum wimax_rf_state); -void wimax_report_rfkill_sw(struct wimax_dev *, enum wimax_rf_state); - - -/* - * free-form messaging to/from user space - * - * sending a message: - * - * wimax_msg(wimax_dev, pipe_name, buf, buf_size, gfp_kernel); - * - * broken up: - * - * skb = wimax_msg_alloc(wimax_dev, pipe_name, buf_size, gfp_kernel); - * ...fill up skb... - * wimax_msg_send(wimax_dev, pipe_name, skb); - * - * be sure not to modify skb->data in the middle (ie: don't use - * skb_push()/skb_pull()/skb_reserve() on the skb). 
- * - * "pipe_name" is any string, that can be interpreted as the name of - * the pipe or recipient; the interpretation of it is driver - * specific, so the recipient can multiplex it as wished. it can be - * null, it won't be used - an example is using a "diagnostics" tag to - * send diagnostics information that a device-specific diagnostics - * tool would be interested in. - */ -struct sk_buff *wimax_msg_alloc(struct wimax_dev *, const char *, const void *, - size_t, gfp_t); -int wimax_msg_send(struct wimax_dev *, struct sk_buff *); -int wimax_msg(struct wimax_dev *, const char *, const void *, size_t, gfp_t); - -const void *wimax_msg_data_len(struct sk_buff *, size_t *); -const void *wimax_msg_data(struct sk_buff *); -ssize_t wimax_msg_len(struct sk_buff *); - - -/* - * wimax stack user space api - * -------------------------- - * - * this api is what gets exported to user space for general - * operations. as well, they can be called from within the kernel, - * (with a properly referenced 'struct wimax_dev'). - * - * properly referenced means: the 'struct net_device' that embeds the - * device's control structure and (as such) the 'struct wimax_dev' is - * referenced by the caller. - */ -int wimax_rfkill(struct wimax_dev *, enum wimax_rf_state); -int wimax_reset(struct wimax_dev *); - -#endif /* #ifndef __net__wimax_h__ */ diff --git a/drivers/staging/wimax/op-msg.c b/drivers/staging/wimax/op-msg.c --- a/drivers/staging/wimax/op-msg.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-only -/* - * linux wimax - * generic messaging interface between userspace and driver/device - * - * copyright (c) 2007-2008 intel corporation <linux-wimax@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - * - * this implements a direct communication channel between user space and - * the driver/device, by which free form messages can be sent back and - * forth. - * - * this is intended for device-specific features, vendor quirks, etc. 
- * - * see include/net/wimax.h - * - * generic netlink encoding and capacity - * - * a destination "pipe name" is added to each message; it is up to the - * drivers to assign or use those names (if using them at all). - * - * messages are encoded as a binary netlink attribute using nla_put() - * using type nla_unspec (as some versions of libnl still in - * deployment don't yet understand nla_binary). - * - * the maximum capacity of this transport is pagesize per message (so - * the actual payload will be bit smaller depending on the - * netlink/generic netlink attributes and headers). - * - * reception of messages - * - * when a message is received from user space, it is passed verbatim - * to the driver calling wimax_dev->op_msg_from_user(). the return - * value from this function is passed back to user space as an ack - * over the generic netlink protocol. - * - * the stack doesn't do any processing or interpretation of these - * messages. - * - * sending messages - * - * messages can be sent with wimax_msg(). - * - * if the message delivery needs to happen on a different context to - * that of its creation, wimax_msg_alloc() can be used to get a - * pointer to the message that can be delivered later on with - * wimax_msg_send(). 
- * - * roadmap - * - * wimax_gnl_doit_msg_from_user() process a message from user space - * wimax_dev_get_by_genl_info() - * wimax_dev->op_msg_from_user() delivery of message to the driver - * - * wimax_msg() send a message to user space - * wimax_msg_alloc() - * wimax_msg_send() - */ -#include <linux/device.h> -#include <linux/slab.h> -#include <net/genetlink.h> -#include <linux/netdevice.h> -#include "linux-wimax.h" -#include <linux/security.h> -#include <linux/export.h> -#include "wimax-internal.h" - - -#define d_submodule op_msg -#include "debug-levels.h" - - -/** - * wimax_msg_alloc - create a new skb for sending a message to userspace - * - * @wimax_dev: wimax device descriptor - * @pipe_name: "named pipe" the message will be sent to - * @msg: pointer to the message data to send - * @size: size of the message to send (in bytes), including the header. - * @gfp_flags: flags for memory allocation. - * - * returns: %0 if ok, negative errno code on error - * - * description: - * - * allocates an skb that will contain the message to send to user - * space over the messaging pipe and initializes it, copying the - * payload. - * - * once this call is done, you can deliver it with - * wimax_msg_send(). - * - * important: - * - * don't use skb_push()/skb_pull()/skb_reserve() on the skb, as - * wimax_msg_send() depends on skb->data being placed at the - * beginning of the user message. - * - * unlike other wimax stack calls, this call can be used way early, - * even before wimax_dev_add() is called, as long as the - * wimax_dev->net_dev pointer is set to point to a proper - * net_dev. this is so that drivers can use it early in case they need - * to send stuff around or communicate with user space. 
- */ -struct sk_buff *wimax_msg_alloc(struct wimax_dev *wimax_dev, - const char *pipe_name, - const void *msg, size_t size, - gfp_t gfp_flags) -{ - int result; - struct device *dev = wimax_dev_to_dev(wimax_dev); - size_t msg_size; - void *genl_msg; - struct sk_buff *skb; - - msg_size = nla_total_size(size) - + nla_total_size(sizeof(u32)) - + (pipe_name ? nla_total_size(strlen(pipe_name)) : 0); - result = -enomem; - skb = genlmsg_new(msg_size, gfp_flags); - if (skb == null) - goto error_new; - genl_msg = genlmsg_put(skb, 0, 0, &wimax_gnl_family, - 0, wimax_gnl_op_msg_to_user); - if (genl_msg == null) { - dev_err(dev, "no memory to create generic netlink message "); - goto error_genlmsg_put; - } - result = nla_put_u32(skb, wimax_gnl_msg_ifidx, - wimax_dev->net_dev->ifindex); - if (result < 0) { - dev_err(dev, "no memory to add ifindex attribute "); - goto error_nla_put; - } - if (pipe_name) { - result = nla_put_string(skb, wimax_gnl_msg_pipe_name, - pipe_name); - if (result < 0) { - dev_err(dev, "no memory to add pipe_name attribute "); - goto error_nla_put; - } - } - result = nla_put(skb, wimax_gnl_msg_data, size, msg); - if (result < 0) { - dev_err(dev, "no memory to add payload (msg %p size %zu) in " - "attribute: %d ", msg, size, result); - goto error_nla_put; - } - genlmsg_end(skb, genl_msg); - return skb; - -error_nla_put: -error_genlmsg_put: -error_new: - nlmsg_free(skb); - return err_ptr(result); -} -export_symbol_gpl(wimax_msg_alloc); - - -/** - * wimax_msg_data_len - return a pointer and size of a message's payload - * - * @msg: pointer to a message created with wimax_msg_alloc() - * @size: pointer to where to store the message's size - * - * returns the pointer to the message data. 
- */ -const void *wimax_msg_data_len(struct sk_buff *msg, size_t *size) -{ - struct nlmsghdr *nlh = (void *) msg->head; - struct nlattr *nla; - - nla = nlmsg_find_attr(nlh, sizeof(struct genlmsghdr), - wimax_gnl_msg_data); - if (nla == null) { - pr_err("cannot find attribute wimax_gnl_msg_data "); - return null; - } - *size = nla_len(nla); - return nla_data(nla); -} -export_symbol_gpl(wimax_msg_data_len); - - -/** - * wimax_msg_data - return a pointer to a message's payload - * - * @msg: pointer to a message created with wimax_msg_alloc() - */ -const void *wimax_msg_data(struct sk_buff *msg) -{ - struct nlmsghdr *nlh = (void *) msg->head; - struct nlattr *nla; - - nla = nlmsg_find_attr(nlh, sizeof(struct genlmsghdr), - wimax_gnl_msg_data); - if (nla == null) { - pr_err("cannot find attribute wimax_gnl_msg_data "); - return null; - } - return nla_data(nla); -} -export_symbol_gpl(wimax_msg_data); - - -/** - * wimax_msg_len - return a message's payload length - * - * @msg: pointer to a message created with wimax_msg_alloc() - */ -ssize_t wimax_msg_len(struct sk_buff *msg) -{ - struct nlmsghdr *nlh = (void *) msg->head; - struct nlattr *nla; - - nla = nlmsg_find_attr(nlh, sizeof(struct genlmsghdr), - wimax_gnl_msg_data); - if (nla == null) { - pr_err("cannot find attribute wimax_gnl_msg_data "); - return -einval; - } - return nla_len(nla); -} -export_symbol_gpl(wimax_msg_len); - - -/** - * wimax_msg_send - send a pre-allocated message to user space - * - * @wimax_dev: wimax device descriptor - * - * @skb: &struct sk_buff returned by wimax_msg_alloc(). note the - * ownership of @skb is transferred to this function. - * - * returns: 0 if ok, < 0 errno code on error - * - * description: - * - * sends a free-form message that was preallocated with - * wimax_msg_alloc() and filled up. - * - * assumes that once you pass an skb to this function for sending, it - * owns it and will release it when done (on success). 
- * - * important: - * - * don't use skb_push()/skb_pull()/skb_reserve() on the skb, as - * wimax_msg_send() depends on skb->data being placed at the - * beginning of the user message. - * - * unlike other wimax stack calls, this call can be used way early, - * even before wimax_dev_add() is called, as long as the - * wimax_dev->net_dev pointer is set to point to a proper - * net_dev. this is so that drivers can use it early in case they need - * to send stuff around or communicate with user space. - */ -int wimax_msg_send(struct wimax_dev *wimax_dev, struct sk_buff *skb) -{ - struct device *dev = wimax_dev_to_dev(wimax_dev); - void *msg = skb->data; - size_t size = skb->len; - might_sleep(); - - d_printf(1, dev, "ctx: wimax msg, %zu bytes ", size); - d_dump(2, dev, msg, size); - genlmsg_multicast(&wimax_gnl_family, skb, 0, 0, gfp_kernel); - d_printf(1, dev, "ctx: genl multicast done "); - return 0; -} -export_symbol_gpl(wimax_msg_send); - - -/** - * wimax_msg - send a message to user space - * - * @wimax_dev: wimax device descriptor (properly referenced) - * @pipe_name: "named pipe" the message will be sent to - * @buf: pointer to the message to send. - * @size: size of the buffer pointed to by @buf (in bytes). - * @gfp_flags: flags for memory allocation. - * - * returns: %0 if ok, negative errno code on error. - * - * description: - * - * sends a free-form message to user space on the device @wimax_dev. - * - * notes: - * - * once the @skb is given to this function, who will own it and will - * release it when done (unless it returns error). 
- */ -int wimax_msg(struct wimax_dev *wimax_dev, const char *pipe_name, - const void *buf, size_t size, gfp_t gfp_flags) -{ - int result = -enomem; - struct sk_buff *skb; - - skb = wimax_msg_alloc(wimax_dev, pipe_name, buf, size, gfp_flags); - if (is_err(skb)) - result = ptr_err(skb); - else - result = wimax_msg_send(wimax_dev, skb); - return result; -} -export_symbol_gpl(wimax_msg); - -/* - * relays a message from user space to the driver - * - * the skb is passed to the driver-specific function with the netlink - * and generic netlink headers already stripped. - * - * this call will block while handling/relaying the message. - */ -int wimax_gnl_doit_msg_from_user(struct sk_buff *skb, struct genl_info *info) -{ - int result, ifindex; - struct wimax_dev *wimax_dev; - struct device *dev; - struct nlmsghdr *nlh = info->nlhdr; - char *pipe_name; - void *msg_buf; - size_t msg_len; - - might_sleep(); - d_fnstart(3, null, "(skb %p info %p) ", skb, info); - result = -enodev; - if (info->attrs[wimax_gnl_msg_ifidx] == null) { - pr_err("wimax_gnl_msg_from_user: can't find ifidx attribute "); - goto error_no_wimax_dev; - } - ifindex = nla_get_u32(info->attrs[wimax_gnl_msg_ifidx]); - wimax_dev = wimax_dev_get_by_genl_info(info, ifindex); - if (wimax_dev == null) - goto error_no_wimax_dev; - dev = wimax_dev_to_dev(wimax_dev); - - /* unpack arguments */ - result = -einval; - if (info->attrs[wimax_gnl_msg_data] == null) { - dev_err(dev, "wimax_gnl_msg_from_user: can't find msg_data " - "attribute "); - goto error_no_data; - } - msg_buf = nla_data(info->attrs[wimax_gnl_msg_data]); - msg_len = nla_len(info->attrs[wimax_gnl_msg_data]); - - if (info->attrs[wimax_gnl_msg_pipe_name] == null) - pipe_name = null; - else { - struct nlattr *attr = info->attrs[wimax_gnl_msg_pipe_name]; - size_t attr_len = nla_len(attr); - /* libnl-1.1 does not yet support nla_nul_string */ - result = -enomem; - pipe_name = kstrndup(nla_data(attr), attr_len + 1, gfp_kernel); - if (pipe_name == null) - goto 
error_alloc; - pipe_name[attr_len] = 0; - } - mutex_lock(&wimax_dev->mutex); - result = wimax_dev_is_ready(wimax_dev); - if (result == -enomedium) - result = 0; - if (result < 0) - goto error_not_ready; - result = -enosys; - if (wimax_dev->op_msg_from_user == null) - goto error_noop; - - d_printf(1, dev, - "crx: nlmsghdr len %u type %u flags 0x%04x seq 0x%x pid %u ", - nlh->nlmsg_len, nlh->nlmsg_type, nlh->nlmsg_flags, - nlh->nlmsg_seq, nlh->nlmsg_pid); - d_printf(1, dev, "crx: wimax message %zu bytes ", msg_len); - d_dump(2, dev, msg_buf, msg_len); - - result = wimax_dev->op_msg_from_user(wimax_dev, pipe_name, - msg_buf, msg_len, info); -error_noop: -error_not_ready: - mutex_unlock(&wimax_dev->mutex); -error_alloc: - kfree(pipe_name); -error_no_data: - dev_put(wimax_dev->net_dev); -error_no_wimax_dev: - d_fnend(3, null, "(skb %p info %p) = %d ", skb, info, result); - return result; -} diff --git a/drivers/staging/wimax/op-reset.c b/drivers/staging/wimax/op-reset.c --- a/drivers/staging/wimax/op-reset.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-only -/* - * linux wimax - * implement and export a method for resetting a wimax device - * - * copyright (c) 2008 intel corporation <linux-wimax@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - * - * this implements a simple synchronous call to reset a wimax device. - * - * resets aim at being warm, keeping the device handles active; - * however, when that fails, it falls back to a cold reset (that will - * disconnect and reconnect the device). - */ - -#include "net-wimax.h" -#include <net/genetlink.h> -#include "linux-wimax.h" -#include <linux/security.h> -#include <linux/export.h> -#include "wimax-internal.h" - -#define d_submodule op_reset -#include "debug-levels.h" - - -/** - * wimax_reset - reset a wimax device - * - * @wimax_dev: wimax device descriptor - * - * returns: - * - * %0 if ok and a warm reset was done (the device still exists in - * the system). 
- * - * -%enodev if a cold/bus reset had to be done (device has - * disconnected and reconnected, so current handle is not valid - * any more). - * - * -%einval if the device is not even registered. - * - * any other negative error code shall be considered as - * non-recoverable. - * - * description: - * - * called when wanting to reset the device for any reason. device is - * taken back to power on status. - * - * this call blocks; on successful return, the device has completed the - * reset process and is ready to operate. - */ -int wimax_reset(struct wimax_dev *wimax_dev) -{ - int result = -einval; - struct device *dev = wimax_dev_to_dev(wimax_dev); - enum wimax_st state; - - might_sleep(); - d_fnstart(3, dev, "(wimax_dev %p) ", wimax_dev); - mutex_lock(&wimax_dev->mutex); - dev_hold(wimax_dev->net_dev); - state = wimax_dev->state; - mutex_unlock(&wimax_dev->mutex); - - if (state >= wimax_st_down) { - mutex_lock(&wimax_dev->mutex_reset); - result = wimax_dev->op_reset(wimax_dev); - mutex_unlock(&wimax_dev->mutex_reset); - } - dev_put(wimax_dev->net_dev); - - d_fnend(3, dev, "(wimax_dev %p) = %d ", wimax_dev, result); - return result; -} -export_symbol(wimax_reset); - - -/* - * exporting to user space over generic netlink - * - * parse the reset command from user space, return error code. - * - * no attributes. 
- */ -int wimax_gnl_doit_reset(struct sk_buff *skb, struct genl_info *info) -{ - int result, ifindex; - struct wimax_dev *wimax_dev; - - d_fnstart(3, null, "(skb %p info %p) ", skb, info); - result = -enodev; - if (info->attrs[wimax_gnl_reset_ifidx] == null) { - pr_err("wimax_gnl_op_rfkill: can't find ifidx attribute "); - goto error_no_wimax_dev; - } - ifindex = nla_get_u32(info->attrs[wimax_gnl_reset_ifidx]); - wimax_dev = wimax_dev_get_by_genl_info(info, ifindex); - if (wimax_dev == null) - goto error_no_wimax_dev; - /* execute the operation and send the result back to user space */ - result = wimax_reset(wimax_dev); - dev_put(wimax_dev->net_dev); -error_no_wimax_dev: - d_fnend(3, null, "(skb %p info %p) = %d ", skb, info, result); - return result; -} diff --git a/drivers/staging/wimax/op-rfkill.c b/drivers/staging/wimax/op-rfkill.c --- a/drivers/staging/wimax/op-rfkill.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-only -/* - * linux wimax - * rf-kill framework integration - * - * copyright (c) 2008 intel corporation <linux-wimax@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - * - * this integrates into the linux kernel rfkill susbystem so that the - * drivers just have to do the bare minimal work, which is providing a - * method to set the software rf-kill switch and to report changes in - * the software and hardware switch status. - * - * a non-polled generic rfkill device is embedded into the wimax - * subsystem's representation of a device. - * - * fixme: need polled support? let drivers provide a poll routine - * and hand it to rfkill ops then? - * - * all device drivers have to do is after wimax_dev_init(), call - * wimax_report_rfkill_hw() and wimax_report_rfkill_sw() to update - * initial state and then every time it changes. see wimax.h:struct - * wimax_dev for more information. 
- * - * roadmap - * - * wimax_gnl_doit_rfkill() user space calling wimax_rfkill() - * wimax_rfkill() kernel calling wimax_rfkill() - * __wimax_rf_toggle_radio() - * - * wimax_rfkill_set_radio_block() rf-kill subsystem calling - * __wimax_rf_toggle_radio() - * - * __wimax_rf_toggle_radio() - * wimax_dev->op_rfkill_sw_toggle() driver backend - * __wimax_state_change() - * - * wimax_report_rfkill_sw() driver reports state change - * __wimax_state_change() - * - * wimax_report_rfkill_hw() driver reports state change - * __wimax_state_change() - * - * wimax_rfkill_add() initialize/shutdown rfkill support - * wimax_rfkill_rm() [called by wimax_dev_add/rm()] - */ - -#include "net-wimax.h" -#include <net/genetlink.h> -#include "linux-wimax.h" -#include <linux/security.h> -#include <linux/rfkill.h> -#include <linux/export.h> -#include "wimax-internal.h" - -#define d_submodule op_rfkill -#include "debug-levels.h" - -/** - * wimax_report_rfkill_hw - reports changes in the hardware rf switch - * - * @wimax_dev: wimax device descriptor - * - * @state: new state of the rf kill switch. %wimax_rf_on radio on, - * %wimax_rf_off radio off. - * - * when the device detects a change in the state of thehardware rf - * switch, it must call this function to let the wimax kernel stack - * know that the state has changed so it can be properly propagated. - * - * the wimax stack caches the state (the driver doesn't need to). as - * well, as the change is propagated it will come back as a request to - * change the software state to mirror the hardware state. - * - * if the device doesn't have a hardware kill switch, just report - * it on initialization as always on (%wimax_rf_on, radio on). 
- */ -void wimax_report_rfkill_hw(struct wimax_dev *wimax_dev, - enum wimax_rf_state state) -{ - int result; - struct device *dev = wimax_dev_to_dev(wimax_dev); - enum wimax_st wimax_state; - - d_fnstart(3, dev, "(wimax_dev %p state %u) ", wimax_dev, state); - bug_on(state == wimax_rf_query); - bug_on(state != wimax_rf_on && state != wimax_rf_off); - - mutex_lock(&wimax_dev->mutex); - result = wimax_dev_is_ready(wimax_dev); - if (result < 0) - goto error_not_ready; - - if (state != wimax_dev->rf_hw) { - wimax_dev->rf_hw = state; - if (wimax_dev->rf_hw == wimax_rf_on && - wimax_dev->rf_sw == wimax_rf_on) - wimax_state = wimax_st_ready; - else - wimax_state = wimax_st_radio_off; - - result = rfkill_set_hw_state(wimax_dev->rfkill, - state == wimax_rf_off); - - __wimax_state_change(wimax_dev, wimax_state); - } -error_not_ready: - mutex_unlock(&wimax_dev->mutex); - d_fnend(3, dev, "(wimax_dev %p state %u) = void [%d] ", - wimax_dev, state, result); -} -export_symbol_gpl(wimax_report_rfkill_hw); - - -/** - * wimax_report_rfkill_sw - reports changes in the software rf switch - * - * @wimax_dev: wimax device descriptor - * - * @state: new state of the rf kill switch. %wimax_rf_on radio on, - * %wimax_rf_off radio off. - * - * reports changes in the software rf switch state to the wimax stack. - * - * the main use is during initialization, so the driver can query the - * device for its current software radio kill switch state and feed it - * to the system. - * - * on the side, the device does not change the software state by - * itself. in practice, this can happen, as the device might decide to - * switch (in software) the radio off for different reasons. 
- */ -void wimax_report_rfkill_sw(struct wimax_dev *wimax_dev, - enum wimax_rf_state state) -{ - int result; - struct device *dev = wimax_dev_to_dev(wimax_dev); - enum wimax_st wimax_state; - - d_fnstart(3, dev, "(wimax_dev %p state %u) ", wimax_dev, state); - bug_on(state == wimax_rf_query); - bug_on(state != wimax_rf_on && state != wimax_rf_off); - - mutex_lock(&wimax_dev->mutex); - result = wimax_dev_is_ready(wimax_dev); - if (result < 0) - goto error_not_ready; - - if (state != wimax_dev->rf_sw) { - wimax_dev->rf_sw = state; - if (wimax_dev->rf_hw == wimax_rf_on && - wimax_dev->rf_sw == wimax_rf_on) - wimax_state = wimax_st_ready; - else - wimax_state = wimax_st_radio_off; - __wimax_state_change(wimax_dev, wimax_state); - rfkill_set_sw_state(wimax_dev->rfkill, state == wimax_rf_off); - } -error_not_ready: - mutex_unlock(&wimax_dev->mutex); - d_fnend(3, dev, "(wimax_dev %p state %u) = void [%d] ", - wimax_dev, state, result); -} -export_symbol_gpl(wimax_report_rfkill_sw); - - -/* - * callback for the rf kill toggle operation - * - * this function is called by: - * - * - the rfkill subsystem when the rf-kill key is pressed in the - * hardware and the driver notifies through - * wimax_report_rfkill_hw(). the rfkill subsystem ends up calling back - * here so the software rf kill switch state is changed to reflect - * the hardware switch state. - * - * - when the user sets the state through sysfs' rfkill/state file - * - * - when the user calls wimax_rfkill(). - * - * this call blocks! - * - * warning! when we call rfkill_unregister(), this will be called with - * state 0! 
- * - * warning: wimax_dev must be locked - */ -static -int __wimax_rf_toggle_radio(struct wimax_dev *wimax_dev, - enum wimax_rf_state state) -{ - int result = 0; - struct device *dev = wimax_dev_to_dev(wimax_dev); - enum wimax_st wimax_state; - - might_sleep(); - d_fnstart(3, dev, "(wimax_dev %p state %u) ", wimax_dev, state); - if (wimax_dev->rf_sw == state) - goto out_no_change; - if (wimax_dev->op_rfkill_sw_toggle != null) - result = wimax_dev->op_rfkill_sw_toggle(wimax_dev, state); - else if (state == wimax_rf_off) /* no op? can't turn off */ - result = -enxio; - else /* no op? can turn on */ - result = 0; /* should never happen tho */ - if (result >= 0) { - result = 0; - wimax_dev->rf_sw = state; - wimax_state = state == wimax_rf_on ? - wimax_st_ready : wimax_st_radio_off; - __wimax_state_change(wimax_dev, wimax_state); - } -out_no_change: - d_fnend(3, dev, "(wimax_dev %p state %u) = %d ", - wimax_dev, state, result); - return result; -} - - -/* - * translate from rfkill state to wimax state - * - * note: special state handling rules here - * - * just pretend the call didn't happen if we are in a state where - * we know for sure it cannot be handled (wimax_st_down or - * __wimax_st_quiescing). rfkill() needs it to register and - * unregister, as it will run this path. - * - * note: this call will block until the operation is completed. 
- */ -static int wimax_rfkill_set_radio_block(void *data, bool blocked) -{ - int result; - struct wimax_dev *wimax_dev = data; - struct device *dev = wimax_dev_to_dev(wimax_dev); - enum wimax_rf_state rf_state; - - d_fnstart(3, dev, "(wimax_dev %p blocked %u) ", wimax_dev, blocked); - rf_state = wimax_rf_on; - if (blocked) - rf_state = wimax_rf_off; - mutex_lock(&wimax_dev->mutex); - if (wimax_dev->state <= __wimax_st_quiescing) - result = 0; - else - result = __wimax_rf_toggle_radio(wimax_dev, rf_state); - mutex_unlock(&wimax_dev->mutex); - d_fnend(3, dev, "(wimax_dev %p blocked %u) = %d ", - wimax_dev, blocked, result); - return result; -} - -static const struct rfkill_ops wimax_rfkill_ops = { - .set_block = wimax_rfkill_set_radio_block, -}; - -/** - * wimax_rfkill - set the software rf switch state for a wimax device - * - * @wimax_dev: wimax device descriptor - * - * @state: new rf state. - * - * returns: - * - * >= 0 toggle state if ok, < 0 errno code on error. the toggle state - * is returned as a bitmap, bit 0 being the hardware rf state, bit 1 - * the software rf state. - * - * 0 means disabled (%wimax_rf_on, radio on), 1 means enabled radio - * off (%wimax_rf_off). - * - * description: - * - * called by the user when he wants to request the wimax radio to be - * switched on (%wimax_rf_on) or off (%wimax_rf_off). with - * %wimax_rf_query, just the current state is returned. - * - * note: - * - * this call will block until the operation is complete. - */ -int wimax_rfkill(struct wimax_dev *wimax_dev, enum wimax_rf_state state) -{ - int result; - struct device *dev = wimax_dev_to_dev(wimax_dev); - - d_fnstart(3, dev, "(wimax_dev %p state %u) ", wimax_dev, state); - mutex_lock(&wimax_dev->mutex); - result = wimax_dev_is_ready(wimax_dev); - if (result < 0) { - /* while initializing, < 1.4.3 wimax-tools versions use - * this call to check if the device is a valid wimax - * device; so we allow it to proceed always, - * considering the radios are all off. 
- */ - if (result == -enomedium && state == wimax_rf_query) - result = wimax_rf_off << 1 | wimax_rf_off; - goto error_not_ready; - } - switch (state) { - case wimax_rf_on: - case wimax_rf_off: - result = __wimax_rf_toggle_radio(wimax_dev, state); - if (result < 0) - goto error; - rfkill_set_sw_state(wimax_dev->rfkill, state == wimax_rf_off); - break; - case wimax_rf_query: - break; - default: - result = -einval; - goto error; - } - result = wimax_dev->rf_sw << 1 | wimax_dev->rf_hw; -error: -error_not_ready: - mutex_unlock(&wimax_dev->mutex); - d_fnend(3, dev, "(wimax_dev %p state %u) = %d ", - wimax_dev, state, result); - return result; -} -export_symbol(wimax_rfkill); - - -/* - * register a new wimax device's rf kill support - * - * warning: wimax_dev->mutex must be unlocked - */ -int wimax_rfkill_add(struct wimax_dev *wimax_dev) -{ - int result; - struct rfkill *rfkill; - struct device *dev = wimax_dev_to_dev(wimax_dev); - - d_fnstart(3, dev, "(wimax_dev %p) ", wimax_dev); - /* initialize rf kill */ - result = -enomem; - rfkill = rfkill_alloc(wimax_dev->name, dev, rfkill_type_wimax, - &wimax_rfkill_ops, wimax_dev); - if (rfkill == null) - goto error_rfkill_allocate; - - d_printf(1, dev, "rfkill %p ", rfkill); - - wimax_dev->rfkill = rfkill; - - rfkill_init_sw_state(rfkill, 1); - result = rfkill_register(wimax_dev->rfkill); - if (result < 0) - goto error_rfkill_register; - - /* if there is no sw toggle op, sw rfkill is always on */ - if (wimax_dev->op_rfkill_sw_toggle == null) - wimax_dev->rf_sw = wimax_rf_on; - - d_fnend(3, dev, "(wimax_dev %p) = 0 ", wimax_dev); - return 0; - -error_rfkill_register: - rfkill_destroy(wimax_dev->rfkill); -error_rfkill_allocate: - d_fnend(3, dev, "(wimax_dev %p) = %d ", wimax_dev, result); - return result; -} - - -/* - * deregister a wimax device's rf kill support - * - * ick, we can't call rfkill_free() after rfkill_unregister()...oh - * well. 
- * - * warning: wimax_dev->mutex must be unlocked - */ -void wimax_rfkill_rm(struct wimax_dev *wimax_dev) -{ - struct device *dev = wimax_dev_to_dev(wimax_dev); - - d_fnstart(3, dev, "(wimax_dev %p) ", wimax_dev); - rfkill_unregister(wimax_dev->rfkill); - rfkill_destroy(wimax_dev->rfkill); - d_fnend(3, dev, "(wimax_dev %p) ", wimax_dev); -} - - -/* - * exporting to user space over generic netlink - * - * parse the rfkill command from user space, return a combination - * value that describe the states of the different toggles. - * - * only one attribute: the new state requested (on, off or no change, - * just query). - */ - -int wimax_gnl_doit_rfkill(struct sk_buff *skb, struct genl_info *info) -{ - int result, ifindex; - struct wimax_dev *wimax_dev; - struct device *dev; - enum wimax_rf_state new_state; - - d_fnstart(3, null, "(skb %p info %p) ", skb, info); - result = -enodev; - if (info->attrs[wimax_gnl_rfkill_ifidx] == null) { - pr_err("wimax_gnl_op_rfkill: can't find ifidx attribute "); - goto error_no_wimax_dev; - } - ifindex = nla_get_u32(info->attrs[wimax_gnl_rfkill_ifidx]); - wimax_dev = wimax_dev_get_by_genl_info(info, ifindex); - if (wimax_dev == null) - goto error_no_wimax_dev; - dev = wimax_dev_to_dev(wimax_dev); - result = -einval; - if (info->attrs[wimax_gnl_rfkill_state] == null) { - dev_err(dev, "wimax_gnl_rfkill: can't find rfkill_state attribute "); - goto error_no_pid; - } - new_state = nla_get_u32(info->attrs[wimax_gnl_rfkill_state]); - - /* execute the operation and send the result back to user space */ - result = wimax_rfkill(wimax_dev, new_state); -error_no_pid: - dev_put(wimax_dev->net_dev); -error_no_wimax_dev: - d_fnend(3, null, "(skb %p info %p) = %d ", skb, info, result); - return result; -} diff --git a/drivers/staging/wimax/op-state-get.c b/drivers/staging/wimax/op-state-get.c --- a/drivers/staging/wimax/op-state-get.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-only -/* - * linux wimax - * implement and export a method for 
getting a wimax device current state - * - * copyright (c) 2009 paulius zaleckas <paulius.zaleckas@teltonika.lt> - * - * based on previous wimax core work by: - * copyright (c) 2008 intel corporation <linux-wimax@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - */ - -#include "net-wimax.h" -#include <net/genetlink.h> -#include "linux-wimax.h" -#include <linux/security.h> -#include "wimax-internal.h" - -#define d_submodule op_state_get -#include "debug-levels.h" - - -/* - * exporting to user space over generic netlink - * - * parse the state get command from user space, return a combination - * value that describe the current state. - * - * no attributes. - */ -int wimax_gnl_doit_state_get(struct sk_buff *skb, struct genl_info *info) -{ - int result, ifindex; - struct wimax_dev *wimax_dev; - - d_fnstart(3, null, "(skb %p info %p) ", skb, info); - result = -enodev; - if (info->attrs[wimax_gnl_stget_ifidx] == null) { - pr_err("wimax_gnl_op_state_get: can't find ifidx attribute "); - goto error_no_wimax_dev; - } - ifindex = nla_get_u32(info->attrs[wimax_gnl_stget_ifidx]); - wimax_dev = wimax_dev_get_by_genl_info(info, ifindex); - if (wimax_dev == null) - goto error_no_wimax_dev; - /* execute the operation and send the result back to user space */ - result = wimax_state_get(wimax_dev); - dev_put(wimax_dev->net_dev); -error_no_wimax_dev: - d_fnend(3, null, "(skb %p info %p) = %d ", skb, info, result); - return result; -} diff --git a/drivers/staging/wimax/stack.c b/drivers/staging/wimax/stack.c --- a/drivers/staging/wimax/stack.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-only -/* - * linux wimax - * initialization, addition and removal of wimax devices - * - * copyright (c) 2005-2006 intel corporation <linux-wimax@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - * - * this implements: - * - * - basic life cycle of 'struct wimax_dev' [wimax_dev_*()]; on - * addition/registration initialize all subfields and allocate - * 
generic netlink resources for user space communication. on - * removal/unregistration, undo all that. - * - * - device state machine [wimax_state_change()] and support to send - * reports to user space when the state changes - * [wimax_gnl_re_state_change*()]. - * - * see include/net/wimax.h for rationales and design. - * - * roadmap - * - * [__]wimax_state_change() called by drivers to update device's state - * wimax_gnl_re_state_change_alloc() - * wimax_gnl_re_state_change_send() - * - * wimax_dev_init() init a device - * wimax_dev_add() register - * wimax_rfkill_add() - * wimax_gnl_add() register all the generic netlink resources. - * wimax_id_table_add() - * wimax_dev_rm() unregister - * wimax_id_table_rm() - * wimax_gnl_rm() - * wimax_rfkill_rm() - */ -#include <linux/device.h> -#include <linux/gfp.h> -#include <net/genetlink.h> -#include <linux/netdevice.h> -#include "linux-wimax.h" -#include <linux/module.h> -#include "wimax-internal.h" - - -#define d_submodule stack -#include "debug-levels.h" - -static char wimax_debug_params[128]; -module_param_string(debug, wimax_debug_params, sizeof(wimax_debug_params), - 0644); -module_parm_desc(debug, - "string of space-separated name:value pairs, where names " - "are the different debug submodules and value are the " - "initial debug value to set."); - -/* - * allocate a report state change message - * - * @header: save it, you need it for _send() - * - * creates and fills a basic state change message; different code - * paths can then add more attributes to the message as needed. - * - * use wimax_gnl_re_state_change_send() to send the returned skb. - * - * returns: skb with the genl message if ok, is_err() ptr on error - * with an errno code. 
- */ -static -struct sk_buff *wimax_gnl_re_state_change_alloc( - struct wimax_dev *wimax_dev, - enum wimax_st new_state, enum wimax_st old_state, - void **header) -{ - int result; - struct device *dev = wimax_dev_to_dev(wimax_dev); - void *data; - struct sk_buff *report_skb; - - d_fnstart(3, dev, "(wimax_dev %p new_state %u old_state %u) ", - wimax_dev, new_state, old_state); - result = -enomem; - report_skb = nlmsg_new(nlmsg_default_size, gfp_kernel); - if (report_skb == null) { - dev_err(dev, "re_stch: can't create message "); - goto error_new; - } - /* fixme: sending a group id as the seq is wrong */ - data = genlmsg_put(report_skb, 0, wimax_gnl_family.mcgrp_offset, - &wimax_gnl_family, 0, wimax_gnl_re_state_change); - if (data == null) { - dev_err(dev, "re_stch: can't put data into message "); - goto error_put; - } - *header = data; - - result = nla_put_u8(report_skb, wimax_gnl_stch_state_old, old_state); - if (result < 0) { - dev_err(dev, "re_stch: error adding old attr: %d ", result); - goto error_put; - } - result = nla_put_u8(report_skb, wimax_gnl_stch_state_new, new_state); - if (result < 0) { - dev_err(dev, "re_stch: error adding new attr: %d ", result); - goto error_put; - } - result = nla_put_u32(report_skb, wimax_gnl_stch_ifidx, - wimax_dev->net_dev->ifindex); - if (result < 0) { - dev_err(dev, "re_stch: error adding ifindex attribute "); - goto error_put; - } - d_fnend(3, dev, "(wimax_dev %p new_state %u old_state %u) = %p ", - wimax_dev, new_state, old_state, report_skb); - return report_skb; - -error_put: - nlmsg_free(report_skb); -error_new: - d_fnend(3, dev, "(wimax_dev %p new_state %u old_state %u) = %d ", - wimax_dev, new_state, old_state, result); - return err_ptr(result); -} - - -/* - * send a report state change message (as created with _alloc). - * - * @report_skb: as returned by wimax_gnl_re_state_change_alloc() - * @header: as returned by wimax_gnl_re_state_change_alloc() - * - * returns: 0 if ok, < 0 errno code on error. 
- * - * if the message is null, pretend it didn't happen. - */ -static -int wimax_gnl_re_state_change_send( - struct wimax_dev *wimax_dev, struct sk_buff *report_skb, - void *header) -{ - int result = 0; - struct device *dev = wimax_dev_to_dev(wimax_dev); - - d_fnstart(3, dev, "(wimax_dev %p report_skb %p) ", - wimax_dev, report_skb); - if (report_skb == null) { - result = -enomem; - goto out; - } - genlmsg_end(report_skb, header); - genlmsg_multicast(&wimax_gnl_family, report_skb, 0, 0, gfp_kernel); -out: - d_fnend(3, dev, "(wimax_dev %p report_skb %p) = %d ", - wimax_dev, report_skb, result); - return result; -} - - -static -void __check_new_state(enum wimax_st old_state, enum wimax_st new_state, - unsigned int allowed_states_bm) -{ - if (warn_on(((1 << new_state) & allowed_states_bm) == 0)) { - pr_err("sw bug! forbidden state change %u -> %u ", - old_state, new_state); - } -} - - -/* - * set the current state of a wimax device [unlocking version of - * wimax_state_change(). - */ -void __wimax_state_change(struct wimax_dev *wimax_dev, enum wimax_st new_state) -{ - struct device *dev = wimax_dev_to_dev(wimax_dev); - enum wimax_st old_state = wimax_dev->state; - struct sk_buff *stch_skb; - void *header; - - d_fnstart(3, dev, "(wimax_dev %p new_state %u [old %u]) ", - wimax_dev, new_state, old_state); - - if (warn_on(new_state >= __wimax_st_invalid)) { - dev_err(dev, "sw bug: requesting invalid state %u ", - new_state); - goto out; - } - if (old_state == new_state) - goto out; - header = null; /* gcc complains? 
can't grok why */ - stch_skb = wimax_gnl_re_state_change_alloc( - wimax_dev, new_state, old_state, &header); - - /* verify the state transition and do exit-from-state actions */ - switch (old_state) { - case __wimax_st_null: - __check_new_state(old_state, new_state, - 1 << wimax_st_down); - break; - case wimax_st_down: - __check_new_state(old_state, new_state, - 1 << __wimax_st_quiescing - | 1 << wimax_st_uninitialized - | 1 << wimax_st_radio_off); - break; - case __wimax_st_quiescing: - __check_new_state(old_state, new_state, 1 << wimax_st_down); - break; - case wimax_st_uninitialized: - __check_new_state(old_state, new_state, - 1 << __wimax_st_quiescing - | 1 << wimax_st_radio_off); - break; - case wimax_st_radio_off: - __check_new_state(old_state, new_state, - 1 << __wimax_st_quiescing - | 1 << wimax_st_ready); - break; - case wimax_st_ready: - __check_new_state(old_state, new_state, - 1 << __wimax_st_quiescing - | 1 << wimax_st_radio_off - | 1 << wimax_st_scanning - | 1 << wimax_st_connecting - | 1 << wimax_st_connected); - break; - case wimax_st_scanning: - __check_new_state(old_state, new_state, - 1 << __wimax_st_quiescing - | 1 << wimax_st_radio_off - | 1 << wimax_st_ready - | 1 << wimax_st_connecting - | 1 << wimax_st_connected); - break; - case wimax_st_connecting: - __check_new_state(old_state, new_state, - 1 << __wimax_st_quiescing - | 1 << wimax_st_radio_off - | 1 << wimax_st_ready - | 1 << wimax_st_scanning - | 1 << wimax_st_connected); - break; - case wimax_st_connected: - __check_new_state(old_state, new_state, - 1 << __wimax_st_quiescing - | 1 << wimax_st_radio_off - | 1 << wimax_st_ready); - netif_tx_disable(wimax_dev->net_dev); - netif_carrier_off(wimax_dev->net_dev); - break; - case __wimax_st_invalid: - default: - dev_err(dev, "sw bug: wimax_dev %p is in unknown state %u ", - wimax_dev, wimax_dev->state); - warn_on(1); - goto out; - } - - /* execute the actions of entry to the new state */ - switch (new_state) { - case __wimax_st_null: - 
dev_err(dev, "sw bug: wimax_dev %p entering null state " - "from %u ", wimax_dev, wimax_dev->state); - warn_on(1); /* nobody can enter this state */ - break; - case wimax_st_down: - break; - case __wimax_st_quiescing: - break; - case wimax_st_uninitialized: - break; - case wimax_st_radio_off: - break; - case wimax_st_ready: - break; - case wimax_st_scanning: - break; - case wimax_st_connecting: - break; - case wimax_st_connected: - netif_carrier_on(wimax_dev->net_dev); - netif_wake_queue(wimax_dev->net_dev); - break; - case __wimax_st_invalid: - default: - bug(); - } - __wimax_state_set(wimax_dev, new_state); - if (!is_err(stch_skb)) - wimax_gnl_re_state_change_send(wimax_dev, stch_skb, header); -out: - d_fnend(3, dev, "(wimax_dev %p new_state %u [old %u]) = void ", - wimax_dev, new_state, old_state); -} - - -/** - * wimax_state_change - set the current state of a wimax device - * - * @wimax_dev: wimax device descriptor (properly referenced) - * @new_state: new state to switch to - * - * this implements the state changes for the wimax devices. it will - * - * - verify that the state transition is legal (for now it'll just - * print a warning if not) according to the table in - * linux/wimax.h's documentation for 'enum wimax_st'. - * - * - perform the actions needed for leaving the current state and - * whichever are needed for entering the new state. - * - * - issue a report to user space indicating the new state (and an - * optional payload with information about the new state). - * - * note: @wimax_dev must be locked - */ -void wimax_state_change(struct wimax_dev *wimax_dev, enum wimax_st new_state) -{ - /* - * a driver cannot take the wimax_dev out of the - * __wimax_st_null state unless by calling wimax_dev_add(). if - * the wimax_dev's state is still null, we ignore any request - * to change its state because it means it hasn't been yet - * registered. 
- * - * there is no need to complain about it, as routines that - * call this might be shared from different code paths that - * are called before or after wimax_dev_add() has done its - * job. - */ - mutex_lock(&wimax_dev->mutex); - if (wimax_dev->state > __wimax_st_null) - __wimax_state_change(wimax_dev, new_state); - mutex_unlock(&wimax_dev->mutex); -} -export_symbol_gpl(wimax_state_change); - - -/** - * wimax_state_get() - return the current state of a wimax device - * - * @wimax_dev: wimax device descriptor - * - * returns: current state of the device according to its driver. - */ -enum wimax_st wimax_state_get(struct wimax_dev *wimax_dev) -{ - enum wimax_st state; - - mutex_lock(&wimax_dev->mutex); - state = wimax_dev->state; - mutex_unlock(&wimax_dev->mutex); - return state; -} -export_symbol_gpl(wimax_state_get); - - -/** - * wimax_dev_init - initialize a newly allocated instance - * - * @wimax_dev: wimax device descriptor to initialize. - * - * initializes fields of a freshly allocated @wimax_dev instance. this - * function assumes that after allocation, the memory occupied by - * @wimax_dev was zeroed. - */ -void wimax_dev_init(struct wimax_dev *wimax_dev) -{ - init_list_head(&wimax_dev->id_table_node); - __wimax_state_set(wimax_dev, __wimax_st_null); - mutex_init(&wimax_dev->mutex); - mutex_init(&wimax_dev->mutex_reset); -} -export_symbol_gpl(wimax_dev_init); - -/* - * there are multiple enums reusing the same values, adding - * others is only possible if they use a compatible policy. 
- */ -static const struct nla_policy wimax_gnl_policy[wimax_gnl_attr_max + 1] = { - /* - * wimax_gnl_reset_ifidx, wimax_gnl_rfkill_ifidx, - * wimax_gnl_stget_ifidx, wimax_gnl_msg_ifidx - */ - [1] = { .type = nla_u32, }, - /* - * wimax_gnl_rfkill_state, wimax_gnl_msg_pipe_name - */ - [2] = { .type = nla_u32, }, /* enum wimax_rf_state */ - /* - * wimax_gnl_msg_data - */ - [3] = { .type = nla_unspec, }, /* libnl doesn't grok binary yet */ -}; - -static const struct genl_small_ops wimax_gnl_ops[] = { - { - .cmd = wimax_gnl_op_msg_from_user, - .validate = genl_dont_validate_strict | genl_dont_validate_dump, - .flags = genl_admin_perm, - .doit = wimax_gnl_doit_msg_from_user, - }, - { - .cmd = wimax_gnl_op_reset, - .validate = genl_dont_validate_strict | genl_dont_validate_dump, - .flags = genl_admin_perm, - .doit = wimax_gnl_doit_reset, - }, - { - .cmd = wimax_gnl_op_rfkill, - .validate = genl_dont_validate_strict | genl_dont_validate_dump, - .flags = genl_admin_perm, - .doit = wimax_gnl_doit_rfkill, - }, - { - .cmd = wimax_gnl_op_state_get, - .validate = genl_dont_validate_strict | genl_dont_validate_dump, - .flags = genl_admin_perm, - .doit = wimax_gnl_doit_state_get, - }, -}; - - -static -size_t wimax_addr_scnprint(char *addr_str, size_t addr_str_size, - unsigned char *addr, size_t addr_len) -{ - unsigned int cnt, total; - - for (total = cnt = 0; cnt < addr_len; cnt++) - total += scnprintf(addr_str + total, addr_str_size - total, - "%02x%c", addr[cnt], - cnt == addr_len - 1 ? '' : ':'); - return total; -} - - -/** - * wimax_dev_add - register a new wimax device - * - * @wimax_dev: wimax device descriptor (as embedded in your @net_dev's - * priv data). you must have called wimax_dev_init() on it before. - * - * @net_dev: net device the @wimax_dev is associated with. the - * function expects set_netdev_dev() and register_netdev() were - * already called on it. 
- * - * registers the new wimax device, sets up the user-kernel control - * interface (generic netlink) and common wimax infrastructure. - * - * note that the parts that will allow interaction with user space are - * setup at the very end, when the rest is in place, as once that - * happens, the driver might get user space control requests via - * netlink or from debugfs that might translate into calls into - * wimax_dev->op_*(). - */ -int wimax_dev_add(struct wimax_dev *wimax_dev, struct net_device *net_dev) -{ - int result; - struct device *dev = net_dev->dev.parent; - char addr_str[32]; - - d_fnstart(3, dev, "(wimax_dev %p net_dev %p) ", wimax_dev, net_dev); - - /* do the rfkill setup before locking, as rfkill will call - * into our functions. - */ - wimax_dev->net_dev = net_dev; - result = wimax_rfkill_add(wimax_dev); - if (result < 0) - goto error_rfkill_add; - - /* set up user-space interaction */ - mutex_lock(&wimax_dev->mutex); - wimax_id_table_add(wimax_dev); - wimax_debugfs_add(wimax_dev); - - __wimax_state_set(wimax_dev, wimax_st_down); - mutex_unlock(&wimax_dev->mutex); - - wimax_addr_scnprint(addr_str, sizeof(addr_str), - net_dev->dev_addr, net_dev->addr_len); - dev_err(dev, "wimax interface %s (%s) ready ", - net_dev->name, addr_str); - d_fnend(3, dev, "(wimax_dev %p net_dev %p) = 0 ", wimax_dev, net_dev); - return 0; - -error_rfkill_add: - d_fnend(3, dev, "(wimax_dev %p net_dev %p) = %d ", - wimax_dev, net_dev, result); - return result; -} -export_symbol_gpl(wimax_dev_add); - - -/** - * wimax_dev_rm - unregister an existing wimax device - * - * @wimax_dev: wimax device descriptor - * - * unregisters a wimax device previously registered for use with - * wimax_add_rm(). - * - * important! must call before calling unregister_netdev(). - * - * after this function returns, you will not get any more user space - * control requests (via netlink or debugfs) and thus to wimax_dev->ops. 
- * - * reentrancy control is ensured by setting the state to - * %__wimax_st_quiescing. rfkill operations coming through - * wimax_*rfkill*() will be stopped by the quiescing state; ops coming - * from the rfkill subsystem will be stopped by the support being - * removed by wimax_rfkill_rm(). - */ -void wimax_dev_rm(struct wimax_dev *wimax_dev) -{ - d_fnstart(3, null, "(wimax_dev %p) ", wimax_dev); - - mutex_lock(&wimax_dev->mutex); - __wimax_state_change(wimax_dev, __wimax_st_quiescing); - wimax_debugfs_rm(wimax_dev); - wimax_id_table_rm(wimax_dev); - __wimax_state_change(wimax_dev, wimax_st_down); - mutex_unlock(&wimax_dev->mutex); - wimax_rfkill_rm(wimax_dev); - d_fnend(3, null, "(wimax_dev %p) = void ", wimax_dev); -} -export_symbol_gpl(wimax_dev_rm); - - -/* debug framework control of debug levels */ -struct d_level d_level[] = { - d_submodule_define(debugfs), - d_submodule_define(id_table), - d_submodule_define(op_msg), - d_submodule_define(op_reset), - d_submodule_define(op_rfkill), - d_submodule_define(op_state_get), - d_submodule_define(stack), -}; -size_t d_level_size = array_size(d_level); - - -static const struct genl_multicast_group wimax_gnl_mcgrps[] = { - { .name = "msg", }, -}; - -struct genl_family wimax_gnl_family __ro_after_init = { - .name = "wimax", - .version = wimax_gnl_version, - .hdrsize = 0, - .maxattr = wimax_gnl_attr_max, - .policy = wimax_gnl_policy, - .module = this_module, - .small_ops = wimax_gnl_ops, - .n_small_ops = array_size(wimax_gnl_ops), - .mcgrps = wimax_gnl_mcgrps, - .n_mcgrps = array_size(wimax_gnl_mcgrps), -}; - - - -/* shutdown the wimax stack */ -static -int __init wimax_subsys_init(void) -{ - int result; - - d_fnstart(4, null, "() "); - d_parse_params(d_level, d_level_size, wimax_debug_params, - "wimax.debug"); - - result = genl_register_family(&wimax_gnl_family); - if (unlikely(result < 0)) { - pr_err("cannot register generic netlink family: %d ", result); - goto error_register_family; - } - - d_fnend(4, null, "() = 0 
"); - return 0; - -error_register_family: - d_fnend(4, null, "() = %d ", result); - return result; - -} -module_init(wimax_subsys_init); - - -/* shutdown the wimax stack */ -static -void __exit wimax_subsys_exit(void) -{ - wimax_id_table_release(); - genl_unregister_family(&wimax_gnl_family); -} -module_exit(wimax_subsys_exit); - -module_author("intel corporation <linux-wimax@intel.com>"); -module_description("linux wimax stack"); -module_license("gpl"); diff --git a/drivers/staging/wimax/wimax-internal.h b/drivers/staging/wimax/wimax-internal.h --- a/drivers/staging/wimax/wimax-internal.h +++ /dev/null -/* spdx-license-identifier: gpl-2.0-only */ -/* - * linux wimax - * internal api for kernel space wimax stack - * - * copyright (c) 2007 intel corporation <linux-wimax@intel.com> - * inaky perez-gonzalez <inaky.perez-gonzalez@intel.com> - * - * this header file is for declarations and definitions internal to - * the wimax stack. for public apis and documentation, see - * include/net/wimax.h and include/linux/wimax.h. - */ - -#ifndef __wimax_internal_h__ -#define __wimax_internal_h__ -#ifdef __kernel__ - -#ifdef pr_fmt -#undef pr_fmt -#endif - -#define pr_fmt(fmt) kbuild_modname ": " fmt - -#include <linux/device.h> -#include "net-wimax.h" - - -/* - * decide if a (locked) device is ready for use - * - * before using the device structure, it must be locked - * (wimax_dev->mutex). as well, most operations need to call this - * function to check if the state is the right one. - * - * an error value will be returned if the state is not the right - * one. in that case, the caller should not attempt to use the device - * and just unlock it. - */ -static inline __must_check -int wimax_dev_is_ready(struct wimax_dev *wimax_dev) -{ - if (wimax_dev->state == __wimax_st_null) - return -einval; /* device is not even registered! 
*/ - if (wimax_dev->state == wimax_st_down) - return -enomedium; - if (wimax_dev->state == __wimax_st_quiescing) - return -eshutdown; - return 0; -} - - -static inline -void __wimax_state_set(struct wimax_dev *wimax_dev, enum wimax_st state) -{ - wimax_dev->state = state; -} -void __wimax_state_change(struct wimax_dev *, enum wimax_st); - -#ifdef config_debug_fs -void wimax_debugfs_add(struct wimax_dev *); -void wimax_debugfs_rm(struct wimax_dev *); -#else -static inline void wimax_debugfs_add(struct wimax_dev *wimax_dev) {} -static inline void wimax_debugfs_rm(struct wimax_dev *wimax_dev) {} -#endif - -void wimax_id_table_add(struct wimax_dev *); -struct wimax_dev *wimax_dev_get_by_genl_info(struct genl_info *, int); -void wimax_id_table_rm(struct wimax_dev *); -void wimax_id_table_release(void); - -int wimax_rfkill_add(struct wimax_dev *); -void wimax_rfkill_rm(struct wimax_dev *); - -/* generic netlink */ -extern struct genl_family wimax_gnl_family; - -/* ops */ -int wimax_gnl_doit_msg_from_user(struct sk_buff *skb, struct genl_info *info); -int wimax_gnl_doit_reset(struct sk_buff *skb, struct genl_info *info); -int wimax_gnl_doit_rfkill(struct sk_buff *skb, struct genl_info *info); -int wimax_gnl_doit_state_get(struct sk_buff *skb, struct genl_info *info); - -#endif /* #ifdef __kernel__ */ -#endif /* #ifndef __wimax_internal_h__ */
|
Drivers in the Staging area
|
18507b8f63101949f4a931fc904c37ea10407f7c
|
greg kroah hartman
|
drivers
|
staging
|
documentation, i2400m, wimax
|
staging: gasket: remove it from the kernel
|
as none of the proposed things that need to be changed in the gasket drivers have ever been done, and there has not been any forward progress to get this out of staging, it seems totally abandoned, so remove the code entirely so that people do not spend their time doing tiny cleanups for code that will never get out of staging.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
remove it from the kernel
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['gasket']
|
['c', 'h', 'kconfig', 'todo', 'maintainers', 'makefile']
| 20
| 0
| 6,648
|
--- diff --git a/maintainers b/maintainers --- a/maintainers +++ b/maintainers -gasket driver framework -m: rob springer <rspringer@google.com> -m: todd poynor <toddpoynor@google.com> -m: ben chan <benchan@chromium.org> -m: richard yeh <rcy@google.com> -s: maintained -f: drivers/staging/gasket/ - diff --git a/drivers/staging/kconfig b/drivers/staging/kconfig --- a/drivers/staging/kconfig +++ b/drivers/staging/kconfig -source "drivers/staging/gasket/kconfig" - diff --git a/drivers/staging/makefile b/drivers/staging/makefile --- a/drivers/staging/makefile +++ b/drivers/staging/makefile -obj-$(config_staging_gasket_framework) += gasket/ diff --git a/drivers/staging/gasket/kconfig b/drivers/staging/gasket/kconfig --- a/drivers/staging/gasket/kconfig +++ /dev/null -# spdx-license-identifier: gpl-2.0 -menu "gasket devices" - -config staging_gasket_framework - tristate "gasket framework" - depends on pci && (x86_64 || arm64) - help - this framework supports gasket-compatible devices, such as apex. - it is required for any of the following module(s). - - to compile this driver as a module, choose m here. the module - will be called "gasket". - -config staging_apex_driver - tristate "apex driver" - depends on staging_gasket_framework - help - this driver supports the apex edge tpu device. see - https://cloud.google.com/edge-tpu/ for more information. - say y if you want to include this driver in the kernel. - - to compile this driver as a module, choose m here. the module - will be called "apex". - -endmenu diff --git a/drivers/staging/gasket/makefile b/drivers/staging/gasket/makefile --- a/drivers/staging/gasket/makefile +++ /dev/null -# spdx-license-identifier: gpl-2.0 -# -# makefile for gasket framework and dependent drivers. 
-# - -obj-$(config_staging_gasket_framework) += gasket.o -obj-$(config_staging_apex_driver) += apex.o - -gasket-objs := gasket_core.o gasket_ioctl.o gasket_interrupt.o gasket_page_table.o gasket_sysfs.o -apex-objs := apex_driver.o diff --git a/drivers/staging/gasket/todo b/drivers/staging/gasket/todo --- a/drivers/staging/gasket/todo +++ /dev/null -this is a list of things that need to be done to get this driver out of the -staging directory. - -- implement the gasket framework's functionality through uio instead of - introducing a new user-space drivers framework that is quite similar. - - uio provides the necessary bits to implement user-space drivers. meanwhile - the gasket apis adds some extra conveniences like pci bar mapping, and - msi interrupts. add these features to the uio subsystem, then re-implement - the apex driver as a basic uio driver instead (include/linux/uio_driver.h) - -- document sysfs files with documentation/abi/ entries. - -- use misc interface instead of major number for driver version description. - -- add descriptions of module_param's - -- apex_get_status() should actually check status. - -- "drivers" should never be dealing with "raw" sysfs calls or mess around with - kobjects at all. the driver core should handle all of this for you - automaically. there should not be a need for raw attribute macros. diff --git a/drivers/staging/gasket/apex.h b/drivers/staging/gasket/apex.h --- a/drivers/staging/gasket/apex.h +++ /dev/null -/* spdx-license-identifier: gpl-2.0 */ -/* - * apex kernel-userspace interface definitions. - * - * copyright (c) 2018 google, inc. - */ -#ifndef __apex_h__ -#define __apex_h__ - -#include <linux/ioctl.h> - -/* clock gating ioctl. */ -struct apex_gate_clock_ioctl { - /* enter or leave clock gated state. 
*/ - u64 enable; - - /* if set, enter clock gating state, regardless of custom block's - * internal idle state - */ - u64 force_idle; -}; - -/* base number for all apex-common ioctls */ -#define apex_ioctl_base 0x7f - -/* enable/disable clock gating. */ -#define apex_ioctl_gate_clock \ - _iow(apex_ioctl_base, 0, struct apex_gate_clock_ioctl) - -#endif /* __apex_h__ */ diff --git a/drivers/staging/gasket/apex_driver.c b/drivers/staging/gasket/apex_driver.c --- a/drivers/staging/gasket/apex_driver.c +++ /dev/null -// spdx-license-identifier: gpl-2.0 -/* - * driver for the apex chip. - * - * copyright (c) 2018 google, inc. - */ - -#include <linux/compiler.h> -#include <linux/delay.h> -#include <linux/device.h> -#include <linux/fs.h> -#include <linux/init.h> -#include <linux/mm.h> -#include <linux/module.h> -#include <linux/moduleparam.h> -#include <linux/pci.h> -#include <linux/printk.h> -#include <linux/sched.h> -#include <linux/uaccess.h> - -#include "apex.h" - -#include "gasket_core.h" -#include "gasket_interrupt.h" -#include "gasket_page_table.h" -#include "gasket_sysfs.h" - -/* constants */ -#define apex_device_name "apex" -#define apex_driver_version "1.0" - -/* csrs are in bar 2. */ -#define apex_bar_index 2 - -#define apex_pci_vendor_id 0x1ac1 -#define apex_pci_device_id 0x089a - -/* bar offsets. */ -#define apex_bar_offset 0 -#define apex_cm_offset 0x1000000 - -/* the sizes of each apex bar 2. */ -#define apex_bar_bytes 0x100000 -#define apex_ch_mem_bytes (page_size * max_num_coherent_pages) - -/* the number of user-mappable memory ranges in bar2 of a apex chip. */ -#define num_regions 3 - -/* the number of nodes in a apex chip. */ -#define num_nodes 1 - -/* - * the total number of entries in the page table. should match the value read - * from the register apex_bar2_reg_kernel_hib_page_table_size. - */ -#define apex_page_table_total_entries 8192 - -#define apex_extended_shift 63 /* extended address bit position. 
*/ - -/* check reset 120 times */ -#define apex_reset_retry 120 -/* wait 100 ms between checks. total 12 sec wait maximum. */ -#define apex_reset_delay 100 - -/* enumeration of the supported sysfs entries. */ -enum sysfs_attribute_type { - attr_kernel_hib_page_table_size, - attr_kernel_hib_simple_page_table_size, - attr_kernel_hib_num_active_pages, -}; - -/* - * register offsets into bar2 memory. - * only values necessary for driver implementation are defined. - */ -enum apex_bar2_regs { - apex_bar2_reg_scu_base = 0x1a300, - apex_bar2_reg_kernel_hib_page_table_size = 0x46000, - apex_bar2_reg_kernel_hib_extended_table = 0x46008, - apex_bar2_reg_kernel_hib_translation_enable = 0x46010, - apex_bar2_reg_kernel_hib_instr_queue_intvecctl = 0x46018, - apex_bar2_reg_kernel_hib_input_actv_queue_intvecctl = 0x46020, - apex_bar2_reg_kernel_hib_param_queue_intvecctl = 0x46028, - apex_bar2_reg_kernel_hib_output_actv_queue_intvecctl = 0x46030, - apex_bar2_reg_kernel_hib_sc_host_intvecctl = 0x46038, - apex_bar2_reg_kernel_hib_top_level_intvecctl = 0x46040, - apex_bar2_reg_kernel_hib_fatal_err_intvecctl = 0x46048, - apex_bar2_reg_kernel_hib_dma_pause = 0x46050, - apex_bar2_reg_kernel_hib_dma_pause_mask = 0x46058, - apex_bar2_reg_kernel_hib_status_block_delay = 0x46060, - apex_bar2_reg_kernel_hib_msix_pending_bit_array0 = 0x46068, - apex_bar2_reg_kernel_hib_msix_pending_bit_array1 = 0x46070, - apex_bar2_reg_kernel_hib_page_table_init = 0x46078, - apex_bar2_reg_kernel_hib_msix_table_init = 0x46080, - apex_bar2_reg_kernel_wire_int_pending_bit_array = 0x48778, - apex_bar2_reg_kernel_wire_int_mask_array = 0x48780, - apex_bar2_reg_user_hib_dma_pause = 0x486d8, - apex_bar2_reg_user_hib_dma_paused = 0x486e0, - apex_bar2_reg_idlegenerator_idlegen_idleregister = 0x4a000, - apex_bar2_reg_kernel_hib_page_table = 0x50000, - - /* error registers - used mostly for debug */ - apex_bar2_reg_user_hib_error_status = 0x86f0, - apex_bar2_reg_scalar_core_error_status = 0x41a0, -}; - -/* addresses for 
packed registers. */
-#define APEX_BAR2_REG_AXI_QUIESCE (APEX_BAR2_REG_SCU_BASE + 0x2c)
-#define APEX_BAR2_REG_GCB_CLOCK_GATE (APEX_BAR2_REG_SCU_BASE + 0x14)
-#define APEX_BAR2_REG_SCU_0 (APEX_BAR2_REG_SCU_BASE + 0xc)
-#define APEX_BAR2_REG_SCU_1 (APEX_BAR2_REG_SCU_BASE + 0x10)
-#define APEX_BAR2_REG_SCU_2 (APEX_BAR2_REG_SCU_BASE + 0x14)
-#define APEX_BAR2_REG_SCU_3 (APEX_BAR2_REG_SCU_BASE + 0x18)
-#define APEX_BAR2_REG_SCU_4 (APEX_BAR2_REG_SCU_BASE + 0x1c)
-#define APEX_BAR2_REG_SCU_5 (APEX_BAR2_REG_SCU_BASE + 0x20)
-
-#define SCU3_RG_PWR_STATE_OVR_BIT_OFFSET 26
-#define SCU3_RG_PWR_STATE_OVR_MASK_WIDTH 2
-#define SCU3_CUR_RST_GCB_BIT_MASK 0x10
-#define SCU2_RG_RST_GCB_BIT_MASK 0xc
-
-/* Configuration for page table. */
-static struct gasket_page_table_config apex_page_table_configs[NUM_NODES] = {
-	{
-		.id = 0,
-		.mode = GASKET_PAGE_TABLE_MODE_NORMAL,
-		.total_entries = APEX_PAGE_TABLE_TOTAL_ENTRIES,
-		.base_reg = APEX_BAR2_REG_KERNEL_HIB_PAGE_TABLE,
-		.extended_reg = APEX_BAR2_REG_KERNEL_HIB_EXTENDED_TABLE,
-		.extended_bit = APEX_EXTENDED_SHIFT,
-	},
-};
-
-/* The regions in the BAR2 space that can be mapped into user space. */
-static const struct gasket_mappable_region mappable_regions[NUM_REGIONS] = {
-	{ 0x40000, 0x1000 },
-	{ 0x44000, 0x1000 },
-	{ 0x48000, 0x1000 },
-};
-
-/* Gasket device interrupts enums must be dense (i.e., no empty slots).
 */
-enum apex_interrupt {
-	APEX_INTERRUPT_INSTR_QUEUE = 0,
-	APEX_INTERRUPT_INPUT_ACTV_QUEUE = 1,
-	APEX_INTERRUPT_PARAM_QUEUE = 2,
-	APEX_INTERRUPT_OUTPUT_ACTV_QUEUE = 3,
-	APEX_INTERRUPT_SC_HOST_0 = 4,
-	APEX_INTERRUPT_SC_HOST_1 = 5,
-	APEX_INTERRUPT_SC_HOST_2 = 6,
-	APEX_INTERRUPT_SC_HOST_3 = 7,
-	APEX_INTERRUPT_TOP_LEVEL_0 = 8,
-	APEX_INTERRUPT_TOP_LEVEL_1 = 9,
-	APEX_INTERRUPT_TOP_LEVEL_2 = 10,
-	APEX_INTERRUPT_TOP_LEVEL_3 = 11,
-	APEX_INTERRUPT_FATAL_ERR = 12,
-	APEX_INTERRUPT_COUNT = 13,
-};
-
-/* Interrupt descriptors for Apex */
-static struct gasket_interrupt_desc apex_interrupts[] = {
-	{
-		APEX_INTERRUPT_INSTR_QUEUE,
-		APEX_BAR2_REG_KERNEL_HIB_INSTR_QUEUE_INTVECCTL,
-		UNPACKED,
-	},
-	{
-		APEX_INTERRUPT_INPUT_ACTV_QUEUE,
-		APEX_BAR2_REG_KERNEL_HIB_INPUT_ACTV_QUEUE_INTVECCTL,
-		UNPACKED
-	},
-	{
-		APEX_INTERRUPT_PARAM_QUEUE,
-		APEX_BAR2_REG_KERNEL_HIB_PARAM_QUEUE_INTVECCTL,
-		UNPACKED
-	},
-	{
-		APEX_INTERRUPT_OUTPUT_ACTV_QUEUE,
-		APEX_BAR2_REG_KERNEL_HIB_OUTPUT_ACTV_QUEUE_INTVECCTL,
-		UNPACKED
-	},
-	{
-		APEX_INTERRUPT_SC_HOST_0,
-		APEX_BAR2_REG_KERNEL_HIB_SC_HOST_INTVECCTL,
-		PACK_0
-	},
-	{
-		APEX_INTERRUPT_SC_HOST_1,
-		APEX_BAR2_REG_KERNEL_HIB_SC_HOST_INTVECCTL,
-		PACK_1
-	},
-	{
-		APEX_INTERRUPT_SC_HOST_2,
-		APEX_BAR2_REG_KERNEL_HIB_SC_HOST_INTVECCTL,
-		PACK_2
-	},
-	{
-		APEX_INTERRUPT_SC_HOST_3,
-		APEX_BAR2_REG_KERNEL_HIB_SC_HOST_INTVECCTL,
-		PACK_3
-	},
-	{
-		APEX_INTERRUPT_TOP_LEVEL_0,
-		APEX_BAR2_REG_KERNEL_HIB_TOP_LEVEL_INTVECCTL,
-		PACK_0
-	},
-	{
-		APEX_INTERRUPT_TOP_LEVEL_1,
-		APEX_BAR2_REG_KERNEL_HIB_TOP_LEVEL_INTVECCTL,
-		PACK_1
-	},
-	{
-		APEX_INTERRUPT_TOP_LEVEL_2,
-		APEX_BAR2_REG_KERNEL_HIB_TOP_LEVEL_INTVECCTL,
-		PACK_2
-	},
-	{
-		APEX_INTERRUPT_TOP_LEVEL_3,
-		APEX_BAR2_REG_KERNEL_HIB_TOP_LEVEL_INTVECCTL,
-		PACK_3
-	},
-	{
-		APEX_INTERRUPT_FATAL_ERR,
-		APEX_BAR2_REG_KERNEL_HIB_FATAL_ERR_INTVECCTL,
-		UNPACKED
-	},
-};
-
-/* Allows device to enter power save upon driver close().
 */
-static int allow_power_save = 1;
-
-/* Allows SW based clock gating. */
-static int allow_sw_clock_gating;
-
-/* Allows HW based clock gating. */
-/* Note: this is not mutual exclusive with SW clock gating. */
-static int allow_hw_clock_gating = 1;
-
-/* Act as if only GCB is instantiated. */
-static int bypass_top_level;
-
-module_param(allow_power_save, int, 0644);
-module_param(allow_sw_clock_gating, int, 0644);
-module_param(allow_hw_clock_gating, int, 0644);
-module_param(bypass_top_level, int, 0644);
-
-/* Check the device status registers and return device status ALIVE or DEAD. */
-static int apex_get_status(struct gasket_dev *gasket_dev)
-{
-	/* TODO: Check device status. */
-	return GASKET_STATUS_ALIVE;
-}
-
-/* Enter GCB reset state. */
-static int apex_enter_reset(struct gasket_dev *gasket_dev)
-{
-	if (bypass_top_level)
-		return 0;
-
-	/*
-	 * Software reset:
-	 * Enable sleep mode
-	 *  - Software force GCB idle
-	 *    - Enable GCB idle
-	 */
-	gasket_read_modify_write_64(gasket_dev, APEX_BAR_INDEX,
-				    APEX_BAR2_REG_IDLEGENERATOR_IDLEGEN_IDLEREGISTER,
-				    0x0, 1, 32);
-
-	/*    - Initiate DMA pause */
-	gasket_dev_write_64(gasket_dev, 1, APEX_BAR_INDEX,
-			    APEX_BAR2_REG_USER_HIB_DMA_PAUSE);
-
-	/*    - Wait for DMA pause complete.
 */
-	if (gasket_wait_with_reschedule(gasket_dev, APEX_BAR_INDEX,
-					APEX_BAR2_REG_USER_HIB_DMA_PAUSED, 1, 1,
-					APEX_RESET_DELAY, APEX_RESET_RETRY)) {
-		dev_err(gasket_dev->dev,
-			"DMAs did not quiesce within timeout (%d ms)\n",
-			APEX_RESET_RETRY * APEX_RESET_DELAY);
-		return -ETIMEDOUT;
-	}
-
-	/*    - Enable GCB reset (0x1 to rg_rst_gcb) */
-	gasket_read_modify_write_32(gasket_dev, APEX_BAR_INDEX,
-				    APEX_BAR2_REG_SCU_2, 0x1, 2, 2);
-
-	/*    - Enable GCB clock gate (0x1 to rg_gated_gcb) */
-	gasket_read_modify_write_32(gasket_dev, APEX_BAR_INDEX,
-				    APEX_BAR2_REG_SCU_2, 0x1, 2, 18);
-
-	/*    - Enable GCB memory shut down (0x3 to rg_force_ram_sd) */
-	gasket_read_modify_write_32(gasket_dev, APEX_BAR_INDEX,
-				    APEX_BAR2_REG_SCU_3, 0x3, 2, 14);
-
-	/*    - Wait for RAM shutdown. */
-	if (gasket_wait_with_reschedule(gasket_dev, APEX_BAR_INDEX,
-					APEX_BAR2_REG_SCU_3, BIT(6), BIT(6),
-					APEX_RESET_DELAY, APEX_RESET_RETRY)) {
-		dev_err(gasket_dev->dev,
-			"RAM did not shut down within timeout (%d ms)\n",
-			APEX_RESET_RETRY * APEX_RESET_DELAY);
-		return -ETIMEDOUT;
-	}
-
-	return 0;
-}
-
-/* Quit GCB reset state. */
-static int apex_quit_reset(struct gasket_dev *gasket_dev)
-{
-	u32 val0, val1;
-
-	if (bypass_top_level)
-		return 0;
-
-	/*
-	 * Disable sleep mode:
-	 *  - Disable GCB memory shut down:
-	 *    - b00: Not forced (HW controlled)
-	 *    - b1x: Force disable
-	 */
-	gasket_read_modify_write_32(gasket_dev, APEX_BAR_INDEX,
-				    APEX_BAR2_REG_SCU_3, 0x0, 2, 14);
-
-	/*
-	 *  - Disable software clock gate:
-	 *    - b00: Not forced (HW controlled)
-	 *    - b1x: Force disable
-	 */
-	gasket_read_modify_write_32(gasket_dev, APEX_BAR_INDEX,
-				    APEX_BAR2_REG_SCU_2, 0x0, 2, 18);
-
-	/*
-	 *  - Disable GCB reset (rg_rst_gcb):
-	 *    - b00: Not forced (HW controlled)
-	 *    - b1x: Force disable = Force not reset
-	 */
-	gasket_read_modify_write_32(gasket_dev, APEX_BAR_INDEX,
-				    APEX_BAR2_REG_SCU_2, 0x2, 2, 2);
-
-	/*    - Wait for RAM enable.
 */
-	if (gasket_wait_with_reschedule(gasket_dev, APEX_BAR_INDEX,
-					APEX_BAR2_REG_SCU_3, BIT(6), 0,
-					APEX_RESET_DELAY, APEX_RESET_RETRY)) {
-		dev_err(gasket_dev->dev,
-			"RAM did not enable within timeout (%d ms)\n",
-			APEX_RESET_RETRY * APEX_RESET_DELAY);
-		return -ETIMEDOUT;
-	}
-
-	/*    - Wait for reset complete. */
-	if (gasket_wait_with_reschedule(gasket_dev, APEX_BAR_INDEX,
-					APEX_BAR2_REG_SCU_3,
-					SCU3_CUR_RST_GCB_BIT_MASK, 0,
-					APEX_RESET_DELAY, APEX_RESET_RETRY)) {
-		dev_err(gasket_dev->dev,
-			"GCB did not leave reset within timeout (%d ms)\n",
-			APEX_RESET_RETRY * APEX_RESET_DELAY);
-		return -ETIMEDOUT;
-	}
-
-	if (!allow_hw_clock_gating) {
-		val0 = gasket_dev_read_32(gasket_dev, APEX_BAR_INDEX,
-					  APEX_BAR2_REG_SCU_3);
-		/* Inactive and Sleep mode are disabled. */
-		gasket_read_modify_write_32(gasket_dev,
-					    APEX_BAR_INDEX,
-					    APEX_BAR2_REG_SCU_3, 0x3,
-					    SCU3_RG_PWR_STATE_OVR_MASK_WIDTH,
-					    SCU3_RG_PWR_STATE_OVR_BIT_OFFSET);
-		val1 = gasket_dev_read_32(gasket_dev, APEX_BAR_INDEX,
-					  APEX_BAR2_REG_SCU_3);
-		dev_dbg(gasket_dev->dev,
-			"Disallow HW clock gating 0x%x -> 0x%x\n", val0, val1);
-	} else {
-		val0 = gasket_dev_read_32(gasket_dev, APEX_BAR_INDEX,
-					  APEX_BAR2_REG_SCU_3);
-		/* Inactive mode enabled - Sleep mode disabled. */
-		gasket_read_modify_write_32(gasket_dev, APEX_BAR_INDEX,
-					    APEX_BAR2_REG_SCU_3, 2,
-					    SCU3_RG_PWR_STATE_OVR_MASK_WIDTH,
-					    SCU3_RG_PWR_STATE_OVR_BIT_OFFSET);
-		val1 = gasket_dev_read_32(gasket_dev, APEX_BAR_INDEX,
-					  APEX_BAR2_REG_SCU_3);
-		dev_dbg(gasket_dev->dev, "Allow HW clock gating 0x%x -> 0x%x\n",
-			val0, val1);
-	}
-
-	return 0;
-}
-
-/* Reset the Apex hardware. Called on final close via device_close_cb.
 */
-static int apex_device_cleanup(struct gasket_dev *gasket_dev)
-{
-	u64 scalar_error;
-	u64 hib_error;
-	int ret = 0;
-
-	hib_error = gasket_dev_read_64(gasket_dev, APEX_BAR_INDEX,
-				       APEX_BAR2_REG_USER_HIB_ERROR_STATUS);
-	scalar_error = gasket_dev_read_64(gasket_dev, APEX_BAR_INDEX,
-					  APEX_BAR2_REG_SCALAR_CORE_ERROR_STATUS);
-
-	dev_dbg(gasket_dev->dev,
-		"%s 0x%p hib_error 0x%llx scalar_error 0x%llx\n",
-		__func__, gasket_dev, hib_error, scalar_error);
-
-	if (allow_power_save)
-		ret = apex_enter_reset(gasket_dev);
-
-	return ret;
-}
-
-/* Determine if GCB is in reset state. */
-static bool is_gcb_in_reset(struct gasket_dev *gasket_dev)
-{
-	u32 val = gasket_dev_read_32(gasket_dev, APEX_BAR_INDEX,
-				     APEX_BAR2_REG_SCU_3);
-
-	/* Masks rg_rst_gcb bit of SCU_CTRL_2 */
-	return (val & SCU3_CUR_RST_GCB_BIT_MASK);
-}
-
-/* Reset the hardware, then quit reset. Called on device open. */
-static int apex_reset(struct gasket_dev *gasket_dev)
-{
-	int ret;
-
-	if (bypass_top_level)
-		return 0;
-
-	if (!is_gcb_in_reset(gasket_dev)) {
-		/* We are not in reset - toggle the reset bit so as to force
-		 * re-init of custom block
-		 */
-		dev_dbg(gasket_dev->dev, "%s: toggle reset\n", __func__);
-
-		ret = apex_enter_reset(gasket_dev);
-		if (ret)
-			return ret;
-	}
-	return apex_quit_reset(gasket_dev);
-}
-
-/*
- * Check permissions for Apex ioctls.
- * Returns true if the current user may execute this ioctl, and false otherwise.
- */
-static bool apex_ioctl_check_permissions(struct file *filp, uint cmd)
-{
-	return !!(filp->f_mode & FMODE_WRITE);
-}
-
-/* Gates or un-gates Apex clock. */
-static long apex_clock_gating(struct gasket_dev *gasket_dev,
-			      struct apex_gate_clock_ioctl __user *argp)
-{
-	struct apex_gate_clock_ioctl ibuf;
-
-	if (bypass_top_level || !allow_sw_clock_gating)
-		return 0;
-
-	if (copy_from_user(&ibuf, argp, sizeof(ibuf)))
-		return -EFAULT;
-
-	dev_dbg(gasket_dev->dev, "%s %llu\n", __func__, ibuf.enable);
-
-	if (ibuf.enable) {
-		/* Quiesce AXI, gate GCB clock.
 */
-		gasket_read_modify_write_32(gasket_dev, APEX_BAR_INDEX,
-					    APEX_BAR2_REG_AXI_QUIESCE, 0x1, 1,
-					    16);
-		gasket_read_modify_write_32(gasket_dev, APEX_BAR_INDEX,
-					    APEX_BAR2_REG_GCB_CLOCK_GATE, 0x1,
-					    2, 18);
-	} else {
-		/* Un-gate GCB clock, un-quiesce AXI. */
-		gasket_read_modify_write_32(gasket_dev, APEX_BAR_INDEX,
-					    APEX_BAR2_REG_GCB_CLOCK_GATE, 0x0,
-					    2, 18);
-		gasket_read_modify_write_32(gasket_dev, APEX_BAR_INDEX,
-					    APEX_BAR2_REG_AXI_QUIESCE, 0x0, 1,
-					    16);
-	}
-	return 0;
-}
-
-/* Apex-specific ioctl handler. */
-static long apex_ioctl(struct file *filp, uint cmd, void __user *argp)
-{
-	struct gasket_dev *gasket_dev = filp->private_data;
-
-	if (!apex_ioctl_check_permissions(filp, cmd))
-		return -EPERM;
-
-	switch (cmd) {
-	case APEX_IOCTL_GATE_CLOCK:
-		return apex_clock_gating(gasket_dev, argp);
-	default:
-		return -ENOTTY; /* unknown command */
-	}
-}
-
-/* Display driver sysfs entries. */
-static ssize_t sysfs_show(struct device *device, struct device_attribute *attr,
-			  char *buf)
-{
-	int ret;
-	struct gasket_dev *gasket_dev;
-	struct gasket_sysfs_attribute *gasket_attr;
-	enum sysfs_attribute_type type;
-	struct gasket_page_table *gpt;
-	uint val;
-
-	gasket_dev = gasket_sysfs_get_device_data(device);
-	if (!gasket_dev) {
-		dev_err(device, "No Apex device sysfs mapping found\n");
-		return -ENODEV;
-	}
-
-	gasket_attr = gasket_sysfs_get_attr(device, attr);
-	if (!gasket_attr) {
-		dev_err(device, "No Apex device sysfs attr data found\n");
-		gasket_sysfs_put_device_data(device, gasket_dev);
-		return -ENODEV;
-	}
-
-	type = (enum sysfs_attribute_type)gasket_attr->data.attr_type;
-	gpt = gasket_dev->page_table[0];
-	switch (type) {
-	case ATTR_KERNEL_HIB_PAGE_TABLE_SIZE:
-		val = gasket_page_table_num_entries(gpt);
-		break;
-	case ATTR_KERNEL_HIB_SIMPLE_PAGE_TABLE_SIZE:
-		val = gasket_page_table_num_simple_entries(gpt);
-		break;
-	case ATTR_KERNEL_HIB_NUM_ACTIVE_PAGES:
-		val = gasket_page_table_num_active_pages(gpt);
-		break;
-	default:
-		dev_dbg(gasket_dev->dev, "Unknown attribute: %s\n",
-			attr->attr.name);
-		ret = 0;
-		goto exit;
-	}
-	ret = scnprintf(buf, PAGE_SIZE, "%u\n", val);
-exit:
-	gasket_sysfs_put_attr(device, gasket_attr);
-	gasket_sysfs_put_device_data(device, gasket_dev);
-	return ret;
-}
-
-static struct gasket_sysfs_attribute apex_sysfs_attrs[] = {
-	GASKET_SYSFS_RO(node_0_page_table_entries, sysfs_show,
-			ATTR_KERNEL_HIB_PAGE_TABLE_SIZE),
-	GASKET_SYSFS_RO(node_0_simple_page_table_entries, sysfs_show,
-			ATTR_KERNEL_HIB_SIMPLE_PAGE_TABLE_SIZE),
-	GASKET_SYSFS_RO(node_0_num_mapped_pages, sysfs_show,
-			ATTR_KERNEL_HIB_NUM_ACTIVE_PAGES),
-	GASKET_END_OF_ATTR_ARRAY
-};
-
-/* On device open, perform a core reinit reset. */
-static int apex_device_open_cb(struct gasket_dev *gasket_dev)
-{
-	return gasket_reset_nolock(gasket_dev);
-}
-
-static const struct pci_device_id apex_pci_ids[] = {
-	{ PCI_DEVICE(APEX_PCI_VENDOR_ID, APEX_PCI_DEVICE_ID) }, { 0 }
-};
-
-static int apex_pci_probe(struct pci_dev *pci_dev,
-			  const struct pci_device_id *id)
-{
-	int ret;
-	ulong page_table_ready, msix_table_ready;
-	int retries = 0;
-	struct gasket_dev *gasket_dev;
-
-	ret = pci_enable_device(pci_dev);
-	if (ret) {
-		dev_err(&pci_dev->dev, "error enabling PCI device\n");
-		return ret;
-	}
-
-	pci_set_master(pci_dev);
-
-	ret = gasket_pci_add_device(pci_dev, &gasket_dev);
-	if (ret) {
-		dev_err(&pci_dev->dev, "error adding gasket device\n");
-		pci_disable_device(pci_dev);
-		return ret;
-	}
-
-	pci_set_drvdata(pci_dev, gasket_dev);
-	apex_reset(gasket_dev);
-
-	while (retries < APEX_RESET_RETRY) {
-		page_table_ready =
-			gasket_dev_read_64(gasket_dev, APEX_BAR_INDEX,
-					   APEX_BAR2_REG_KERNEL_HIB_PAGE_TABLE_INIT);
-		msix_table_ready =
-			gasket_dev_read_64(gasket_dev, APEX_BAR_INDEX,
-					   APEX_BAR2_REG_KERNEL_HIB_MSIX_TABLE_INIT);
-		if (page_table_ready && msix_table_ready)
-			break;
-		schedule_timeout(msecs_to_jiffies(APEX_RESET_DELAY));
-		retries++;
-	}
-
-	if (retries == APEX_RESET_RETRY) {
-		if (!page_table_ready)
-			dev_err(gasket_dev->dev, "Page table init timed out\n");
-		if (!msix_table_ready)
-			dev_err(gasket_dev->dev, "MSI-X table init timed out\n");
-		ret = -ETIMEDOUT;
-		goto remove_device;
-	}
-
-	ret = gasket_sysfs_create_entries(gasket_dev->dev_info.device,
-					  apex_sysfs_attrs);
-	if (ret)
-		dev_err(&pci_dev->dev, "error creating device sysfs entries\n");
-
-	ret = gasket_enable_device(gasket_dev);
-	if (ret) {
-		dev_err(&pci_dev->dev, "error enabling gasket device\n");
-		goto remove_device;
-	}
-
-	/* Place device in low power mode until opened */
-	if (allow_power_save)
-		apex_enter_reset(gasket_dev);
-
-	return 0;
-
-remove_device:
-	gasket_pci_remove_device(pci_dev);
-	pci_disable_device(pci_dev);
-	return ret;
-}
-
-static void apex_pci_remove(struct pci_dev *pci_dev)
-{
-	struct gasket_dev *gasket_dev = pci_get_drvdata(pci_dev);
-
-	gasket_disable_device(gasket_dev);
-	gasket_pci_remove_device(pci_dev);
-	pci_disable_device(pci_dev);
-}
-
-static const struct gasket_driver_desc apex_desc = {
-	.name = "apex",
-	.driver_version = APEX_DRIVER_VERSION,
-	.major = 120,
-	.minor = 0,
-	.module = THIS_MODULE,
-	.pci_id_table = apex_pci_ids,
-
-	.num_page_tables = NUM_NODES,
-	.page_table_bar_index = APEX_BAR_INDEX,
-	.page_table_configs = apex_page_table_configs,
-	.page_table_extended_bit = APEX_EXTENDED_SHIFT,
-
-	.bar_descriptions = {
-		GASKET_UNUSED_BAR,
-		GASKET_UNUSED_BAR,
-		{ APEX_BAR_BYTES, (VM_WRITE | VM_READ), APEX_BAR_OFFSET,
-		  NUM_REGIONS, mappable_regions, PCI_BAR },
-		GASKET_UNUSED_BAR,
-		GASKET_UNUSED_BAR,
-		GASKET_UNUSED_BAR,
-	},
-	.coherent_buffer_description = {
-		APEX_CH_MEM_BYTES,
-		(VM_WRITE | VM_READ),
-		APEX_CM_OFFSET,
-	},
-	.interrupt_type = PCI_MSIX,
-	.interrupt_bar_index = APEX_BAR_INDEX,
-	.num_interrupts = APEX_INTERRUPT_COUNT,
-	.interrupts = apex_interrupts,
-	.interrupt_pack_width = 7,
-
-	.device_open_cb = apex_device_open_cb,
-	.device_close_cb = apex_device_cleanup,
-
-	.ioctl_handler_cb = apex_ioctl,
-	.device_status_cb =
	apex_get_status,
-	.hardware_revision_cb = NULL,
-	.device_reset_cb = apex_reset,
-};
-
-static struct pci_driver apex_pci_driver = {
-	.name = "apex",
-	.probe = apex_pci_probe,
-	.remove = apex_pci_remove,
-	.id_table = apex_pci_ids,
-};
-
-static int __init apex_init(void)
-{
-	int ret;
-
-	ret = gasket_register_device(&apex_desc);
-	if (ret)
-		return ret;
-	ret = pci_register_driver(&apex_pci_driver);
-	if (ret)
-		gasket_unregister_device(&apex_desc);
-	return ret;
-}
-
-static void apex_exit(void)
-{
-	pci_unregister_driver(&apex_pci_driver);
-	gasket_unregister_device(&apex_desc);
-}
-MODULE_DESCRIPTION("Google Apex driver");
-MODULE_VERSION(APEX_DRIVER_VERSION);
-MODULE_LICENSE("GPL v2");
-MODULE_AUTHOR("John Joseph <jnjoseph@google.com>");
-MODULE_DEVICE_TABLE(pci, apex_pci_ids);
-module_init(apex_init);
-module_exit(apex_exit);
diff --git a/drivers/staging/gasket/gasket.h b/drivers/staging/gasket/gasket.h
--- a/drivers/staging/gasket/gasket.h
+++ /dev/null
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Common Gasket device kernel and user space declarations.
- *
- * Copyright (C) 2018 Google, Inc.
- */
-#ifndef __GASKET_H__
-#define __GASKET_H__
-
-#include <linux/ioctl.h>
-#include <linux/types.h>
-
-/* ioctl structure declarations */
-
-/* Ioctl structures are padded to a multiple of 64 bits */
-/* and padded to put 64 bit values on 64 bit boundaries. */
-/* Unsigned 64 bit integers are used to hold pointers. */
-/* This helps compatibility between 32 and 64 bits. */
-
-/*
- * Common structure for ioctls associating an eventfd with a device interrupt,
- * when using the Gasket interrupt module.
- */
-struct gasket_interrupt_eventfd {
-	u64 interrupt;
-	u64 event_fd;
-};
-
-/*
- * Common structure for ioctls mapping and unmapping buffers when using the
- * Gasket page_table module.
- */
-struct gasket_page_table_ioctl {
-	u64 page_table_index;
-	u64 size;
-	u64 host_address;
-	u64 device_address;
-};
-
-/*
- * Common structure for ioctls mapping and unmapping buffers when using the
- * Gasket page_table module.
- * dma_address: phys addr start of coherent memory, allocated by kernel
- */
-struct gasket_coherent_alloc_config_ioctl {
-	u64 page_table_index;
-	u64 enable;
-	u64 size;
-	u64 dma_address;
-};
-
-/* Base number for all Gasket-common IOCTLs */
-#define GASKET_IOCTL_BASE 0xDC
-
-/* Reset the device. */
-#define GASKET_IOCTL_RESET _IO(GASKET_IOCTL_BASE, 0)
-
-/* Associate the specified [event]fd with the specified interrupt. */
-#define GASKET_IOCTL_SET_EVENTFD \
-	_IOW(GASKET_IOCTL_BASE, 1, struct gasket_interrupt_eventfd)
-
-/*
- * Clears any eventfd associated with the specified interrupt. The (ulong)
- * argument is the interrupt number to clear.
- */
-#define GASKET_IOCTL_CLEAR_EVENTFD _IOW(GASKET_IOCTL_BASE, 2, unsigned long)
-
-/*
- * [Loopbacks only] Requests that the loopback device send the specified
- * interrupt to the host. The (ulong) argument is the number of the interrupt to
- * send.
- */
-#define GASKET_IOCTL_LOOPBACK_INTERRUPT \
-	_IOW(GASKET_IOCTL_BASE, 3, unsigned long)
-
-/* Queries the kernel for the number of page tables supported by the device. */
-#define GASKET_IOCTL_NUMBER_PAGE_TABLES _IOR(GASKET_IOCTL_BASE, 4, u64)
-
-/*
- * Queries the kernel for the maximum size of the page table. Only the size and
- * page_table_index fields are used from the struct gasket_page_table_ioctl.
- */
-#define GASKET_IOCTL_PAGE_TABLE_SIZE \
-	_IOWR(GASKET_IOCTL_BASE, 5, struct gasket_page_table_ioctl)
-
-/*
- * Queries the kernel for the current simple page table size. Only the size and
- * page_table_index fields are used from the struct gasket_page_table_ioctl.
- */
-#define GASKET_IOCTL_SIMPLE_PAGE_TABLE_SIZE \
-	_IOWR(GASKET_IOCTL_BASE, 6, struct gasket_page_table_ioctl)
-
-/*
- * Tells the kernel to change the split between the number of simple and
- * extended entries in the given page table. Only the size and page_table_index
- * fields are used from the struct gasket_page_table_ioctl.
- */
-#define GASKET_IOCTL_PARTITION_PAGE_TABLE \
-	_IOW(GASKET_IOCTL_BASE, 7, struct gasket_page_table_ioctl)
-
-/*
- * Tells the kernel to map size bytes at host_address to device_address in
- * page_table_index page table.
- */
-#define GASKET_IOCTL_MAP_BUFFER \
-	_IOW(GASKET_IOCTL_BASE, 8, struct gasket_page_table_ioctl)
-
-/*
- * Tells the kernel to unmap size bytes at host_address from device_address in
- * page_table_index page table.
- */
-#define GASKET_IOCTL_UNMAP_BUFFER \
-	_IOW(GASKET_IOCTL_BASE, 9, struct gasket_page_table_ioctl)
-
-/* Clear the interrupt counts stored for this device. */
-#define GASKET_IOCTL_CLEAR_INTERRUPT_COUNTS _IO(GASKET_IOCTL_BASE, 10)
-
-/* Enable/Disable and configure the coherent allocator. */
-#define GASKET_IOCTL_CONFIG_COHERENT_ALLOCATOR \
-	_IOWR(GASKET_IOCTL_BASE, 11, struct gasket_coherent_alloc_config_ioctl)
-
-#endif /* __GASKET_H__ */
diff --git a/drivers/staging/gasket/gasket_constants.h b/drivers/staging/gasket/gasket_constants.h
--- a/drivers/staging/gasket/gasket_constants.h
+++ /dev/null
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright (C) 2018 Google, Inc. */
-#ifndef __GASKET_CONSTANTS_H__
-#define __GASKET_CONSTANTS_H__
-
-#define GASKET_FRAMEWORK_VERSION "1.1.2"
-
-/*
- * The maximum number of simultaneous device types supported by the framework.
- */
-#define GASKET_FRAMEWORK_DESC_MAX 2
-
-/* The maximum devices per each type. */
-#define GASKET_DEV_MAX 256
-
-/* The number of supported Gasket page tables per device. */
-#define GASKET_MAX_NUM_PAGE_TABLES 1
-
-/* Maximum length of device names (driver name + minor number suffix + NULL).
 */
-#define GASKET_NAME_MAX 32
-
-/* Device status enumeration. */
-enum gasket_status {
-	/*
-	 * A device is DEAD if it has not been initialized or has had an error.
-	 */
-	GASKET_STATUS_DEAD = 0,
-	/*
-	 * A device is LAMED if the hardware is healthy but the kernel was
-	 * unable to enable some functionality (e.g. interrupts).
-	 */
-	GASKET_STATUS_LAMED,
-
-	/* A device is ALIVE if it is ready for operation. */
-	GASKET_STATUS_ALIVE,
-
-	/*
-	 * This status is set when the driver is exiting and waiting for all
-	 * handles to be closed.
-	 */
-	GASKET_STATUS_DRIVER_EXIT,
-};
-
-#endif
diff --git a/drivers/staging/gasket/gasket_core.c b/drivers/staging/gasket/gasket_core.c
--- a/drivers/staging/gasket/gasket_core.c
+++ /dev/null
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Gasket generic driver framework. This file contains the implementation
- * for the Gasket generic driver framework - the functionality that is common
- * across Gasket devices.
- *
- * Copyright (C) 2018 Google, Inc.
- */
-
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
-#include "gasket_core.h"
-
-#include "gasket_interrupt.h"
-#include "gasket_ioctl.h"
-#include "gasket_page_table.h"
-#include "gasket_sysfs.h"
-
-#include <linux/capability.h>
-#include <linux/compiler.h>
-#include <linux/delay.h>
-#include <linux/device.h>
-#include <linux/fs.h>
-#include <linux/init.h>
-#include <linux/of.h>
-#include <linux/pid_namespace.h>
-#include <linux/printk.h>
-#include <linux/sched.h>
-
-#ifdef GASKET_KERNEL_TRACE_SUPPORT
-#define CREATE_TRACE_POINTS
-#include <trace/events/gasket_mmap.h>
-#else
-#define trace_gasket_mmap_exit(x)
-#define trace_gasket_mmap_entry(x, ...)
-#endif
-
-/*
- * "Private" members of gasket_driver_desc.
- *
- * Contains internal per-device type tracking data, i.e., data not appropriate
- * as part of the public interface for the generic framework.
- */
-struct gasket_internal_desc {
-	/* Device-specific-driver-provided configuration information.
 */
-	const struct gasket_driver_desc *driver_desc;
-
-	/* Protects access to per-driver data (i.e. this structure). */
-	struct mutex mutex;
-
-	/* Kernel-internal device class. */
-	struct class *class;
-
-	/* Instantiated / present devices of this type. */
-	struct gasket_dev *devs[GASKET_DEV_MAX];
-};
-
-/* do_map_region() needs be able to return more than just true/false. */
-enum do_map_region_status {
-	/* The region was successfully mapped. */
-	DO_MAP_REGION_SUCCESS,
-
-	/* Attempted to map region and failed. */
-	DO_MAP_REGION_FAILURE,
-
-	/* The requested region to map was not part of a mappable region. */
-	DO_MAP_REGION_INVALID,
-};
-
-/* Global data definitions. */
-/* Mutex - only for framework-wide data. Other data should be protected by
- * finer-grained locks.
- */
-static DEFINE_MUTEX(g_mutex);
-
-/* List of all registered device descriptions & their supporting data. */
-static struct gasket_internal_desc g_descs[GASKET_FRAMEWORK_DESC_MAX];
-
-/* Mapping of statuses to human-readable strings. Must end with {0,NULL}. */
-static const struct gasket_num_name gasket_status_name_table[] = {
-	{ GASKET_STATUS_DEAD, "DEAD" },
-	{ GASKET_STATUS_ALIVE, "ALIVE" },
-	{ GASKET_STATUS_LAMED, "LAMED" },
-	{ GASKET_STATUS_DRIVER_EXIT, "DRIVER_EXITING" },
-	{ 0, NULL },
-};
-
-/* Enumeration of the automatic Gasket framework sysfs nodes. */
-enum gasket_sysfs_attribute_type {
-	ATTR_BAR_OFFSETS,
-	ATTR_BAR_SIZES,
-	ATTR_DRIVER_VERSION,
-	ATTR_FRAMEWORK_VERSION,
-	ATTR_DEVICE_TYPE,
-	ATTR_HARDWARE_REVISION,
-	ATTR_PCI_ADDRESS,
-	ATTR_STATUS,
-	ATTR_IS_DEVICE_OWNED,
-	ATTR_DEVICE_OWNER,
-	ATTR_WRITE_OPEN_COUNT,
-	ATTR_RESET_COUNT,
-	ATTR_USER_MEM_RANGES
-};
-
-/* Perform a standard Gasket callback.
 */
-static inline int
-check_and_invoke_callback(struct gasket_dev *gasket_dev,
-			  int (*cb_function)(struct gasket_dev *))
-{
-	int ret = 0;
-
-	if (cb_function) {
-		mutex_lock(&gasket_dev->mutex);
-		ret = cb_function(gasket_dev);
-		mutex_unlock(&gasket_dev->mutex);
-	}
-	return ret;
-}
-
-/* Perform a standard Gasket callback without grabbing gasket_dev->mutex. */
-static inline int
-gasket_check_and_invoke_callback_nolock(struct gasket_dev *gasket_dev,
-					int (*cb_function)(struct gasket_dev *))
-{
-	int ret = 0;
-
-	if (cb_function)
-		ret = cb_function(gasket_dev);
-	return ret;
-}
-
-/*
- * Return nonzero if the gasket_cdev_info is owned by the current thread group
- * ID.
- */
-static int gasket_owned_by_current_tgid(struct gasket_cdev_info *info)
-{
-	return (info->ownership.is_owned &&
-		(info->ownership.owner == current->tgid));
-}
-
-/*
- * Find the next free gasket_internal_dev slot.
- *
- * Returns the located slot number on success or a negative number on failure.
- */
-static int gasket_find_dev_slot(struct gasket_internal_desc *internal_desc,
-				const char *kobj_name)
-{
-	int i;
-
-	mutex_lock(&internal_desc->mutex);
-
-	/* Search for a previous instance of this device. */
-	for (i = 0; i < GASKET_DEV_MAX; i++) {
-		if (internal_desc->devs[i] &&
-		    strcmp(internal_desc->devs[i]->kobj_name, kobj_name) == 0) {
-			pr_err("Duplicate device %s\n", kobj_name);
-			mutex_unlock(&internal_desc->mutex);
-			return -EBUSY;
-		}
-	}
-
-	/* Find a free device slot. */
-	for (i = 0; i < GASKET_DEV_MAX; i++) {
-		if (!internal_desc->devs[i])
-			break;
-	}
-
-	if (i == GASKET_DEV_MAX) {
-		pr_err("Too many registered devices; max %d\n", GASKET_DEV_MAX);
-		mutex_unlock(&internal_desc->mutex);
-		return -EBUSY;
-	}
-
-	mutex_unlock(&internal_desc->mutex);
-	return i;
-}
-
-/*
- * Allocate and initialize a Gasket device structure, add the device to the
- * device list.
- *
- * Returns 0 if successful, a negative error code otherwise.
- */
-static int gasket_alloc_dev(struct gasket_internal_desc *internal_desc,
-			    struct device *parent, struct gasket_dev **pdev)
-{
-	int dev_idx;
-	const struct gasket_driver_desc *driver_desc =
-		internal_desc->driver_desc;
-	struct gasket_dev *gasket_dev;
-	struct gasket_cdev_info *dev_info;
-	const char *parent_name = dev_name(parent);
-
-	pr_debug("Allocating a Gasket device, parent %s.\n", parent_name);
-
-	*pdev = NULL;
-
-	dev_idx = gasket_find_dev_slot(internal_desc, parent_name);
-	if (dev_idx < 0)
-		return dev_idx;
-
-	gasket_dev = *pdev = kzalloc(sizeof(*gasket_dev), GFP_KERNEL);
-	if (!gasket_dev) {
-		pr_err("no memory for device, parent %s\n", parent_name);
-		return -ENOMEM;
-	}
-	internal_desc->devs[dev_idx] = gasket_dev;
-
-	mutex_init(&gasket_dev->mutex);
-
-	gasket_dev->internal_desc = internal_desc;
-	gasket_dev->dev_idx = dev_idx;
-	snprintf(gasket_dev->kobj_name, GASKET_NAME_MAX, "%s", parent_name);
-	gasket_dev->dev = get_device(parent);
-	/* gasket_bar_data is uninitialized. */
-	gasket_dev->num_page_tables = driver_desc->num_page_tables;
-	/* max_page_table_size and *page table are uninit'ed */
-	/* interrupt_data is not initialized. */
-	/* status is 0, or GASKET_STATUS_DEAD */
-
-	dev_info = &gasket_dev->dev_info;
-	snprintf(dev_info->name, GASKET_NAME_MAX, "%s_%u", driver_desc->name,
-		 gasket_dev->dev_idx);
-	dev_info->devt =
-		MKDEV(driver_desc->major, driver_desc->minor +
-		      gasket_dev->dev_idx);
-	dev_info->device =
-		device_create(internal_desc->class, parent, dev_info->devt,
-			      gasket_dev, dev_info->name);
-
-	/* cdev has not yet been added; cdev_added is 0 */
-	dev_info->gasket_dev_ptr = gasket_dev;
-	/* ownership is all 0, indicating no owner or opens. */
-
-	return 0;
-}
-
-/* Free a Gasket device.
 */
-static void gasket_free_dev(struct gasket_dev *gasket_dev)
-{
-	struct gasket_internal_desc *internal_desc = gasket_dev->internal_desc;
-
-	mutex_lock(&internal_desc->mutex);
-	internal_desc->devs[gasket_dev->dev_idx] = NULL;
-	mutex_unlock(&internal_desc->mutex);
-	put_device(gasket_dev->dev);
-	kfree(gasket_dev);
-}
-
-/*
- * Maps the specified bar into kernel space.
- *
- * Returns 0 on success, a negative error code otherwise.
- * A zero-sized BAR will not be mapped, but is not an error.
- */
-static int gasket_map_pci_bar(struct gasket_dev *gasket_dev, int bar_num)
-{
-	struct gasket_internal_desc *internal_desc = gasket_dev->internal_desc;
-	const struct gasket_driver_desc *driver_desc =
-		internal_desc->driver_desc;
-	ulong desc_bytes = driver_desc->bar_descriptions[bar_num].size;
-	struct gasket_bar_data *data;
-	int ret;
-
-	if (desc_bytes == 0)
-		return 0;
-
-	if (driver_desc->bar_descriptions[bar_num].type != PCI_BAR) {
-		/* not PCI: skip this entry */
-		return 0;
-	}
-
-	data = &gasket_dev->bar_data[bar_num];
-
-	/*
-	 * pci_resource_start and pci_resource_len return a "resource_size_t",
-	 * which is safely castable to ulong (which itself is the arg to
-	 * request_mem_region).
-	 */
-	data->phys_base =
-		(ulong)pci_resource_start(gasket_dev->pci_dev, bar_num);
-	if (!data->phys_base) {
-		dev_err(gasket_dev->dev, "Cannot get BAR%u base address\n",
-			bar_num);
-		return -EINVAL;
-	}
-
-	data->length_bytes =
-		(ulong)pci_resource_len(gasket_dev->pci_dev, bar_num);
-	if (data->length_bytes < desc_bytes) {
-		dev_err(gasket_dev->dev,
-			"PCI BAR %u space is too small: %lu; expected >= %lu\n",
-			bar_num, data->length_bytes, desc_bytes);
-		return -ENOMEM;
-	}
-
-	if (!request_mem_region(data->phys_base, data->length_bytes,
-				gasket_dev->dev_info.name)) {
-		dev_err(gasket_dev->dev,
-			"Cannot get BAR %d memory region %p\n",
-			bar_num, &gasket_dev->pci_dev->resource[bar_num]);
-		return -EINVAL;
-	}
-
-	data->virt_base = ioremap(data->phys_base, data->length_bytes);
-	if (!data->virt_base) {
-		dev_err(gasket_dev->dev,
-			"Cannot remap BAR %d memory region %p\n",
-			bar_num, &gasket_dev->pci_dev->resource[bar_num]);
-		ret = -ENOMEM;
-		goto fail;
-	}
-
-	dma_set_mask(&gasket_dev->pci_dev->dev, DMA_BIT_MASK(64));
-	dma_set_coherent_mask(&gasket_dev->pci_dev->dev, DMA_BIT_MASK(64));
-
-	return 0;
-
-fail:
-	iounmap(data->virt_base);
-	release_mem_region(data->phys_base, data->length_bytes);
-	return ret;
-}
-
-/*
- * Releases PCI BAR mapping.
- *
- * A zero-sized or not-mapped BAR will not be unmapped, but is not an error.
- */
-static void gasket_unmap_pci_bar(struct gasket_dev *dev, int bar_num)
-{
-	ulong base, bytes;
-	struct gasket_internal_desc *internal_desc = dev->internal_desc;
-	const struct gasket_driver_desc *driver_desc =
-		internal_desc->driver_desc;
-
-	if (driver_desc->bar_descriptions[bar_num].size == 0 ||
-	    !dev->bar_data[bar_num].virt_base)
-		return;
-
-	if (driver_desc->bar_descriptions[bar_num].type != PCI_BAR)
-		return;
-
-	iounmap(dev->bar_data[bar_num].virt_base);
-	dev->bar_data[bar_num].virt_base = NULL;
-
-	base = pci_resource_start(dev->pci_dev, bar_num);
-	if (!base) {
-		dev_err(dev->dev, "cannot get PCI BAR%u base address\n",
-			bar_num);
-		return;
-	}
-
-	bytes = pci_resource_len(dev->pci_dev, bar_num);
-	release_mem_region(base, bytes);
-}
-
-/*
- * Setup PCI memory mapping for the specified device.
- *
- * Reads the BAR registers and sets up pointers to the device's memory mapped
- * IO space.
- *
- * Returns 0 on success and a negative value otherwise.
- */
-static int gasket_setup_pci(struct pci_dev *pci_dev,
-			    struct gasket_dev *gasket_dev)
-{
-	int i, mapped_bars, ret;
-
-	for (i = 0; i < PCI_STD_NUM_BARS; i++) {
-		ret = gasket_map_pci_bar(gasket_dev, i);
-		if (ret) {
-			mapped_bars = i;
-			goto fail;
-		}
-	}
-
-	return 0;
-
-fail:
-	for (i = 0; i < mapped_bars; i++)
-		gasket_unmap_pci_bar(gasket_dev, i);
-
-	return -ENOMEM;
-}
-
-/* Unmaps memory for the specified device. */
-static void gasket_cleanup_pci(struct gasket_dev *gasket_dev)
-{
-	int i;
-
-	for (i = 0; i < PCI_STD_NUM_BARS; i++)
-		gasket_unmap_pci_bar(gasket_dev, i);
-}
-
-/* Determine the health of the Gasket device.
*/ -static int gasket_get_hw_status(struct gasket_dev *gasket_dev) -{ - int status; - int i; - const struct gasket_driver_desc *driver_desc = - gasket_dev->internal_desc->driver_desc; - - status = gasket_check_and_invoke_callback_nolock(gasket_dev, - driver_desc->device_status_cb); - if (status != gasket_status_alive) { - dev_dbg(gasket_dev->dev, "hardware reported status %d. ", - status); - return status; - } - - status = gasket_interrupt_system_status(gasket_dev); - if (status != gasket_status_alive) { - dev_dbg(gasket_dev->dev, - "interrupt system reported status %d. ", status); - return status; - } - - for (i = 0; i < driver_desc->num_page_tables; ++i) { - status = gasket_page_table_system_status(gasket_dev->page_table[i]); - if (status != gasket_status_alive) { - dev_dbg(gasket_dev->dev, - "page table %d reported status %d. ", - i, status); - return status; - } - } - - return gasket_status_alive; -} - -static ssize_t -gasket_write_mappable_regions(char *buf, - const struct gasket_driver_desc *driver_desc, - int bar_index) -{ - int i; - ssize_t written; - ssize_t total_written = 0; - ulong min_addr, max_addr; - struct gasket_bar_desc bar_desc = - driver_desc->bar_descriptions[bar_index]; - - if (bar_desc.permissions == gasket_nomap) - return 0; - for (i = 0; - i < bar_desc.num_mappable_regions && total_written < page_size; - i++) { - min_addr = bar_desc.mappable_regions[i].start - - driver_desc->legacy_mmap_address_offset; - max_addr = bar_desc.mappable_regions[i].start - - driver_desc->legacy_mmap_address_offset + - bar_desc.mappable_regions[i].length_bytes; - written = scnprintf(buf, page_size - total_written, - "0x%08lx-0x%08lx ", min_addr, max_addr); - total_written += written; - buf += written; - } - return total_written; -} - -static ssize_t gasket_sysfs_data_show(struct device *device, - struct device_attribute *attr, char *buf) -{ - int i, ret = 0; - ssize_t current_written = 0; - const struct gasket_driver_desc *driver_desc; - struct gasket_dev 
*gasket_dev; - struct gasket_sysfs_attribute *gasket_attr; - const struct gasket_bar_desc *bar_desc; - enum gasket_sysfs_attribute_type sysfs_type; - - gasket_dev = gasket_sysfs_get_device_data(device); - if (!gasket_dev) { - dev_err(device, "no sysfs mapping found for device "); - return 0; - } - - gasket_attr = gasket_sysfs_get_attr(device, attr); - if (!gasket_attr) { - dev_err(device, "no sysfs attr found for device "); - gasket_sysfs_put_device_data(device, gasket_dev); - return 0; - } - - driver_desc = gasket_dev->internal_desc->driver_desc; - - sysfs_type = - (enum gasket_sysfs_attribute_type)gasket_attr->data.attr_type; - switch (sysfs_type) { - case attr_bar_offsets: - for (i = 0; i < pci_std_num_bars; i++) { - bar_desc = &driver_desc->bar_descriptions[i]; - if (bar_desc->size == 0) - continue; - current_written = - snprintf(buf, page_size - ret, "%d: 0x%lx ", i, - (ulong)bar_desc->base); - buf += current_written; - ret += current_written; - } - break; - case attr_bar_sizes: - for (i = 0; i < pci_std_num_bars; i++) { - bar_desc = &driver_desc->bar_descriptions[i]; - if (bar_desc->size == 0) - continue; - current_written = - snprintf(buf, page_size - ret, "%d: 0x%lx ", i, - (ulong)bar_desc->size); - buf += current_written; - ret += current_written; - } - break; - case attr_driver_version: - ret = snprintf(buf, page_size, "%s ", - gasket_dev->internal_desc->driver_desc->driver_version); - break; - case attr_framework_version: - ret = snprintf(buf, page_size, "%s ", - gasket_framework_version); - break; - case attr_device_type: - ret = snprintf(buf, page_size, "%s ", - gasket_dev->internal_desc->driver_desc->name); - break; - case attr_hardware_revision: - ret = snprintf(buf, page_size, "%d ", - gasket_dev->hardware_revision); - break; - case attr_pci_address: - ret = snprintf(buf, page_size, "%s ", gasket_dev->kobj_name); - break; - case attr_status: - ret = snprintf(buf, page_size, "%s ", - gasket_num_name_lookup(gasket_dev->status, - 
gasket_status_name_table)); - break; - case attr_is_device_owned: - ret = snprintf(buf, page_size, "%d ", - gasket_dev->dev_info.ownership.is_owned); - break; - case attr_device_owner: - ret = snprintf(buf, page_size, "%d ", - gasket_dev->dev_info.ownership.owner); - break; - case attr_write_open_count: - ret = snprintf(buf, page_size, "%d ", - gasket_dev->dev_info.ownership.write_open_count); - break; - case attr_reset_count: - ret = snprintf(buf, page_size, "%d ", gasket_dev->reset_count); - break; - case attr_user_mem_ranges: - for (i = 0; i < pci_std_num_bars; ++i) { - current_written = - gasket_write_mappable_regions(buf, driver_desc, - i); - buf += current_written; - ret += current_written; - } - break; - default: - dev_dbg(gasket_dev->dev, "unknown attribute: %s ", - attr->attr.name); - ret = 0; - break; - } - - gasket_sysfs_put_attr(device, gasket_attr); - gasket_sysfs_put_device_data(device, gasket_dev); - return ret; -} - -/* these attributes apply to all gasket driver instances. 
 */
-static const struct gasket_sysfs_attribute gasket_sysfs_generic_attrs[] = {
-	GASKET_SYSFS_RO(bar_offsets, gasket_sysfs_data_show, ATTR_BAR_OFFSETS),
-	GASKET_SYSFS_RO(bar_sizes, gasket_sysfs_data_show, ATTR_BAR_SIZES),
-	GASKET_SYSFS_RO(driver_version, gasket_sysfs_data_show,
-			ATTR_DRIVER_VERSION),
-	GASKET_SYSFS_RO(framework_version, gasket_sysfs_data_show,
-			ATTR_FRAMEWORK_VERSION),
-	GASKET_SYSFS_RO(device_type, gasket_sysfs_data_show, ATTR_DEVICE_TYPE),
-	GASKET_SYSFS_RO(revision, gasket_sysfs_data_show,
-			ATTR_HARDWARE_REVISION),
-	GASKET_SYSFS_RO(pci_address, gasket_sysfs_data_show, ATTR_PCI_ADDRESS),
-	GASKET_SYSFS_RO(status, gasket_sysfs_data_show, ATTR_STATUS),
-	GASKET_SYSFS_RO(is_device_owned, gasket_sysfs_data_show,
-			ATTR_IS_DEVICE_OWNED),
-	GASKET_SYSFS_RO(device_owner, gasket_sysfs_data_show,
-			ATTR_DEVICE_OWNER),
-	GASKET_SYSFS_RO(write_open_count, gasket_sysfs_data_show,
-			ATTR_WRITE_OPEN_COUNT),
-	GASKET_SYSFS_RO(reset_count, gasket_sysfs_data_show, ATTR_RESET_COUNT),
-	GASKET_SYSFS_RO(user_mem_ranges, gasket_sysfs_data_show,
-			ATTR_USER_MEM_RANGES),
-	GASKET_END_OF_ATTR_ARRAY
-};
-
-/* Add a char device and related info. */
-static int gasket_add_cdev(struct gasket_cdev_info *dev_info,
-			   const struct file_operations *file_ops,
-			   struct module *owner)
-{
-	int ret;
-
-	cdev_init(&dev_info->cdev, file_ops);
-	dev_info->cdev.owner = owner;
-	ret = cdev_add(&dev_info->cdev, dev_info->devt, 1);
-	if (ret) {
-		dev_err(dev_info->gasket_dev_ptr->dev,
-			"cannot add char device [ret=%d]\n", ret);
-		return ret;
-	}
-	dev_info->cdev_added = 1;
-
-	return 0;
-}
-
-/* Disable device operations. */
-void gasket_disable_device(struct gasket_dev *gasket_dev)
-{
-	const struct gasket_driver_desc *driver_desc =
-		gasket_dev->internal_desc->driver_desc;
-	int i;
-
-	/* Only delete the device if it has been successfully added.
*/ - if (gasket_dev->dev_info.cdev_added) - cdev_del(&gasket_dev->dev_info.cdev); - - gasket_dev->status = gasket_status_dead; - - gasket_interrupt_cleanup(gasket_dev); - - for (i = 0; i < driver_desc->num_page_tables; ++i) { - if (gasket_dev->page_table[i]) { - gasket_page_table_reset(gasket_dev->page_table[i]); - gasket_page_table_cleanup(gasket_dev->page_table[i]); - } - } -} -export_symbol(gasket_disable_device); - -/* - * registered driver descriptor lookup for pci devices. - * - * precondition: called with g_mutex held (to avoid a race on return). - * returns null if no matching device was found. - */ -static struct gasket_internal_desc * -lookup_pci_internal_desc(struct pci_dev *pci_dev) -{ - int i; - - __must_hold(&g_mutex); - for (i = 0; i < gasket_framework_desc_max; i++) { - if (g_descs[i].driver_desc && - g_descs[i].driver_desc->pci_id_table && - pci_match_id(g_descs[i].driver_desc->pci_id_table, pci_dev)) - return &g_descs[i]; - } - - return null; -} - -/* - * verifies that the user has permissions to perform the requested mapping and - * that the provided descriptor/range is of adequate size to hold the range to - * be mapped. - */ -static bool gasket_mmap_has_permissions(struct gasket_dev *gasket_dev, - struct vm_area_struct *vma, - int bar_permissions) -{ - int requested_permissions; - /* always allow sysadmin to access. */ - if (capable(cap_sys_admin)) - return true; - - /* never allow non-sysadmins to access to a dead device. */ - if (gasket_dev->status != gasket_status_alive) { - dev_dbg(gasket_dev->dev, "device is dead. "); - return false; - } - - /* make sure that no wrong flags are set. */ - requested_permissions = - (vma->vm_flags & vm_access_flags); - if (requested_permissions & ~(bar_permissions)) { - dev_dbg(gasket_dev->dev, - "attempting to map a region with requested permissions 0x%x, but region has permissions 0x%x. ", - requested_permissions, bar_permissions); - return false; - } - - /* do not allow a non-owner to write. 
*/ - if ((vma->vm_flags & vm_write) && - !gasket_owned_by_current_tgid(&gasket_dev->dev_info)) { - dev_dbg(gasket_dev->dev, - "attempting to mmap a region for write without owning device. "); - return false; - } - - return true; -} - -/* - * verifies that the input address is within the region allocated to coherent - * buffer. - */ -static bool -gasket_is_coherent_region(const struct gasket_driver_desc *driver_desc, - ulong address) -{ - struct gasket_coherent_buffer_desc coh_buff_desc = - driver_desc->coherent_buffer_description; - - if (coh_buff_desc.permissions != gasket_nomap) { - if ((address >= coh_buff_desc.base) && - (address < coh_buff_desc.base + coh_buff_desc.size)) { - return true; - } - } - return false; -} - -static int gasket_get_bar_index(const struct gasket_dev *gasket_dev, - ulong phys_addr) -{ - int i; - const struct gasket_driver_desc *driver_desc; - - driver_desc = gasket_dev->internal_desc->driver_desc; - for (i = 0; i < pci_std_num_bars; ++i) { - struct gasket_bar_desc bar_desc = - driver_desc->bar_descriptions[i]; - - if (bar_desc.permissions != gasket_nomap) { - if (phys_addr >= bar_desc.base && - phys_addr < (bar_desc.base + bar_desc.size)) { - return i; - } - } - } - /* if we haven't found the address by now, it is invalid. */ - return -einval; -} - -/* - * sets the actual bounds to map, given the device's mappable region. - * - * given the device's mappable region, along with the user-requested mapping - * start offset and length of the user region, determine how much of this - * mappable region can be mapped into the user's region (start/end offsets), - * and the physical offset (phys_offset) into the bar where the mapping should - * begin (either the vma's or region lower bound). - * - * in other words, this calculates the overlap between the vma - * (bar_offset, requested_length) and the given gasket_mappable_region. - * - * returns true if there's anything to map, and false otherwise. 
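The overlap rule the comment above describes — clamping the requested window `[bar_offset, bar_offset + requested_length)` against a device's mappable region — can be sketched as a small standalone program. Everything below is illustrative: the function name, parameter names, and the example addresses are not the driver's own.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Illustrative clamp of a requested window [req, req + len) against a
 * mappable range [start, start + range_len), mirroring the three cases
 * gasket_mm_get_mapping_addrs() distinguishes. Outputs: where the
 * mapping starts (map_start), how long it is (map_len), and the offset
 * into the user's VMA where it begins (virt_off).
 */
static bool clamp_mapping(unsigned long start, unsigned long range_len,
			  unsigned long req, unsigned long len,
			  unsigned long *map_start, unsigned long *map_len,
			  unsigned long *virt_off)
{
	unsigned long end = start + range_len;

	if (req + len < start || req >= end)
		return false;	/* request entirely below or above range */

	if (req <= start) {
		/* Request begins below the range: skip up to its start. */
		*map_start = start;
		*virt_off = start - req;
		*map_len = len - *virt_off < range_len ?
			   len - *virt_off : range_len;
	} else {
		/* Request begins inside the range: map from the request. */
		*map_start = req;
		*virt_off = 0;
		*map_len = len < end - req ? len : end - req;
	}
	return true;
}
```

As in the driver, a request straddling the region's start maps only the mappable tail (so no unmappable memory is exposed), while a request starting inside the region is truncated at the region's end.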
- */ -static bool -gasket_mm_get_mapping_addrs(const struct gasket_mappable_region *region, - ulong bar_offset, ulong requested_length, - struct gasket_mappable_region *mappable_region, - ulong *virt_offset) -{ - ulong range_start = region->start; - ulong range_length = region->length_bytes; - ulong range_end = range_start + range_length; - - *virt_offset = 0; - if (bar_offset + requested_length < range_start) { - /* - * if the requested region is completely below the range, - * there is nothing to map. - */ - return false; - } else if (bar_offset <= range_start) { - /* if the bar offset is below this range's start - * but the requested length continues into it: - * 1) only map starting from the beginning of this - * range's phys. offset, so we don't map unmappable - * memory. - * 2) the length of the virtual memory to not map is the - * delta between the bar offset and the - * mappable start (and since the mappable start is - * bigger, start - req.) - * 3) the map length is the minimum of the mappable - * requested length (requested_length - virt_offset) - * and the actual mappable length of the range. - */ - mappable_region->start = range_start; - *virt_offset = range_start - bar_offset; - mappable_region->length_bytes = - min(requested_length - *virt_offset, range_length); - return true; - } else if (bar_offset > range_start && - bar_offset < range_end) { - /* - * if the bar offset is within this range: - * 1) map starting from the bar offset. - * 2) because there is no forbidden memory between the - * bar offset and the range start, - * virt_offset is 0. - * 3) the map length is the minimum of the requested - * length and the remaining length in the buffer - * (range_end - bar_offset) - */ - mappable_region->start = bar_offset; - *virt_offset = 0; - mappable_region->length_bytes = - min(requested_length, range_end - bar_offset); - return true; - } - - /* - * if the requested [start] offset is above range_end, - * there's nothing to map. 
-	 */
-	return false;
-}
-
-/*
- * Calculates the offset where the VMA range begins in its containing BAR.
- * The offset is written into bar_offset on success.
- * Returns zero on success, anything else on error.
- */
-static int gasket_mm_vma_bar_offset(const struct gasket_dev *gasket_dev,
-				    const struct vm_area_struct *vma,
-				    ulong *bar_offset)
-{
-	ulong raw_offset;
-	int bar_index;
-	const struct gasket_driver_desc *driver_desc =
-		gasket_dev->internal_desc->driver_desc;
-
-	raw_offset = (vma->vm_pgoff << PAGE_SHIFT) +
-		driver_desc->legacy_mmap_address_offset;
-	bar_index = gasket_get_bar_index(gasket_dev, raw_offset);
-	if (bar_index < 0) {
-		dev_err(gasket_dev->dev,
-			"Unable to find matching bar for address 0x%lx\n",
-			raw_offset);
-		trace_gasket_mmap_exit(bar_index);
-		return bar_index;
-	}
-	*bar_offset =
-		raw_offset - driver_desc->bar_descriptions[bar_index].base;
-
-	return 0;
-}
-
-int gasket_mm_unmap_region(const struct gasket_dev *gasket_dev,
-			   struct vm_area_struct *vma,
-			   const struct gasket_mappable_region *map_region)
-{
-	ulong bar_offset;
-	ulong virt_offset;
-	struct gasket_mappable_region mappable_region;
-	int ret;
-
-	if (map_region->length_bytes == 0)
-		return 0;
-
-	ret = gasket_mm_vma_bar_offset(gasket_dev, vma, &bar_offset);
-	if (ret)
-		return ret;
-
-	if (!gasket_mm_get_mapping_addrs(map_region, bar_offset,
-					 vma->vm_end - vma->vm_start,
-					 &mappable_region, &virt_offset))
-		return 1;
-
-	/*
-	 * The length passed to zap_vma_ptes must be a multiple of
-	 * PAGE_SIZE! Trust me. I have the scars.
-	 *
-	 * Next multiple of y: ceil_div(x, y) * y
-	 */
-	zap_vma_ptes(vma, vma->vm_start + virt_offset,
-		     DIV_ROUND_UP(mappable_region.length_bytes, PAGE_SIZE) *
-		     PAGE_SIZE);
-	return 0;
-}
-EXPORT_SYMBOL(gasket_mm_unmap_region);
-
-/* Maps a virtual address + range to a physical offset of a BAR.
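The page rounding used for the unmap length above ("next multiple of y: ceil_div(x, y) * y") is easy to demonstrate standalone. `CEIL_DIV` and `PAGE_SZ` below are illustrative stand-ins for the kernel's `DIV_ROUND_UP()` and `PAGE_SIZE`:

```c
#include <assert.h>

/* Illustrative stand-in for the kernel's DIV_ROUND_UP() macro. */
#define CEIL_DIV(x, y)	(((x) + (y) - 1) / (y))

/* Illustrative stand-in for the kernel's PAGE_SIZE. */
#define PAGE_SZ		4096UL

/*
 * Round a byte length up to a whole number of pages, as the driver must
 * do before handing a length to zap_vma_ptes().
 */
static unsigned long round_to_pages(unsigned long bytes)
{
	return CEIL_DIV(bytes, PAGE_SZ) * PAGE_SZ;
}
```

Note that an exact multiple is left unchanged; only a trailing partial page pushes the result up to the next boundary.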
*/ -static enum do_map_region_status -do_map_region(const struct gasket_dev *gasket_dev, struct vm_area_struct *vma, - struct gasket_mappable_region *mappable_region) -{ - /* maximum size of a single call to io_remap_pfn_range. */ - /* i pulled this number out of thin air. */ - const ulong max_chunk_size = 64 * 1024 * 1024; - ulong chunk_size, mapped_bytes = 0; - - const struct gasket_driver_desc *driver_desc = - gasket_dev->internal_desc->driver_desc; - - ulong bar_offset, virt_offset; - struct gasket_mappable_region region_to_map; - ulong phys_offset, map_length; - ulong virt_base, phys_base; - int bar_index, ret; - - ret = gasket_mm_vma_bar_offset(gasket_dev, vma, &bar_offset); - if (ret) - return do_map_region_invalid; - - if (!gasket_mm_get_mapping_addrs(mappable_region, bar_offset, - vma->vm_end - vma->vm_start, - ®ion_to_map, &virt_offset)) - return do_map_region_invalid; - phys_offset = region_to_map.start; - map_length = region_to_map.length_bytes; - - virt_base = vma->vm_start + virt_offset; - bar_index = - gasket_get_bar_index(gasket_dev, - (vma->vm_pgoff << page_shift) + - driver_desc->legacy_mmap_address_offset); - - if (bar_index < 0) - return do_map_region_invalid; - - phys_base = gasket_dev->bar_data[bar_index].phys_base + phys_offset; - while (mapped_bytes < map_length) { - /* - * io_remap_pfn_range can take a while, so we chunk its - * calls and call cond_resched between each. - */ - chunk_size = min(max_chunk_size, map_length - mapped_bytes); - - cond_resched(); - ret = io_remap_pfn_range(vma, virt_base + mapped_bytes, - (phys_base + mapped_bytes) >> - page_shift, chunk_size, - vma->vm_page_prot); - if (ret) { - dev_err(gasket_dev->dev, - "error remapping pfn range. "); - goto fail; - } - mapped_bytes += chunk_size; - } - - return do_map_region_success; - -fail: - /* unmap the partial chunk we mapped. 
*/ - mappable_region->length_bytes = mapped_bytes; - if (gasket_mm_unmap_region(gasket_dev, vma, mappable_region)) - dev_err(gasket_dev->dev, - "error unmapping partial region 0x%lx (0x%lx bytes) ", - (ulong)virt_offset, - (ulong)mapped_bytes); - - return do_map_region_failure; -} - -/* map a region of coherent memory. */ -static int gasket_mmap_coherent(struct gasket_dev *gasket_dev, - struct vm_area_struct *vma) -{ - const struct gasket_driver_desc *driver_desc = - gasket_dev->internal_desc->driver_desc; - const ulong requested_length = vma->vm_end - vma->vm_start; - int ret; - ulong permissions; - - if (requested_length == 0 || requested_length > - gasket_dev->coherent_buffer.length_bytes) { - trace_gasket_mmap_exit(-einval); - return -einval; - } - - permissions = driver_desc->coherent_buffer_description.permissions; - if (!gasket_mmap_has_permissions(gasket_dev, vma, permissions)) { - dev_err(gasket_dev->dev, "permission checking failed. "); - trace_gasket_mmap_exit(-eperm); - return -eperm; - } - - vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); - - ret = remap_pfn_range(vma, vma->vm_start, - (gasket_dev->coherent_buffer.phys_base) >> - page_shift, requested_length, vma->vm_page_prot); - if (ret) { - dev_err(gasket_dev->dev, "error remapping pfn range err=%d. ", - ret); - trace_gasket_mmap_exit(ret); - return ret; - } - - /* record the user virtual to dma_address mapping that was - * created by the kernel. - */ - gasket_set_user_virt(gasket_dev, requested_length, - gasket_dev->coherent_buffer.phys_base, - vma->vm_start); - return 0; -} - -/* map a device's bars into user space. 
*/ -static int gasket_mmap(struct file *filp, struct vm_area_struct *vma) -{ - int i, ret; - int bar_index; - int has_mapped_anything = 0; - ulong permissions; - ulong raw_offset, vma_size; - bool is_coherent_region; - const struct gasket_driver_desc *driver_desc; - struct gasket_dev *gasket_dev = (struct gasket_dev *)filp->private_data; - const struct gasket_bar_desc *bar_desc; - struct gasket_mappable_region *map_regions = null; - int num_map_regions = 0; - enum do_map_region_status map_status; - - driver_desc = gasket_dev->internal_desc->driver_desc; - - if (vma->vm_start & ~page_mask) { - dev_err(gasket_dev->dev, - "base address not page-aligned: 0x%lx ", - vma->vm_start); - trace_gasket_mmap_exit(-einval); - return -einval; - } - - /* calculate the offset of this range into physical mem. */ - raw_offset = (vma->vm_pgoff << page_shift) + - driver_desc->legacy_mmap_address_offset; - vma_size = vma->vm_end - vma->vm_start; - trace_gasket_mmap_entry(gasket_dev->dev_info.name, raw_offset, - vma_size); - - /* - * check if the raw offset is within a bar region. if not, check if it - * is a coherent region. - */ - bar_index = gasket_get_bar_index(gasket_dev, raw_offset); - is_coherent_region = gasket_is_coherent_region(driver_desc, raw_offset); - if (bar_index < 0 && !is_coherent_region) { - dev_err(gasket_dev->dev, - "unable to find matching bar for address 0x%lx ", - raw_offset); - trace_gasket_mmap_exit(bar_index); - return bar_index; - } - if (bar_index > 0 && is_coherent_region) { - dev_err(gasket_dev->dev, - "double matching bar and coherent buffers for address 0x%lx ", - raw_offset); - trace_gasket_mmap_exit(bar_index); - return -einval; - } - - vma->vm_private_data = gasket_dev; - - if (is_coherent_region) - return gasket_mmap_coherent(gasket_dev, vma); - - /* everything in the rest of this function is for normal bar mapping. */ - - /* - * subtract the base of the bar from the raw offset to get the - * memory location within the bar to map. 
- */ - bar_desc = &driver_desc->bar_descriptions[bar_index]; - permissions = bar_desc->permissions; - if (!gasket_mmap_has_permissions(gasket_dev, vma, permissions)) { - dev_err(gasket_dev->dev, "permission checking failed. "); - trace_gasket_mmap_exit(-eperm); - return -eperm; - } - - if (driver_desc->get_mappable_regions_cb) { - ret = driver_desc->get_mappable_regions_cb(gasket_dev, - bar_index, - &map_regions, - &num_map_regions); - if (ret) - return ret; - } else { - if (!gasket_mmap_has_permissions(gasket_dev, vma, - bar_desc->permissions)) { - dev_err(gasket_dev->dev, - "permission checking failed. "); - trace_gasket_mmap_exit(-eperm); - return -eperm; - } - num_map_regions = bar_desc->num_mappable_regions; - map_regions = kcalloc(num_map_regions, - sizeof(*bar_desc->mappable_regions), - gfp_kernel); - if (map_regions) { - memcpy(map_regions, bar_desc->mappable_regions, - num_map_regions * - sizeof(*bar_desc->mappable_regions)); - } - } - - if (!map_regions || num_map_regions == 0) { - dev_err(gasket_dev->dev, "no mappable regions returned! "); - return -einval; - } - - /* marks the vma's pages as uncacheable. */ - vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); - for (i = 0; i < num_map_regions; i++) { - map_status = do_map_region(gasket_dev, vma, &map_regions[i]); - /* try the next region if this one was not mappable. */ - if (map_status == do_map_region_invalid) - continue; - if (map_status == do_map_region_failure) { - ret = -enomem; - goto fail; - } - - has_mapped_anything = 1; - } - - kfree(map_regions); - - /* if we could not map any memory, the request was invalid. */ - if (!has_mapped_anything) { - dev_err(gasket_dev->dev, - "map request did not contain a valid region. "); - trace_gasket_mmap_exit(-einval); - return -einval; - } - - trace_gasket_mmap_exit(0); - return 0; - -fail: - /* need to unmap any mapped ranges. 
*/ - num_map_regions = i; - for (i = 0; i < num_map_regions; i++) - if (gasket_mm_unmap_region(gasket_dev, vma, - &bar_desc->mappable_regions[i])) - dev_err(gasket_dev->dev, "error unmapping range %d. ", - i); - kfree(map_regions); - - return ret; -} - -/* - * open the char device file. - * - * if the open is for writing, and the device is not owned, this process becomes - * the owner. if the open is for writing and the device is already owned by - * some other process, it is an error. if this process is the owner, increment - * the open count. - * - * returns 0 if successful, a negative error number otherwise. - */ -static int gasket_open(struct inode *inode, struct file *filp) -{ - int ret; - struct gasket_dev *gasket_dev; - const struct gasket_driver_desc *driver_desc; - struct gasket_ownership *ownership; - char task_name[task_comm_len]; - struct gasket_cdev_info *dev_info = - container_of(inode->i_cdev, struct gasket_cdev_info, cdev); - struct pid_namespace *pid_ns = task_active_pid_ns(current); - bool is_root = ns_capable(pid_ns->user_ns, cap_sys_admin); - - gasket_dev = dev_info->gasket_dev_ptr; - driver_desc = gasket_dev->internal_desc->driver_desc; - ownership = &dev_info->ownership; - get_task_comm(task_name, current); - filp->private_data = gasket_dev; - inode->i_size = 0; - - dev_dbg(gasket_dev->dev, - "attempting to open with tgid %u (%s) (f_mode: 0%03o, fmode_write: %d is_root: %u) ", - current->tgid, task_name, filp->f_mode, - (filp->f_mode & fmode_write), is_root); - - /* always allow non-writing accesses. */ - if (!(filp->f_mode & fmode_write)) { - dev_dbg(gasket_dev->dev, "allowing read-only opening. "); - return 0; - } - - mutex_lock(&gasket_dev->mutex); - - dev_dbg(gasket_dev->dev, - "current owner open count (owning tgid %u): %d. 
", - ownership->owner, ownership->write_open_count); - - /* opening a node owned by another tgid is an error (unless root) */ - if (ownership->is_owned && ownership->owner != current->tgid && - !is_root) { - dev_err(gasket_dev->dev, - "process %u is opening a node held by %u. ", - current->tgid, ownership->owner); - mutex_unlock(&gasket_dev->mutex); - return -eperm; - } - - /* if the node is not owned, assign it to the current tgid. */ - if (!ownership->is_owned) { - ret = gasket_check_and_invoke_callback_nolock(gasket_dev, - driver_desc->device_open_cb); - if (ret) { - dev_err(gasket_dev->dev, - "error in device open cb: %d ", ret); - mutex_unlock(&gasket_dev->mutex); - return ret; - } - ownership->is_owned = 1; - ownership->owner = current->tgid; - dev_dbg(gasket_dev->dev, "device owner is now tgid %u ", - ownership->owner); - } - - ownership->write_open_count++; - - dev_dbg(gasket_dev->dev, "new open count (owning tgid %u): %d ", - ownership->owner, ownership->write_open_count); - - mutex_unlock(&gasket_dev->mutex); - return 0; -} - -/* - * called on a close of the device file. if this process is the owner, - * decrement the open count. on last close by the owner, free up buffers and - * eventfd contexts, and release ownership. - * - * returns 0 if successful, a negative error number otherwise. 
- */ -static int gasket_release(struct inode *inode, struct file *file) -{ - int i; - struct gasket_dev *gasket_dev; - struct gasket_ownership *ownership; - const struct gasket_driver_desc *driver_desc; - char task_name[task_comm_len]; - struct gasket_cdev_info *dev_info = - container_of(inode->i_cdev, struct gasket_cdev_info, cdev); - struct pid_namespace *pid_ns = task_active_pid_ns(current); - bool is_root = ns_capable(pid_ns->user_ns, cap_sys_admin); - - gasket_dev = dev_info->gasket_dev_ptr; - driver_desc = gasket_dev->internal_desc->driver_desc; - ownership = &dev_info->ownership; - get_task_comm(task_name, current); - mutex_lock(&gasket_dev->mutex); - - dev_dbg(gasket_dev->dev, - "releasing device node. call origin: tgid %u (%s) (f_mode: 0%03o, fmode_write: %d, is_root: %u) ", - current->tgid, task_name, file->f_mode, - (file->f_mode & fmode_write), is_root); - dev_dbg(gasket_dev->dev, "current open count (owning tgid %u): %d ", - ownership->owner, ownership->write_open_count); - - if (file->f_mode & fmode_write) { - ownership->write_open_count--; - if (ownership->write_open_count == 0) { - dev_dbg(gasket_dev->dev, "device is now free "); - ownership->is_owned = 0; - ownership->owner = 0; - - /* forces chip reset before we unmap the page tables. */ - driver_desc->device_reset_cb(gasket_dev); - - for (i = 0; i < driver_desc->num_page_tables; ++i) { - gasket_page_table_unmap_all(gasket_dev->page_table[i]); - gasket_page_table_garbage_collect(gasket_dev->page_table[i]); - gasket_free_coherent_memory_all(gasket_dev, i); - } - - /* closes device, enters power save. */ - gasket_check_and_invoke_callback_nolock(gasket_dev, - driver_desc->device_close_cb); - } - } - - dev_dbg(gasket_dev->dev, "new open count (owning tgid %u): %d ", - ownership->owner, ownership->write_open_count); - mutex_unlock(&gasket_dev->mutex); - return 0; -} - -/* - * gasket ioctl dispatch function. - * - * check if the ioctl is a generic ioctl. 
if not, pass the ioctl to the - * ioctl_handler_cb registered in the driver description. - * if the ioctl is a generic ioctl, pass it to gasket_ioctl_handler. - */ -static long gasket_ioctl(struct file *filp, uint cmd, ulong arg) -{ - struct gasket_dev *gasket_dev; - const struct gasket_driver_desc *driver_desc; - void __user *argp = (void __user *)arg; - char path[256]; - - gasket_dev = (struct gasket_dev *)filp->private_data; - driver_desc = gasket_dev->internal_desc->driver_desc; - if (!driver_desc) { - dev_dbg(gasket_dev->dev, - "unable to find device descriptor for file %s ", - d_path(&filp->f_path, path, 256)); - return -enodev; - } - - if (!gasket_is_supported_ioctl(cmd)) { - /* - * the ioctl handler is not a standard gasket callback, since - * it requires different arguments. this means we can't use - * check_and_invoke_callback. - */ - if (driver_desc->ioctl_handler_cb) - return driver_desc->ioctl_handler_cb(filp, cmd, argp); - - dev_dbg(gasket_dev->dev, "received unknown ioctl 0x%x ", cmd); - return -einval; - } - - return gasket_handle_ioctl(filp, cmd, argp); -} - -/* file operations for all gasket devices. */ -static const struct file_operations gasket_file_ops = { - .owner = this_module, - .llseek = no_llseek, - .mmap = gasket_mmap, - .open = gasket_open, - .release = gasket_release, - .unlocked_ioctl = gasket_ioctl, -}; - -/* perform final init and marks the device as active. */ -int gasket_enable_device(struct gasket_dev *gasket_dev) -{ - int tbl_idx; - int ret; - const struct gasket_driver_desc *driver_desc = - gasket_dev->internal_desc->driver_desc; - - ret = gasket_interrupt_init(gasket_dev); - if (ret) { - dev_err(gasket_dev->dev, - "critical failure to allocate interrupts: %d ", ret); - gasket_interrupt_cleanup(gasket_dev); - return ret; - } - - for (tbl_idx = 0; tbl_idx < driver_desc->num_page_tables; tbl_idx++) { - dev_dbg(gasket_dev->dev, "initializing page table %d. 
", - tbl_idx); - ret = gasket_page_table_init(&gasket_dev->page_table[tbl_idx], - &gasket_dev->bar_data[driver_desc->page_table_bar_index], - &driver_desc->page_table_configs[tbl_idx], - gasket_dev->dev, - gasket_dev->pci_dev); - if (ret) { - dev_err(gasket_dev->dev, - "couldn't init page table %d: %d ", - tbl_idx, ret); - return ret; - } - /* - * make sure that the page table is clear and set to simple - * addresses. - */ - gasket_page_table_reset(gasket_dev->page_table[tbl_idx]); - } - - /* - * hardware_revision_cb returns a positive integer (the rev) if - * successful.) - */ - ret = check_and_invoke_callback(gasket_dev, - driver_desc->hardware_revision_cb); - if (ret < 0) { - dev_err(gasket_dev->dev, - "error getting hardware revision: %d ", ret); - return ret; - } - gasket_dev->hardware_revision = ret; - - /* device_status_cb returns a device status, not an error code. */ - gasket_dev->status = gasket_get_hw_status(gasket_dev); - if (gasket_dev->status == gasket_status_dead) - dev_err(gasket_dev->dev, "device reported as unhealthy. 
"); - - ret = gasket_add_cdev(&gasket_dev->dev_info, &gasket_file_ops, - driver_desc->module); - if (ret) - return ret; - - return 0; -} -export_symbol(gasket_enable_device); - -static int __gasket_add_device(struct device *parent_dev, - struct gasket_internal_desc *internal_desc, - struct gasket_dev **gasket_devp) -{ - int ret; - struct gasket_dev *gasket_dev; - const struct gasket_driver_desc *driver_desc = - internal_desc->driver_desc; - - ret = gasket_alloc_dev(internal_desc, parent_dev, &gasket_dev); - if (ret) - return ret; - if (is_err(gasket_dev->dev_info.device)) { - dev_err(parent_dev, "cannot create %s device %s [ret = %ld] ", - driver_desc->name, gasket_dev->dev_info.name, - ptr_err(gasket_dev->dev_info.device)); - ret = -enodev; - goto free_gasket_dev; - } - - ret = gasket_sysfs_create_mapping(gasket_dev->dev_info.device, - gasket_dev); - if (ret) - goto remove_device; - - ret = gasket_sysfs_create_entries(gasket_dev->dev_info.device, - gasket_sysfs_generic_attrs); - if (ret) - goto remove_sysfs_mapping; - - *gasket_devp = gasket_dev; - return 0; - -remove_sysfs_mapping: - gasket_sysfs_remove_mapping(gasket_dev->dev_info.device); -remove_device: - device_destroy(internal_desc->class, gasket_dev->dev_info.devt); -free_gasket_dev: - gasket_free_dev(gasket_dev); - return ret; -} - -static void __gasket_remove_device(struct gasket_internal_desc *internal_desc, - struct gasket_dev *gasket_dev) -{ - gasket_sysfs_remove_mapping(gasket_dev->dev_info.device); - device_destroy(internal_desc->class, gasket_dev->dev_info.devt); - gasket_free_dev(gasket_dev); -} - -/* - * add pci gasket device. - * - * called by gasket device probe function. - * allocates device metadata and maps device memory. the device driver must - * call gasket_enable_device after driver init is complete to place the device - * in active use. 
- */ -int gasket_pci_add_device(struct pci_dev *pci_dev, - struct gasket_dev **gasket_devp) -{ - int ret; - struct gasket_internal_desc *internal_desc; - struct gasket_dev *gasket_dev; - struct device *parent; - - dev_dbg(&pci_dev->dev, "add pci gasket device "); - - mutex_lock(&g_mutex); - internal_desc = lookup_pci_internal_desc(pci_dev); - mutex_unlock(&g_mutex); - if (!internal_desc) { - dev_err(&pci_dev->dev, - "pci add device called for unknown driver type "); - return -enodev; - } - - parent = &pci_dev->dev; - ret = __gasket_add_device(parent, internal_desc, &gasket_dev); - if (ret) - return ret; - - gasket_dev->pci_dev = pci_dev; - ret = gasket_setup_pci(pci_dev, gasket_dev); - if (ret) - goto cleanup_pci; - - /* - * once we've created the mapping structures successfully, attempt to - * create a symlink to the pci directory of this object. - */ - ret = sysfs_create_link(&gasket_dev->dev_info.device->kobj, - &pci_dev->dev.kobj, dev_name(&pci_dev->dev)); - if (ret) { - dev_err(gasket_dev->dev, - "cannot create sysfs pci link: %d ", ret); - goto cleanup_pci; - } - - *gasket_devp = gasket_dev; - return 0; - -cleanup_pci: - gasket_cleanup_pci(gasket_dev); - __gasket_remove_device(internal_desc, gasket_dev); - return ret; -} -export_symbol(gasket_pci_add_device); - -/* remove a pci gasket device. */ -void gasket_pci_remove_device(struct pci_dev *pci_dev) -{ - int i; - struct gasket_internal_desc *internal_desc; - struct gasket_dev *gasket_dev = null; - /* find the device desc. 
 */
-	mutex_lock(&g_mutex);
-	internal_desc = lookup_pci_internal_desc(pci_dev);
-	if (!internal_desc) {
-		mutex_unlock(&g_mutex);
-		return;
-	}
-	mutex_unlock(&g_mutex);
-
-	/* Now find the specific device */
-	mutex_lock(&internal_desc->mutex);
-	for (i = 0; i < GASKET_DEV_MAX; i++) {
-		if (internal_desc->devs[i] &&
-		    internal_desc->devs[i]->pci_dev == pci_dev) {
-			gasket_dev = internal_desc->devs[i];
-			break;
-		}
-	}
-	mutex_unlock(&internal_desc->mutex);
-
-	if (!gasket_dev)
-		return;
-
-	dev_dbg(gasket_dev->dev, "remove %s PCI gasket device\n",
-		internal_desc->driver_desc->name);
-
-	gasket_cleanup_pci(gasket_dev);
-	__gasket_remove_device(internal_desc, gasket_dev);
-}
-EXPORT_SYMBOL(gasket_pci_remove_device);
-
-/**
- * Lookup a name by number in a num_name table.
- * @num: Number to lookup.
- * @table: Array of num_name structures, the table for the lookup.
- *
- * Description: Searches for num in the table.  If found, the
- *	corresponding name is returned; otherwise NULL
- *	is returned.
- *
- * The table must have a NULL name pointer at the end.
- */
-const char *gasket_num_name_lookup(uint num,
-				   const struct gasket_num_name *table)
-{
-	uint i = 0;
-
-	while (table[i].snn_name) {
-		if (num == table[i].snn_num)
-			break;
-		++i;
-	}
-
-	return table[i].snn_name;
-}
-EXPORT_SYMBOL(gasket_num_name_lookup);
-
-int gasket_reset(struct gasket_dev *gasket_dev)
-{
-	int ret;
-
-	mutex_lock(&gasket_dev->mutex);
-	ret = gasket_reset_nolock(gasket_dev);
-	mutex_unlock(&gasket_dev->mutex);
-	return ret;
-}
-EXPORT_SYMBOL(gasket_reset);
-
-int gasket_reset_nolock(struct gasket_dev *gasket_dev)
-{
-	int ret;
-	int i;
-	const struct gasket_driver_desc *driver_desc;
-
-	driver_desc = gasket_dev->internal_desc->driver_desc;
-	if (!driver_desc->device_reset_cb)
-		return 0;
-
-	ret = driver_desc->device_reset_cb(gasket_dev);
-	if (ret) {
-		dev_dbg(gasket_dev->dev, "Device reset cb returned %d.
", - ret); - return ret; - } - - /* reinitialize the page tables and interrupt framework. */ - for (i = 0; i < driver_desc->num_page_tables; ++i) - gasket_page_table_reset(gasket_dev->page_table[i]); - - ret = gasket_interrupt_reinit(gasket_dev); - if (ret) { - dev_dbg(gasket_dev->dev, "unable to reinit interrupts: %d. ", - ret); - return ret; - } - - /* get current device health. */ - gasket_dev->status = gasket_get_hw_status(gasket_dev); - if (gasket_dev->status == gasket_status_dead) { - dev_dbg(gasket_dev->dev, "device reported as dead. "); - return -einval; - } - - return 0; -} -export_symbol(gasket_reset_nolock); - -gasket_ioctl_permissions_cb_t -gasket_get_ioctl_permissions_cb(struct gasket_dev *gasket_dev) -{ - return gasket_dev->internal_desc->driver_desc->ioctl_permissions_cb; -} -export_symbol(gasket_get_ioctl_permissions_cb); - -/* get the driver structure for a given gasket_dev. - * @dev: pointer to gasket_dev, implementing the requested driver. - */ -const struct gasket_driver_desc *gasket_get_driver_desc(struct gasket_dev *dev) -{ - return dev->internal_desc->driver_desc; -} - -/* get the device structure for a given gasket_dev. - * @dev: pointer to gasket_dev, implementing the requested driver. - */ -struct device *gasket_get_device(struct gasket_dev *dev) -{ - return dev->dev; -} - -/** - * asynchronously waits on device. - * @gasket_dev: device struct. - * @bar: bar - * @offset: register offset - * @mask: register mask - * @val: expected value - * @max_retries: number of sleep periods - * @delay_ms: timeout in milliseconds - * - * description: busy waits for a specific combination of bits to be set on a - * gasket register. 
- **/
-int gasket_wait_with_reschedule(struct gasket_dev *gasket_dev, int bar,
-				u64 offset, u64 mask, u64 val,
-				uint max_retries, u64 delay_ms)
-{
-	uint retries = 0;
-	u64 tmp;
-
-	while (retries < max_retries) {
-		tmp = gasket_dev_read_64(gasket_dev, bar, offset);
-		if ((tmp & mask) == val)
-			return 0;
-		msleep(delay_ms);
-		retries++;
-	}
-	dev_dbg(gasket_dev->dev, "%s timeout: reg %llx timeout (%llu ms)\n",
-		__func__, offset, max_retries * delay_ms);
-	return -ETIMEDOUT;
-}
-EXPORT_SYMBOL(gasket_wait_with_reschedule);
-
-/* See gasket_core.h for description. */
-int gasket_register_device(const struct gasket_driver_desc *driver_desc)
-{
-	int i, ret;
-	int desc_idx = -1;
-	struct gasket_internal_desc *internal;
-
-	pr_debug("Loading %s driver version %s\n", driver_desc->name,
-		 driver_desc->driver_version);
-	/* Check for duplicates and find a free slot. */
-	mutex_lock(&g_mutex);
-
-	for (i = 0; i < GASKET_FRAMEWORK_DESC_MAX; i++) {
-		if (g_descs[i].driver_desc == driver_desc) {
-			pr_err("%s driver already loaded/registered\n",
-			       driver_desc->name);
-			mutex_unlock(&g_mutex);
-			return -EBUSY;
-		}
-	}
-
-	/* This and the above loop could be combined, but this reads easier.
*/ - for (i = 0; i < gasket_framework_desc_max; i++) { - if (!g_descs[i].driver_desc) { - g_descs[i].driver_desc = driver_desc; - desc_idx = i; - break; - } - } - mutex_unlock(&g_mutex); - - if (desc_idx == -1) { - pr_err("too many drivers loaded, max %d ", - gasket_framework_desc_max); - return -ebusy; - } - - internal = &g_descs[desc_idx]; - mutex_init(&internal->mutex); - memset(internal->devs, 0, sizeof(struct gasket_dev *) * gasket_dev_max); - internal->class = - class_create(driver_desc->module, driver_desc->name); - - if (is_err(internal->class)) { - pr_err("cannot register %s class [ret=%ld] ", - driver_desc->name, ptr_err(internal->class)); - ret = ptr_err(internal->class); - goto unregister_gasket_driver; - } - - ret = register_chrdev_region(mkdev(driver_desc->major, - driver_desc->minor), gasket_dev_max, - driver_desc->name); - if (ret) { - pr_err("cannot register %s char driver [ret=%d] ", - driver_desc->name, ret); - goto destroy_class; - } - - return 0; - -destroy_class: - class_destroy(internal->class); - -unregister_gasket_driver: - mutex_lock(&g_mutex); - g_descs[desc_idx].driver_desc = null; - mutex_unlock(&g_mutex); - return ret; -} -export_symbol(gasket_register_device); - -/* see gasket_core.h for description. */ -void gasket_unregister_device(const struct gasket_driver_desc *driver_desc) -{ - int i, desc_idx; - struct gasket_internal_desc *internal_desc = null; - - mutex_lock(&g_mutex); - for (i = 0; i < gasket_framework_desc_max; i++) { - if (g_descs[i].driver_desc == driver_desc) { - internal_desc = &g_descs[i]; - desc_idx = i; - break; - } - } - - if (!internal_desc) { - mutex_unlock(&g_mutex); - pr_err("request to unregister unknown desc: %s, %d:%d ", - driver_desc->name, driver_desc->major, - driver_desc->minor); - return; - } - - unregister_chrdev_region(mkdev(driver_desc->major, driver_desc->minor), - gasket_dev_max); - - class_destroy(internal_desc->class); - - /* finally, effectively "remove" the driver. 
*/ - g_descs[desc_idx].driver_desc = null; - mutex_unlock(&g_mutex); - - pr_debug("removed %s driver ", driver_desc->name); -} -export_symbol(gasket_unregister_device); - -static int __init gasket_init(void) -{ - int i; - - mutex_lock(&g_mutex); - for (i = 0; i < gasket_framework_desc_max; i++) { - g_descs[i].driver_desc = null; - mutex_init(&g_descs[i].mutex); - } - - gasket_sysfs_init(); - - mutex_unlock(&g_mutex); - return 0; -} - -module_description("google gasket driver framework"); -module_version(gasket_framework_version); -module_license("gpl v2"); -module_author("rob springer <rspringer@google.com>"); -module_init(gasket_init); diff --git a/drivers/staging/gasket/gasket_core.h b/drivers/staging/gasket/gasket_core.h --- a/drivers/staging/gasket/gasket_core.h +++ /dev/null -/* spdx-license-identifier: gpl-2.0 */ -/* - * gasket generic driver. defines the set of data types and functions necessary - * to define a driver using the gasket generic driver framework. - * - * copyright (c) 2018 google, inc. - */ -#ifndef __gasket_core_h__ -#define __gasket_core_h__ - -#include <linux/cdev.h> -#include <linux/compiler.h> -#include <linux/device.h> -#include <linux/init.h> -#include <linux/module.h> -#include <linux/pci.h> -#include <linux/sched.h> -#include <linux/slab.h> - -#include "gasket_constants.h" - -/** - * struct gasket_num_name - map numbers to names. - * @ein_num: number. - * @ein_name: name associated with the number, a char pointer. - * - * this structure maps numbers to names. it is used to provide printable enum - * names, e.g {0, "dead"} or {1, "alive"}. - */ -struct gasket_num_name { - uint snn_num; - const char *snn_name; -}; - -/* - * register location for packed interrupts. - * each value indicates the location of an interrupt field (in units of - * gasket_driver_desc->interrupt_pack_width) within the containing register. 
- * in other words, this indicates the shift to use when creating a mask to - * extract/set bits within a register for a given interrupt. - */ -enum gasket_interrupt_packing { - pack_0 = 0, - pack_1 = 1, - pack_2 = 2, - pack_3 = 3, - unpacked = 4, -}; - -/* type of the interrupt supported by the device. */ -enum gasket_interrupt_type { - pci_msix = 0, -}; - -/* - * used to describe a gasket interrupt. contains an interrupt index, a register, - * and packing data for that interrupt. the register and packing data - * fields are relevant only for pci_msix interrupt type and can be - * set to 0 for everything else. - */ -struct gasket_interrupt_desc { - /* device-wide interrupt index/number. */ - int index; - /* the register offset controlling this interrupt. */ - u64 reg; - /* the location of this interrupt inside register reg, if packed. */ - int packing; -}; - -/* - * this enum is used to identify memory regions being part of the physical - * memory that belongs to a device. - */ -enum mappable_area_type { - pci_bar = 0, /* default */ - bus_region, /* for sysbus devices, i.e. axi etc... */ - coherent_memory -}; - -/* - * metadata for each bar mapping. - * this struct is used so as to track pci memory, i/o space, axi and coherent - * memory area... i.e. memory objects which can be referenced in the device's - * mmap function. - */ -struct gasket_bar_data { - /* virtual base address. */ - u8 __iomem *virt_base; - - /* physical base address. */ - ulong phys_base; - - /* length of the mapping. */ - ulong length_bytes; - - /* type of mappable area */ - enum mappable_area_type type; -}; - -/* maintains device open ownership data. */ -struct gasket_ownership { - /* 1 if the device is owned, 0 otherwise. */ - int is_owned; - - /* tgid of the owner. */ - pid_t owner; - - /* count of current device opens in write mode. */ - int write_open_count; -}; - -/* page table modes of operation. 
*/ -enum gasket_page_table_mode { - /* the page table is partitionable as normal, all simple by default. */ - gasket_page_table_mode_normal, - - /* all entries are always simple. */ - gasket_page_table_mode_simple, - - /* all entries are always extended. no extended bit is used. */ - gasket_page_table_mode_extended, -}; - -/* page table configuration. one per table. */ -struct gasket_page_table_config { - /* the identifier/index of this page table. */ - int id; - - /* the operation mode of this page table. */ - enum gasket_page_table_mode mode; - - /* total (first-level) entries in this page table. */ - ulong total_entries; - - /* base register for the page table. */ - int base_reg; - - /* - * register containing the extended page table. this value is unused in - * gasket_page_table_mode_simple and gasket_page_table_mode_extended - * modes. - */ - int extended_reg; - - /* the bit index indicating whether a pt entry is extended. */ - int extended_bit; -}; - -/* maintains information about a device node. */ -struct gasket_cdev_info { - /* the internal name of this device. */ - char name[gasket_name_max]; - - /* device number. */ - dev_t devt; - - /* kernel-internal device structure. */ - struct device *device; - - /* character device for real. */ - struct cdev cdev; - - /* flag indicating if cdev_add has been called for the devices. */ - int cdev_added; - - /* pointer to the overall gasket_dev struct for this device. */ - struct gasket_dev *gasket_dev_ptr; - - /* ownership data for the device in question. */ - struct gasket_ownership ownership; -}; - -/* describes the offset and length of mmapable device bar regions. */ -struct gasket_mappable_region { - u64 start; - u64 length_bytes; -}; - -/* describe the offset, size, and permissions for a device bar. */ -struct gasket_bar_desc { - /* - * the size of each pci bar range, in bytes. if a value is 0, that bar - * will not be mapped into kernel space at all. 
- * for devices with 64 bit bars, only elements 0, 2, and 4 should be - * populated, and 1, 3, and 5 should be set to 0. - * for example, for a device mapping 1m in each of the first two 64-bit - * bars, this field would be set as { 0x100000, 0, 0x100000, 0, 0, 0 } - * (one number per bar_desc struct.) - */ - u64 size; - /* the permissions for this bar. (should be vm_write/vm_read/vm_exec, - * and can be or'd.) if set to gasket_nomap, the bar will - * not be used for mmapping. - */ - ulong permissions; - /* the memory address corresponding to the base of this bar, if used. */ - u64 base; - /* the number of mappable regions in this bar. */ - int num_mappable_regions; - - /* the mappable subregions of this bar. */ - const struct gasket_mappable_region *mappable_regions; - - /* type of mappable area */ - enum mappable_area_type type; -}; - -/* describes the offset, size, and permissions for a coherent buffer. */ -struct gasket_coherent_buffer_desc { - /* the size of the coherent buffer. */ - u64 size; - - /* the permissions for this bar. (should be vm_write/vm_read/vm_exec, - * and can be or'd.) if set to gasket_nomap, the bar will - * not be used for mmaping. - */ - ulong permissions; - - /* device side address. */ - u64 base; -}; - -/* coherent buffer structure. */ -struct gasket_coherent_buffer { - /* virtual base address. */ - u8 *virt_base; - - /* physical base address. */ - ulong phys_base; - - /* length of the mapping. */ - ulong length_bytes; -}; - -/* description of gasket-specific permissions in the mmap field. */ -enum gasket_mapping_options { gasket_nomap = 0 }; - -/* this struct represents an undefined bar that should never be mapped. */ -#define gasket_unused_bar \ - { \ - 0, gasket_nomap, 0, 0, null, 0 \ - } - -/* internal data for a gasket device. see gasket_core.c for more information. */ -struct gasket_internal_desc; - -#define max_num_coherent_pages 16 - -/* - * device data for gasket device instances. 
- * - * this structure contains the data required to manage a gasket device. - */ -struct gasket_dev { - /* pointer to the internal driver description for this device. */ - struct gasket_internal_desc *internal_desc; - - /* device info */ - struct device *dev; - - /* pci subsystem metadata. */ - struct pci_dev *pci_dev; - - /* this device's index into internal_desc->devs. */ - int dev_idx; - - /* the name of this device, as reported by the kernel. */ - char kobj_name[gasket_name_max]; - - /* virtual address of mapped bar memory range. */ - struct gasket_bar_data bar_data[pci_std_num_bars]; - - /* coherent buffer. */ - struct gasket_coherent_buffer coherent_buffer; - - /* number of page tables for this device. */ - int num_page_tables; - - /* address translations. page tables have a private implementation. */ - struct gasket_page_table *page_table[gasket_max_num_page_tables]; - - /* interrupt data for this device. */ - struct gasket_interrupt_data *interrupt_data; - - /* status for this device - gasket_status_alive or _dead. */ - uint status; - - /* number of times this device has been reset. */ - uint reset_count; - - /* dev information for the cdev node. */ - struct gasket_cdev_info dev_info; - - /* hardware revision value for this device. */ - int hardware_revision; - - /* protects access to per-device data (i.e. this structure). */ - struct mutex mutex; - - /* cdev hash tracking/membership structure, accel and legacy. */ - /* unused until accel is upstreamed. */ - struct hlist_node hlist_node; - struct hlist_node legacy_hlist_node; -}; - -/* type of the ioctl handler callback. */ -typedef long (*gasket_ioctl_handler_cb_t)(struct file *file, uint cmd, - void __user *argp); -/* type of the ioctl permissions check callback. see below. */ -typedef int (*gasket_ioctl_permissions_cb_t)(struct file *filp, uint cmd, - void __user *argp); - -/* - * device type descriptor. 
- * - * this structure contains device-specific data needed to identify and address a - * type of device to be administered via the gasket generic driver. - * - * device ids are per-driver. in other words, two drivers using the gasket - * framework will each have a distinct device 0 (for example). - */ -struct gasket_driver_desc { - /* the name of this device type. */ - const char *name; - - /* the name of this specific device model. */ - const char *chip_model; - - /* the version of the chip specified in chip_model. */ - const char *chip_version; - - /* the version of this driver: "1.0.0", "2.1.3", etc. */ - const char *driver_version; - - /* - * non-zero if we should create "legacy" (device and device-class- - * specific) character devices and sysfs nodes. - */ - /* unused until accel is upstreamed. */ - int legacy_support; - - /* major and minor numbers identifying the device. */ - int major, minor; - - /* module structure for this driver. */ - struct module *module; - - /* pci id table. */ - const struct pci_device_id *pci_id_table; - - /* the number of page tables handled by this driver. */ - int num_page_tables; - - /* the index of the bar containing the page tables. */ - int page_table_bar_index; - - /* registers used to control each page table. */ - const struct gasket_page_table_config *page_table_configs; - - /* the bit index indicating whether a pt entry is extended. */ - int page_table_extended_bit; - - /* - * legacy mmap address adjusment for legacy devices only. should be 0 - * for any new device. - */ - ulong legacy_mmap_address_offset; - - /* set of 6 bar descriptions that describe all pcie bars. - * note that bus/axi devices (i.e. non pci devices) use those. - */ - struct gasket_bar_desc bar_descriptions[pci_std_num_bars]; - - /* - * coherent buffer description. - */ - struct gasket_coherent_buffer_desc coherent_buffer_description; - - /* interrupt type. (one of gasket_interrupt_type). 
*/ - int interrupt_type; - - /* index of the bar containing the interrupt registers to program. */ - int interrupt_bar_index; - - /* number of interrupts in the gasket_interrupt_desc array */ - int num_interrupts; - - /* description of the interrupts for this device. */ - const struct gasket_interrupt_desc *interrupts; - - /* - * if this device packs multiple interrupt->msi-x mappings into a - * single register (i.e., "uses packed interrupts"), only a single bit - * width is supported for each interrupt mapping (unpacked/"full-width" - * interrupts are always supported). this value specifies that width. if - * packed interrupts are not used, this value is ignored. - */ - int interrupt_pack_width; - - /* driver callback functions - all may be null */ - /* - * device_open_cb: callback for when a device node is opened in write - * mode. - * @dev: the gasket_dev struct for this driver instance. - * - * this callback should perform device-specific setup that needs to - * occur only once when a device is first opened. - */ - int (*device_open_cb)(struct gasket_dev *dev); - - /* - * device_release_cb: callback when a device is closed. - * @gasket_dev: the gasket_dev struct for this driver instance. - * - * this callback is called whenever a device node fd is closed, as - * opposed to device_close_cb, which is called when the _last_ - * descriptor for an open file is closed. this call is intended to - * handle any per-user or per-fd cleanup. - */ - int (*device_release_cb)(struct gasket_dev *gasket_dev, - struct file *file); - - /* - * device_close_cb: callback for when a device node is closed for the - * last time. - * @dev: the gasket_dev struct for this driver instance. - * - * this callback should perform device-specific cleanup that only - * needs to occur when the last reference to a device node is closed. - * - * this call is intended to handle and device-wide cleanup, as opposed - * to per-fd cleanup (which should be handled by device_release_cb). 
- */ - int (*device_close_cb)(struct gasket_dev *dev); - - /* - * get_mappable_regions_cb: get descriptors of mappable device memory. - * @gasket_dev: pointer to the struct gasket_dev for this device. - * @bar_index: bar for which to retrieve memory ranges. - * @mappable_regions: out-pointer to the list of mappable regions on the - * device/bar for this process. - * @num_mappable_regions: out-pointer for the size of mappable_regions. - * - * called when handling mmap(), this callback is used to determine which - * regions of device memory may be mapped by the current process. this - * information is then compared to mmap request to determine which - * regions to actually map. - */ - int (*get_mappable_regions_cb)(struct gasket_dev *gasket_dev, - int bar_index, - struct gasket_mappable_region **mappable_regions, - int *num_mappable_regions); - - /* - * ioctl_permissions_cb: check permissions for generic ioctls. - * @filp: file structure pointer describing this node usage session. - * @cmd: ioctl number to handle. - * @arg: ioctl-specific data pointer. - * - * returns 1 if the ioctl may be executed, 0 otherwise. if this callback - * isn't specified a default routine will be used, that only allows the - * original device opener (i.e, the "owner") to execute state-affecting - * ioctls. - */ - gasket_ioctl_permissions_cb_t ioctl_permissions_cb; - - /* - * ioctl_handler_cb: callback to handle device-specific ioctls. - * @filp: file structure pointer describing this node usage session. - * @cmd: ioctl number to handle. - * @arg: ioctl-specific data pointer. - * - * invoked whenever an ioctl is called that the generic gasket - * framework doesn't support. if no cb is registered, unknown ioctls - * return -einval. should return an error status (either -einval or - * the error result of the ioctl being handled). - */ - gasket_ioctl_handler_cb_t ioctl_handler_cb; - - /* - * device_status_cb: callback to determine device health. 
- * @dev: pointer to the gasket_dev struct for this device. - * - * called to determine if the device is healthy or not. should return - * a member of the gasket_status_type enum. - * - */ - int (*device_status_cb)(struct gasket_dev *dev); - - /* - * hardware_revision_cb: get the device's hardware revision. - * @dev: pointer to the gasket_dev struct for this device. - * - * called to determine the reported rev of the physical hardware. - * revision should be >0. a negative return value is an error. - */ - int (*hardware_revision_cb)(struct gasket_dev *dev); - - /* - * device_reset_cb: reset the hardware in question. - * @dev: pointer to the gasket_dev structure for this device. - * - * called by reset ioctls. this function should not - * lock the gasket_dev mutex. it should return 0 on success - * and an error on failure. - */ - int (*device_reset_cb)(struct gasket_dev *dev); -}; - -/* - * register the specified device type with the framework. - * @desc: populated/initialized device type descriptor. - * - * this function does _not_ take ownership of desc; the underlying struct must - * exist until the matching call to gasket_unregister_device. - * this function should be called from your driver's module_init function. - */ -int gasket_register_device(const struct gasket_driver_desc *desc); - -/* - * remove the specified device type from the framework. - * @desc: descriptor for the device type to unregister; it should have been - * passed to gasket_register_device in a previous call. - * - * this function should be called from your driver's module_exit function. - */ -void gasket_unregister_device(const struct gasket_driver_desc *desc); - -/* add a pci gasket device. */ -int gasket_pci_add_device(struct pci_dev *pci_dev, - struct gasket_dev **gasket_devp); -/* remove a pci gasket device. */ -void gasket_pci_remove_device(struct pci_dev *pci_dev); - -/* enable a gasket device. */ -int gasket_enable_device(struct gasket_dev *gasket_dev); - -/* disable a gasket device. 
*/ -void gasket_disable_device(struct gasket_dev *gasket_dev); - -/* - * reset the gasket device. - * @gasket_dev: gasket device struct. - * - * calls device_reset_cb. returns 0 on success and an error code othewrise. - * gasket_reset_nolock will not lock the mutex, gasket_reset will. - * - */ -int gasket_reset(struct gasket_dev *gasket_dev); -int gasket_reset_nolock(struct gasket_dev *gasket_dev); - -/* - * memory management functions. these will likely be spun off into their own - * file in the future. - */ - -/* unmaps the specified mappable region from a vma. */ -int gasket_mm_unmap_region(const struct gasket_dev *gasket_dev, - struct vm_area_struct *vma, - const struct gasket_mappable_region *map_region); - -/* - * get the ioctl permissions callback. - * @gasket_dev: gasket device structure. - */ -gasket_ioctl_permissions_cb_t -gasket_get_ioctl_permissions_cb(struct gasket_dev *gasket_dev); - -/** - * lookup a name by number in a num_name table. - * @num: number to lookup. - * @table: array of num_name structures, the table for the lookup. 
- *
- */
-const char *gasket_num_name_lookup(uint num,
-				   const struct gasket_num_name *table);
-
-/* Handy inlines */
-static inline ulong gasket_dev_read_64(struct gasket_dev *gasket_dev, int bar,
-				       ulong location)
-{
-	return readq_relaxed(&gasket_dev->bar_data[bar].virt_base[location]);
-}
-
-static inline void gasket_dev_write_64(struct gasket_dev *dev, u64 value,
-				       int bar, ulong location)
-{
-	writeq_relaxed(value, &dev->bar_data[bar].virt_base[location]);
-}
-
-static inline void gasket_dev_write_32(struct gasket_dev *dev, u32 value,
-				       int bar, ulong location)
-{
-	writel_relaxed(value, &dev->bar_data[bar].virt_base[location]);
-}
-
-static inline u32 gasket_dev_read_32(struct gasket_dev *dev, int bar,
-				     ulong location)
-{
-	return readl_relaxed(&dev->bar_data[bar].virt_base[location]);
-}
-
-static inline void gasket_read_modify_write_64(struct gasket_dev *dev, int bar,
-					       ulong location, u64 value,
-					       u64 mask_width, u64 mask_shift)
-{
-	u64 mask, tmp;
-
-	tmp = gasket_dev_read_64(dev, bar, location);
-	mask = ((1ULL << mask_width) - 1) << mask_shift;
-	tmp = (tmp & ~mask) | (value << mask_shift);
-	gasket_dev_write_64(dev, tmp, bar, location);
-}
-
-static inline void gasket_read_modify_write_32(struct gasket_dev *dev, int bar,
-					       ulong location, u32 value,
-					       u32 mask_width, u32 mask_shift)
-{
-	u32 mask, tmp;
-
-	tmp = gasket_dev_read_32(dev, bar, location);
-	mask = ((1 << mask_width) - 1) << mask_shift;
-	tmp = (tmp & ~mask) | (value << mask_shift);
-	gasket_dev_write_32(dev, tmp, bar, location);
-}
-
-/* Get the Gasket driver structure for a given device. */
-const struct gasket_driver_desc *gasket_get_driver_desc(struct gasket_dev *dev);
-
-/* Get the device structure for a given device. */
-struct device *gasket_get_device(struct gasket_dev *dev);
-
-/* Helper function, asynchronous waits on a given set of bits.
*/ -int gasket_wait_with_reschedule(struct gasket_dev *gasket_dev, int bar, - u64 offset, u64 mask, u64 val, - uint max_retries, u64 delay_ms); - -#endif /* __gasket_core_h__ */ diff --git a/drivers/staging/gasket/gasket_interrupt.c b/drivers/staging/gasket/gasket_interrupt.c --- a/drivers/staging/gasket/gasket_interrupt.c +++ /dev/null -// spdx-license-identifier: gpl-2.0 -/* copyright (c) 2018 google, inc. */ - -#include "gasket_interrupt.h" - -#include "gasket_constants.h" -#include "gasket_core.h" -#include "gasket_sysfs.h" -#include <linux/device.h> -#include <linux/interrupt.h> -#include <linux/printk.h> -#ifdef gasket_kernel_trace_support -#define create_trace_points -#include <trace/events/gasket_interrupt.h> -#else -#define trace_gasket_interrupt_event(x, ...) -#endif -/* retry attempts if the requested number of interrupts aren't available. */ -#define msix_retry_count 3 - -/* instance interrupt management data. */ -struct gasket_interrupt_data { - /* the name associated with this interrupt data. */ - const char *name; - - /* interrupt type. see gasket_interrupt_type in gasket_core.h */ - int type; - - /* the pci device [if any] associated with the owning device. */ - struct pci_dev *pci_dev; - - /* set to 1 if msi-x has successfully been configred, 0 otherwise. */ - int msix_configured; - - /* the number of interrupts requested by the owning device. */ - int num_interrupts; - - /* a pointer to the interrupt descriptor struct for this device. */ - const struct gasket_interrupt_desc *interrupts; - - /* the index of the bar into which interrupts should be mapped. */ - int interrupt_bar_index; - - /* the width of a single interrupt in a packed interrupt register. */ - int pack_width; - - /* - * design-wise, these elements should be bundled together, but - * pci_enable_msix's interface requires that they be managed - * individually (requires array of struct msix_entry). - */ - - /* the number of successfully configured interrupts. 
*/ - int num_configured; - - /* the msi-x data for each requested/configured interrupt. */ - struct msix_entry *msix_entries; - - /* the eventfd "callback" data for each interrupt. */ - struct eventfd_ctx **eventfd_ctxs; - - /* the number of times each interrupt has been called. */ - ulong *interrupt_counts; - - /* linux irq number. */ - int irq; -}; - -/* structures to display interrupt counts in sysfs. */ -enum interrupt_sysfs_attribute_type { - attr_interrupt_counts, -}; - -/* set up device registers for interrupt handling. */ -static void gasket_interrupt_setup(struct gasket_dev *gasket_dev) -{ - int i; - int pack_shift; - ulong mask; - ulong value; - struct gasket_interrupt_data *interrupt_data = - gasket_dev->interrupt_data; - - if (!interrupt_data) { - dev_dbg(gasket_dev->dev, "interrupt data is not initialized "); - return; - } - - dev_dbg(gasket_dev->dev, "running interrupt setup "); - - /* setup the msix table. */ - - for (i = 0; i < interrupt_data->num_interrupts; i++) { - /* - * if the interrupt is not packed, we can write the index into - * the register directly. if not, we need to deal with a read- - * modify-write and shift based on the packing index. 
-		 */
-		dev_dbg(gasket_dev->dev,
-			"Setting up interrupt index %d with index 0x%llx and packing %d\n",
-			interrupt_data->interrupts[i].index,
-			interrupt_data->interrupts[i].reg,
-			interrupt_data->interrupts[i].packing);
-		if (interrupt_data->interrupts[i].packing == UNPACKED) {
-			value = interrupt_data->interrupts[i].index;
-		} else {
-			switch (interrupt_data->interrupts[i].packing) {
-			case PACK_0:
-				pack_shift = 0;
-				break;
-			case PACK_1:
-				pack_shift = interrupt_data->pack_width;
-				break;
-			case PACK_2:
-				pack_shift = 2 * interrupt_data->pack_width;
-				break;
-			case PACK_3:
-				pack_shift = 3 * interrupt_data->pack_width;
-				break;
-			default:
-				dev_dbg(gasket_dev->dev,
-					"Found interrupt description with unknown enum %d\n",
-					interrupt_data->interrupts[i].packing);
-				return;
-			}
-
-			mask = ~(0xFFFF << pack_shift);
-			value = gasket_dev_read_64(gasket_dev,
-						   interrupt_data->interrupt_bar_index,
-						   interrupt_data->interrupts[i].reg);
-			value &= mask;
-			value |= interrupt_data->interrupts[i].index
-				 << pack_shift;
-		}
-		gasket_dev_write_64(gasket_dev, value,
-				    interrupt_data->interrupt_bar_index,
-				    interrupt_data->interrupts[i].reg);
-	}
-}
-
-static void
-gasket_handle_interrupt(struct gasket_interrupt_data *interrupt_data,
-			int interrupt_index)
-{
-	struct eventfd_ctx *ctx;
-
-	trace_gasket_interrupt_event(interrupt_data->name, interrupt_index);
-	ctx = interrupt_data->eventfd_ctxs[interrupt_index];
-	if (ctx)
-		eventfd_signal(ctx, 1);
-
-	++(interrupt_data->interrupt_counts[interrupt_index]);
-}
-
-static irqreturn_t gasket_msix_interrupt_handler(int irq, void *dev_id)
-{
-	struct gasket_interrupt_data *interrupt_data = dev_id;
-	int interrupt = -1;
-	int i;
-
-	/* If this linear lookup is a problem, we can maintain a map/hash.
*/ - for (i = 0; i < interrupt_data->num_interrupts; i++) { - if (interrupt_data->msix_entries[i].vector == irq) { - interrupt = interrupt_data->msix_entries[i].entry; - break; - } - } - if (interrupt == -1) { - pr_err("received unknown irq %d ", irq); - return irq_handled; - } - gasket_handle_interrupt(interrupt_data, interrupt); - return irq_handled; -} - -static int -gasket_interrupt_msix_init(struct gasket_interrupt_data *interrupt_data) -{ - int ret = 1; - int i; - - interrupt_data->msix_entries = - kcalloc(interrupt_data->num_interrupts, - sizeof(*interrupt_data->msix_entries), gfp_kernel); - if (!interrupt_data->msix_entries) - return -enomem; - - for (i = 0; i < interrupt_data->num_interrupts; i++) { - interrupt_data->msix_entries[i].entry = i; - interrupt_data->msix_entries[i].vector = 0; - interrupt_data->eventfd_ctxs[i] = null; - } - - /* retry msix_retry_count times if not enough irqs are available. */ - for (i = 0; i < msix_retry_count && ret > 0; i++) - ret = pci_enable_msix_exact(interrupt_data->pci_dev, - interrupt_data->msix_entries, - interrupt_data->num_interrupts); - - if (ret) - return ret > 0 ? -ebusy : ret; - interrupt_data->msix_configured = 1; - - for (i = 0; i < interrupt_data->num_interrupts; i++) { - ret = request_irq(interrupt_data->msix_entries[i].vector, - gasket_msix_interrupt_handler, 0, - interrupt_data->name, interrupt_data); - - if (ret) { - dev_err(&interrupt_data->pci_dev->dev, - "cannot get irq for interrupt %d, vector %d; " - "%d ", - i, interrupt_data->msix_entries[i].vector, ret); - return ret; - } - - interrupt_data->num_configured++; - } - - return 0; -} - -/* - * on qcm dragonboard, we exit gasket_interrupt_msix_init() and kernel interrupt - * setup code with msix vectors masked. this is wrong because nothing else in - * the driver will normally touch the msix vectors. - * - * as a temporary hack, force unmasking there. 
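On each interrupt the handler above calls `eventfd_signal(ctx, 1)`, which adds 1 to the eventfd's counter; a userspace process that registered the eventfd blocks in `read(2)` and receives the count accumulated since its last read. A self-contained sketch of that userspace-side behaviour (the `wait_for_interrupts_demo` name is invented, and the three `write` calls merely simulate three `eventfd_signal` invocations):

```c
/*
 * Userspace-side sketch of eventfd-based interrupt delivery: writes stand
 * in for eventfd_signal(ctx, 1) from the IRQ handler; read(2) returns the
 * accumulated count and resets the counter (non-EFD_SEMAPHORE mode).
 */
#include <assert.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

static uint64_t wait_for_interrupts_demo(void)
{
	int efd = eventfd(0, 0);
	uint64_t one = 1, count = 0;

	assert(efd >= 0);
	/* simulate three interrupts being signaled */
	assert(write(efd, &one, sizeof(one)) == sizeof(one));
	assert(write(efd, &one, sizeof(one)) == sizeof(one));
	assert(write(efd, &one, sizeof(one)) == sizeof(one));

	assert(read(efd, &count, sizeof(count)) == sizeof(count));
	close(efd);
	return count;	/* 3: counts are coalesced into one read */
}
```

The coalescing is why the driver also keeps per-interrupt `interrupt_counts`: an eventfd read tells you how many signals arrived, not when each one did.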
- * - * todo: figure out why qcm kernel doesn't unmask the msix vectors, after - * gasket_interrupt_msix_init(), and remove this code. - */ -static void force_msix_interrupt_unmasking(struct gasket_dev *gasket_dev) -{ - int i; -#define msix_vector_size 16 -#define msix_mask_bit_offset 12 -#define apex_bar2_reg_kernel_hib_msix_table 0x46800 - for (i = 0; i < gasket_dev->interrupt_data->num_configured; i++) { - /* check if the msix vector is unmasked */ - ulong location = apex_bar2_reg_kernel_hib_msix_table + - msix_mask_bit_offset + i * msix_vector_size; - u32 mask = - gasket_dev_read_32(gasket_dev, - gasket_dev->interrupt_data->interrupt_bar_index, - location); - if (!(mask & 1)) - continue; - /* unmask the msix vector (clear 32 bits) */ - gasket_dev_write_32(gasket_dev, 0, - gasket_dev->interrupt_data->interrupt_bar_index, - location); - } -#undef msix_vector_size -#undef msix_mask_bit_offset -#undef apex_bar2_reg_kernel_hib_msix_table -} - -static ssize_t interrupt_sysfs_show(struct device *device, - struct device_attribute *attr, char *buf) -{ - int i, ret; - ssize_t written = 0, total_written = 0; - struct gasket_interrupt_data *interrupt_data; - struct gasket_dev *gasket_dev; - struct gasket_sysfs_attribute *gasket_attr; - enum interrupt_sysfs_attribute_type sysfs_type; - - gasket_dev = gasket_sysfs_get_device_data(device); - if (!gasket_dev) { - dev_dbg(device, "no sysfs mapping found for device "); - return 0; - } - - gasket_attr = gasket_sysfs_get_attr(device, attr); - if (!gasket_attr) { - dev_dbg(device, "no sysfs attr data found for device "); - gasket_sysfs_put_device_data(device, gasket_dev); - return 0; - } - - sysfs_type = (enum interrupt_sysfs_attribute_type) - gasket_attr->data.attr_type; - interrupt_data = gasket_dev->interrupt_data; - switch (sysfs_type) { - case attr_interrupt_counts: - for (i = 0; i < interrupt_data->num_interrupts; ++i) { - written = - scnprintf(buf, page_size - total_written, - "0x%02x: %ld ", i, - 
interrupt_data->interrupt_counts[i]); - total_written += written; - buf += written; - } - ret = total_written; - break; - default: - dev_dbg(gasket_dev->dev, "unknown attribute: %s ", - attr->attr.name); - ret = 0; - break; - } - - gasket_sysfs_put_attr(device, gasket_attr); - gasket_sysfs_put_device_data(device, gasket_dev); - return ret; -} - -static struct gasket_sysfs_attribute interrupt_sysfs_attrs[] = { - gasket_sysfs_ro(interrupt_counts, interrupt_sysfs_show, - attr_interrupt_counts), - gasket_end_of_attr_array, -}; - -int gasket_interrupt_init(struct gasket_dev *gasket_dev) -{ - int ret; - struct gasket_interrupt_data *interrupt_data; - const struct gasket_driver_desc *driver_desc = - gasket_get_driver_desc(gasket_dev); - - interrupt_data = kzalloc(sizeof(*interrupt_data), gfp_kernel); - if (!interrupt_data) - return -enomem; - gasket_dev->interrupt_data = interrupt_data; - interrupt_data->name = driver_desc->name; - interrupt_data->type = driver_desc->interrupt_type; - interrupt_data->pci_dev = gasket_dev->pci_dev; - interrupt_data->num_interrupts = driver_desc->num_interrupts; - interrupt_data->interrupts = driver_desc->interrupts; - interrupt_data->interrupt_bar_index = driver_desc->interrupt_bar_index; - interrupt_data->pack_width = driver_desc->interrupt_pack_width; - interrupt_data->num_configured = 0; - - interrupt_data->eventfd_ctxs = - kcalloc(driver_desc->num_interrupts, - sizeof(*interrupt_data->eventfd_ctxs), gfp_kernel); - if (!interrupt_data->eventfd_ctxs) { - kfree(interrupt_data); - return -enomem; - } - - interrupt_data->interrupt_counts = - kcalloc(driver_desc->num_interrupts, - sizeof(*interrupt_data->interrupt_counts), gfp_kernel); - if (!interrupt_data->interrupt_counts) { - kfree(interrupt_data->eventfd_ctxs); - kfree(interrupt_data); - return -enomem; - } - - switch (interrupt_data->type) { - case pci_msix: - ret = gasket_interrupt_msix_init(interrupt_data); - if (ret) - break; - force_msix_interrupt_unmasking(gasket_dev); - break; - 
- default: - ret = -einval; - } - - if (ret) { - /* failing to setup interrupts will cause the device to report - * gasket_status_lamed. but it is not fatal. - */ - dev_warn(gasket_dev->dev, - "couldn't initialize interrupts: %d ", ret); - return 0; - } - - gasket_interrupt_setup(gasket_dev); - gasket_sysfs_create_entries(gasket_dev->dev_info.device, - interrupt_sysfs_attrs); - - return 0; -} - -static void -gasket_interrupt_msix_cleanup(struct gasket_interrupt_data *interrupt_data) -{ - int i; - - for (i = 0; i < interrupt_data->num_configured; i++) - free_irq(interrupt_data->msix_entries[i].vector, - interrupt_data); - interrupt_data->num_configured = 0; - - if (interrupt_data->msix_configured) - pci_disable_msix(interrupt_data->pci_dev); - interrupt_data->msix_configured = 0; - kfree(interrupt_data->msix_entries); -} - -int gasket_interrupt_reinit(struct gasket_dev *gasket_dev) -{ - int ret; - - if (!gasket_dev->interrupt_data) { - dev_dbg(gasket_dev->dev, - "attempted to reinit uninitialized interrupt data "); - return -einval; - } - - switch (gasket_dev->interrupt_data->type) { - case pci_msix: - gasket_interrupt_msix_cleanup(gasket_dev->interrupt_data); - ret = gasket_interrupt_msix_init(gasket_dev->interrupt_data); - if (ret) - break; - force_msix_interrupt_unmasking(gasket_dev); - break; - - default: - ret = -einval; - } - - if (ret) { - /* failing to setup interrupts will cause the device - * to report gasket_status_lamed, but is not fatal. - */ - dev_warn(gasket_dev->dev, "couldn't reinit interrupts: %d ", - ret); - return 0; - } - - gasket_interrupt_setup(gasket_dev); - - return 0; -} - -/* see gasket_interrupt.h for description. 
*/ -int gasket_interrupt_reset_counts(struct gasket_dev *gasket_dev) -{ - dev_dbg(gasket_dev->dev, "clearing interrupt counts "); - memset(gasket_dev->interrupt_data->interrupt_counts, 0, - gasket_dev->interrupt_data->num_interrupts * - sizeof(*gasket_dev->interrupt_data->interrupt_counts)); - return 0; -} - -/* see gasket_interrupt.h for description. */ -void gasket_interrupt_cleanup(struct gasket_dev *gasket_dev) -{ - struct gasket_interrupt_data *interrupt_data = - gasket_dev->interrupt_data; - /* - * it is possible to get an error code from gasket_interrupt_init - * before interrupt_data has been allocated, so check it. - */ - if (!interrupt_data) - return; - - switch (interrupt_data->type) { - case pci_msix: - gasket_interrupt_msix_cleanup(interrupt_data); - break; - - default: - break; - } - - kfree(interrupt_data->interrupt_counts); - kfree(interrupt_data->eventfd_ctxs); - kfree(interrupt_data); - gasket_dev->interrupt_data = null; -} - -int gasket_interrupt_system_status(struct gasket_dev *gasket_dev) -{ - if (!gasket_dev->interrupt_data) { - dev_dbg(gasket_dev->dev, "interrupt data is null "); - return gasket_status_dead; - } - - if (gasket_dev->interrupt_data->num_configured != - gasket_dev->interrupt_data->num_interrupts) { - dev_dbg(gasket_dev->dev, - "not all interrupts were configured "); - return gasket_status_lamed; - } - - return gasket_status_alive; -} - -int gasket_interrupt_set_eventfd(struct gasket_interrupt_data *interrupt_data, - int interrupt, int event_fd) -{ - struct eventfd_ctx *ctx; - - if (interrupt < 0 || interrupt >= interrupt_data->num_interrupts) - return -einval; - - ctx = eventfd_ctx_fdget(event_fd); - - if (is_err(ctx)) - return ptr_err(ctx); - - interrupt_data->eventfd_ctxs[interrupt] = ctx; - return 0; -} - -int gasket_interrupt_clear_eventfd(struct gasket_interrupt_data *interrupt_data, - int interrupt) -{ - if (interrupt < 0 || interrupt >= interrupt_data->num_interrupts) - return -einval; - - if 
(interrupt_data->eventfd_ctxs[interrupt]) { - eventfd_ctx_put(interrupt_data->eventfd_ctxs[interrupt]); - interrupt_data->eventfd_ctxs[interrupt] = null; - } - return 0; -} diff --git a/drivers/staging/gasket/gasket_interrupt.h b/drivers/staging/gasket/gasket_interrupt.h --- a/drivers/staging/gasket/gasket_interrupt.h +++ /dev/null -/* spdx-license-identifier: gpl-2.0 */ -/* - * gasket common interrupt module. defines functions for enabling - * eventfd-triggered interrupts between a gasket device and a host process. - * - * copyright (c) 2018 google, inc. - */ -#ifndef __gasket_interrupt_h__ -#define __gasket_interrupt_h__ - -#include <linux/eventfd.h> -#include <linux/pci.h> - -#include "gasket_core.h" - -/* note that this currently assumes that device interrupts are a dense set, - * numbered from 0 - (num_interrupts - 1). should this have to change, these - * apis will have to be updated. - */ - -/* opaque type used to hold interrupt subsystem data. */ -struct gasket_interrupt_data; - -/* - * initialize the interrupt module. - * @gasket_dev: the gasket device structure for the device to be initted. - */ -int gasket_interrupt_init(struct gasket_dev *gasket_dev); - -/* - * clean up a device's interrupt structure. - * @gasket_dev: the gasket information structure for this device. - * - * cleans up the device's interrupts and deallocates data. - */ -void gasket_interrupt_cleanup(struct gasket_dev *gasket_dev); - -/* - * clean up and re-initialize the msi-x subsystem. - * @gasket_dev: the gasket information structure for this device. - * - * performs a teardown of the msi-x subsystem and re-initializes it. does not - * free the underlying data structures. returns 0 on success and an error code - * on error. - */ -int gasket_interrupt_reinit(struct gasket_dev *gasket_dev); - -/* - * reset the counts stored in the interrupt subsystem. - * @gasket_dev: the gasket information structure for this device. - * - * sets the counts of all interrupts in the subsystem to 0. 
- */ -int gasket_interrupt_reset_counts(struct gasket_dev *gasket_dev); - -/* - * associates an eventfd with a device interrupt. - * @data: pointer to device interrupt data. - * @interrupt: the device interrupt to configure. - * @event_fd: the eventfd to associate with the interrupt. - * - * prepares the host to receive notification of device interrupts by associating - * event_fd with interrupt. upon receipt of a device interrupt, event_fd will be - * signaled, after successful configuration. - * - * returns 0 on success, a negative error code otherwise. - */ -int gasket_interrupt_set_eventfd(struct gasket_interrupt_data *interrupt_data, - int interrupt, int event_fd); - -/* - * removes an interrupt-eventfd association. - * @data: pointer to device interrupt data. - * @interrupt: the device interrupt to de-associate. - * - * removes any eventfd associated with the specified interrupt, if any. - */ -int gasket_interrupt_clear_eventfd(struct gasket_interrupt_data *interrupt_data, - int interrupt); - -/* - * the below functions exist for backwards compatibility. - * no new uses should be written. - */ -/* - * get the health of the interrupt subsystem. - * @gasket_dev: the gasket device struct. - * - * returns dead if not set up, lamed if initialization failed, and alive - * otherwise. - */ - -int gasket_interrupt_system_status(struct gasket_dev *gasket_dev); - -#endif diff --git a/drivers/staging/gasket/gasket_ioctl.c b/drivers/staging/gasket/gasket_ioctl.c --- a/drivers/staging/gasket/gasket_ioctl.c +++ /dev/null -// spdx-license-identifier: gpl-2.0 -/* copyright (c) 2018 google, inc. 
*/ -#include "gasket.h" -#include "gasket_ioctl.h" -#include "gasket_constants.h" -#include "gasket_core.h" -#include "gasket_interrupt.h" -#include "gasket_page_table.h" -#include <linux/compiler.h> -#include <linux/device.h> -#include <linux/fs.h> -#include <linux/uaccess.h> - -#ifdef gasket_kernel_trace_support -#define create_trace_points -#include <trace/events/gasket_ioctl.h> -#else -#define trace_gasket_ioctl_entry(x, ...) -#define trace_gasket_ioctl_exit(x) -#define trace_gasket_ioctl_integer_data(x) -#define trace_gasket_ioctl_eventfd_data(x, ...) -#define trace_gasket_ioctl_page_table_data(x, ...) -#define trace_gasket_ioctl_config_coherent_allocator(x, ...) -#endif - -/* associate an eventfd with an interrupt. */ -static int gasket_set_event_fd(struct gasket_dev *gasket_dev, - struct gasket_interrupt_eventfd __user *argp) -{ - struct gasket_interrupt_eventfd die; - - if (copy_from_user(&die, argp, sizeof(struct gasket_interrupt_eventfd))) - return -efault; - - trace_gasket_ioctl_eventfd_data(die.interrupt, die.event_fd); - - return gasket_interrupt_set_eventfd(gasket_dev->interrupt_data, - die.interrupt, die.event_fd); -} - -/* read the size of the page table. */ -static int gasket_read_page_table_size(struct gasket_dev *gasket_dev, - struct gasket_page_table_ioctl __user *argp) -{ - int ret = 0; - struct gasket_page_table_ioctl ibuf; - struct gasket_page_table *table; - - if (copy_from_user(&ibuf, argp, sizeof(struct gasket_page_table_ioctl))) - return -efault; - - if (ibuf.page_table_index >= gasket_dev->num_page_tables) - return -efault; - - table = gasket_dev->page_table[ibuf.page_table_index]; - ibuf.size = gasket_page_table_num_entries(table); - - trace_gasket_ioctl_page_table_data(ibuf.page_table_index, ibuf.size, - ibuf.host_address, - ibuf.device_address); - - if (copy_to_user(argp, &ibuf, sizeof(ibuf))) - return -efault; - - return ret; -} - -/* read the size of the simple page table. 
*/ -static int gasket_read_simple_page_table_size(struct gasket_dev *gasket_dev, - struct gasket_page_table_ioctl __user *argp) -{ - int ret = 0; - struct gasket_page_table_ioctl ibuf; - struct gasket_page_table *table; - - if (copy_from_user(&ibuf, argp, sizeof(struct gasket_page_table_ioctl))) - return -efault; - - if (ibuf.page_table_index >= gasket_dev->num_page_tables) - return -efault; - - table = gasket_dev->page_table[ibuf.page_table_index]; - ibuf.size = gasket_page_table_num_simple_entries(table); - - trace_gasket_ioctl_page_table_data(ibuf.page_table_index, ibuf.size, - ibuf.host_address, - ibuf.device_address); - - if (copy_to_user(argp, &ibuf, sizeof(ibuf))) - return -efault; - - return ret; -} - -/* set the boundary between the simple and extended page tables. */ -static int gasket_partition_page_table(struct gasket_dev *gasket_dev, - struct gasket_page_table_ioctl __user *argp) -{ - int ret; - struct gasket_page_table_ioctl ibuf; - uint max_page_table_size; - struct gasket_page_table *table; - - if (copy_from_user(&ibuf, argp, sizeof(struct gasket_page_table_ioctl))) - return -efault; - - trace_gasket_ioctl_page_table_data(ibuf.page_table_index, ibuf.size, - ibuf.host_address, - ibuf.device_address); - - if (ibuf.page_table_index >= gasket_dev->num_page_tables) - return -efault; - table = gasket_dev->page_table[ibuf.page_table_index]; - max_page_table_size = gasket_page_table_max_size(table); - - if (ibuf.size > max_page_table_size) { - dev_dbg(gasket_dev->dev, - "partition request 0x%llx too large, max is 0x%x ", - ibuf.size, max_page_table_size); - return -einval; - } - - mutex_lock(&gasket_dev->mutex); - - ret = gasket_page_table_partition(table, ibuf.size); - mutex_unlock(&gasket_dev->mutex); - - return ret; -} - -/* map a userspace buffer to a device virtual address. 
*/ -static int gasket_map_buffers(struct gasket_dev *gasket_dev, - struct gasket_page_table_ioctl __user *argp) -{ - struct gasket_page_table_ioctl ibuf; - struct gasket_page_table *table; - - if (copy_from_user(&ibuf, argp, sizeof(struct gasket_page_table_ioctl))) - return -efault; - - trace_gasket_ioctl_page_table_data(ibuf.page_table_index, ibuf.size, - ibuf.host_address, - ibuf.device_address); - - if (ibuf.page_table_index >= gasket_dev->num_page_tables) - return -efault; - - table = gasket_dev->page_table[ibuf.page_table_index]; - if (gasket_page_table_are_addrs_bad(table, ibuf.host_address, - ibuf.device_address, ibuf.size)) - return -einval; - - return gasket_page_table_map(table, ibuf.host_address, ibuf.device_address, - ibuf.size / page_size); -} - -/* unmap a userspace buffer from a device virtual address. */ -static int gasket_unmap_buffers(struct gasket_dev *gasket_dev, - struct gasket_page_table_ioctl __user *argp) -{ - struct gasket_page_table_ioctl ibuf; - struct gasket_page_table *table; - - if (copy_from_user(&ibuf, argp, sizeof(struct gasket_page_table_ioctl))) - return -efault; - - trace_gasket_ioctl_page_table_data(ibuf.page_table_index, ibuf.size, - ibuf.host_address, - ibuf.device_address); - - if (ibuf.page_table_index >= gasket_dev->num_page_tables) - return -efault; - - table = gasket_dev->page_table[ibuf.page_table_index]; - if (gasket_page_table_is_dev_addr_bad(table, ibuf.device_address, ibuf.size)) - return -einval; - - gasket_page_table_unmap(table, ibuf.device_address, ibuf.size / page_size); - - return 0; -} - -/* - * reserve structures for coherent allocation, and allocate or free the - * corresponding memory. 
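Both the map and unmap ioctls above convert the user-supplied byte count to a page count with `ibuf.size / page_size`, an integer division that silently drops any partial trailing page. A trivial sketch making that explicit (`bytes_to_pages_truncating` and the 4 KiB constant are illustrative, not driver API):

```c
/* Byte-count to page-count conversion as used by the map/unmap ioctls:
 * plain integer division, so remainder bytes are simply discarded.
 */
#include <assert.h>

#define DEMO_PAGE_SIZE 4096UL

static unsigned long bytes_to_pages_truncating(unsigned long size)
{
	return size / DEMO_PAGE_SIZE;	/* 4097 bytes -> 1 page, not 2 */
}
```

Callers are therefore expected to pass page-multiple sizes; the `gasket_page_table_are_addrs_bad` check presumably rejects anything else, though its body is outside this hunk.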
- */ -static int gasket_config_coherent_allocator(struct gasket_dev *gasket_dev, - struct gasket_coherent_alloc_config_ioctl __user *argp) -{ - int ret; - struct gasket_coherent_alloc_config_ioctl ibuf; - - if (copy_from_user(&ibuf, argp, - sizeof(struct gasket_coherent_alloc_config_ioctl))) - return -efault; - - trace_gasket_ioctl_config_coherent_allocator(ibuf.enable, ibuf.size, - ibuf.dma_address); - - if (ibuf.page_table_index >= gasket_dev->num_page_tables) - return -efault; - - if (ibuf.size > page_size * max_num_coherent_pages) - return -enomem; - - if (ibuf.enable == 0) { - ret = gasket_free_coherent_memory(gasket_dev, ibuf.size, - ibuf.dma_address, - ibuf.page_table_index); - } else { - ret = gasket_alloc_coherent_memory(gasket_dev, ibuf.size, - &ibuf.dma_address, - ibuf.page_table_index); - } - if (ret) - return ret; - if (copy_to_user(argp, &ibuf, sizeof(ibuf))) - return -efault; - - return 0; -} - -/* check permissions for gasket ioctls. */ -static bool gasket_ioctl_check_permissions(struct file *filp, uint cmd) -{ - bool alive; - bool read, write; - struct gasket_dev *gasket_dev = (struct gasket_dev *)filp->private_data; - - alive = (gasket_dev->status == gasket_status_alive); - if (!alive) - dev_dbg(gasket_dev->dev, "%s alive %d status %d ", - __func__, alive, gasket_dev->status); - - read = !!(filp->f_mode & fmode_read); - write = !!(filp->f_mode & fmode_write); - - switch (cmd) { - case gasket_ioctl_reset: - case gasket_ioctl_clear_interrupt_counts: - return write; - - case gasket_ioctl_page_table_size: - case gasket_ioctl_simple_page_table_size: - case gasket_ioctl_number_page_tables: - return read; - - case gasket_ioctl_partition_page_table: - case gasket_ioctl_config_coherent_allocator: - return alive && write; - - case gasket_ioctl_map_buffer: - case gasket_ioctl_unmap_buffer: - return alive && write; - - case gasket_ioctl_clear_eventfd: - case gasket_ioctl_set_eventfd: - return alive && write; - } - - return false; /* unknown permissions */ -} 
- -/* - * standard ioctl dispatch function. - * @filp: file structure pointer describing this node usage session. - * @cmd: ioctl number to handle. - * @argp: ioctl-specific data pointer. - * - * standard ioctl dispatcher; forwards operations to individual handlers. - */ -long gasket_handle_ioctl(struct file *filp, uint cmd, void __user *argp) -{ - struct gasket_dev *gasket_dev; - unsigned long arg = (unsigned long)argp; - gasket_ioctl_permissions_cb_t ioctl_permissions_cb; - int retval; - - gasket_dev = (struct gasket_dev *)filp->private_data; - trace_gasket_ioctl_entry(gasket_dev->dev_info.name, cmd); - - ioctl_permissions_cb = gasket_get_ioctl_permissions_cb(gasket_dev); - if (ioctl_permissions_cb) { - retval = ioctl_permissions_cb(filp, cmd, argp); - if (retval < 0) { - trace_gasket_ioctl_exit(retval); - return retval; - } else if (retval == 0) { - trace_gasket_ioctl_exit(-eperm); - return -eperm; - } - } else if (!gasket_ioctl_check_permissions(filp, cmd)) { - trace_gasket_ioctl_exit(-eperm); - dev_dbg(gasket_dev->dev, "ioctl cmd=%x noperm ", cmd); - return -eperm; - } - - /* tracing happens in this switch statement for all ioctls with - * an integer argrument, but ioctls with a struct argument - * that needs copying and decoding, that tracing is done within - * the handler call. 
- */ - switch (cmd) { - case gasket_ioctl_reset: - retval = gasket_reset(gasket_dev); - break; - case gasket_ioctl_set_eventfd: - retval = gasket_set_event_fd(gasket_dev, argp); - break; - case gasket_ioctl_clear_eventfd: - trace_gasket_ioctl_integer_data(arg); - retval = - gasket_interrupt_clear_eventfd(gasket_dev->interrupt_data, - (int)arg); - break; - case gasket_ioctl_partition_page_table: - trace_gasket_ioctl_integer_data(arg); - retval = gasket_partition_page_table(gasket_dev, argp); - break; - case gasket_ioctl_number_page_tables: - trace_gasket_ioctl_integer_data(gasket_dev->num_page_tables); - if (copy_to_user(argp, &gasket_dev->num_page_tables, - sizeof(uint64_t))) - retval = -efault; - else - retval = 0; - break; - case gasket_ioctl_page_table_size: - retval = gasket_read_page_table_size(gasket_dev, argp); - break; - case gasket_ioctl_simple_page_table_size: - retval = gasket_read_simple_page_table_size(gasket_dev, argp); - break; - case gasket_ioctl_map_buffer: - retval = gasket_map_buffers(gasket_dev, argp); - break; - case gasket_ioctl_config_coherent_allocator: - retval = gasket_config_coherent_allocator(gasket_dev, argp); - break; - case gasket_ioctl_unmap_buffer: - retval = gasket_unmap_buffers(gasket_dev, argp); - break; - case gasket_ioctl_clear_interrupt_counts: - /* clear interrupt counts doesn't take an arg, so use 0. */ - trace_gasket_ioctl_integer_data(0); - retval = gasket_interrupt_reset_counts(gasket_dev); - break; - default: - /* if we don't understand the ioctl, the best we can do is trace - * the arg. - */ - trace_gasket_ioctl_integer_data(arg); - dev_dbg(gasket_dev->dev, - "unknown ioctl cmd=0x%x not caught by gasket_is_supported_ioctl ", - cmd); - retval = -einval; - break; - } - - trace_gasket_ioctl_exit(retval); - return retval; -} - -/* - * determines if an ioctl is part of the standard gasket framework. - * @cmd: the ioctl number to handle. - * - * returns 1 if the ioctl is supported and 0 otherwise. 
- */ -long gasket_is_supported_ioctl(uint cmd) -{ - switch (cmd) { - case gasket_ioctl_reset: - case gasket_ioctl_set_eventfd: - case gasket_ioctl_clear_eventfd: - case gasket_ioctl_partition_page_table: - case gasket_ioctl_number_page_tables: - case gasket_ioctl_page_table_size: - case gasket_ioctl_simple_page_table_size: - case gasket_ioctl_map_buffer: - case gasket_ioctl_unmap_buffer: - case gasket_ioctl_clear_interrupt_counts: - case gasket_ioctl_config_coherent_allocator: - return 1; - default: - return 0; - } -} diff --git a/drivers/staging/gasket/gasket_ioctl.h b/drivers/staging/gasket/gasket_ioctl.h --- a/drivers/staging/gasket/gasket_ioctl.h +++ /dev/null -/* spdx-license-identifier: gpl-2.0 */ -/* copyright (c) 2018 google, inc. */ -#ifndef __gasket_ioctl_h__ -#define __gasket_ioctl_h__ - -#include "gasket_core.h" - -#include <linux/compiler.h> - -/* - * handle gasket common ioctls. - * @filp: pointer to the ioctl's file. - * @cmd: ioctl command. - * @arg: ioctl argument pointer. - * - * returns 0 on success and nonzero on failure. - */ -long gasket_handle_ioctl(struct file *filp, uint cmd, void __user *argp); - -/* - * determines if an ioctl is part of the standard gasket framework. - * @cmd: the ioctl number to handle. - * - * returns 1 if the ioctl is supported and 0 otherwise. - */ -long gasket_is_supported_ioctl(uint cmd); - -#endif diff --git a/drivers/staging/gasket/gasket_page_table.c b/drivers/staging/gasket/gasket_page_table.c --- a/drivers/staging/gasket/gasket_page_table.c +++ /dev/null -// spdx-license-identifier: gpl-2.0 -/* - * implementation of gasket page table support. - * - * copyright (c) 2018 google, inc. - */ - -/* - * implementation of gasket page table support. - * - * this file assumes 4kb pages throughout; can be factored out when necessary. - * - * there is a configurable number of page table entries, as well as a - * configurable bit index for the extended address flag. 
both of these are - * specified in gasket_page_table_init through the page_table_config parameter. - * - * the following example assumes: - * page_table_config->total_entries = 8192 - * page_table_config->extended_bit = 63 - * - * address format: - * simple addresses - those whose containing pages are directly placed in the - * device's address translation registers - are laid out as: - * [ 63 - 25: 0 | 24 - 12: page index | 11 - 0: page offset ] - * page index: the index of the containing page in the device's address - * translation registers. - * page offset: the index of the address into the containing page. - * - * extended address - those whose containing pages are contained in a second- - * level page table whose address is present in the device's address translation - * registers - are laid out as: - * [ 63: flag | 62 - 34: 0 | 33 - 21: dev/level 0 index | - * 20 - 12: host/level 1 index | 11 - 0: page offset ] - * flag: marker indicating that this is an extended address. always 1. - * dev index: the index of the first-level page in the device's extended - * address translation registers. - * host index: the index of the containing page in the [host-resident] second- - * level page table. - * page offset: the index of the address into the containing [second-level] - * page. - */ -#include "gasket_page_table.h" - -#include <linux/device.h> -#include <linux/file.h> -#include <linux/init.h> -#include <linux/kernel.h> -#include <linux/module.h> -#include <linux/moduleparam.h> -#include <linux/pagemap.h> -#include <linux/vmalloc.h> - -#include "gasket_constants.h" -#include "gasket_core.h" - -/* constants & utility macros */ -/* the number of pages that can be mapped into each second-level page table. */ -#define gasket_pages_per_subtable 512 - -/* the starting position of the page index in a simple virtual address. */ -#define gasket_simple_page_shift 12 - -/* flag indicating that a [device] slot is valid for use. 
*/ -#define gasket_valid_slot_flag 1 - -/* - * the starting position of the level 0 page index (i.e., the entry in the - * device's extended address registers) in an extended address. - * also can be thought of as (log2(page_size) + log2(pages_per_subtable)), - * or (12 + 9). - */ -#define gasket_extended_lvl0_shift 21 - -/* - * number of first level pages that gasket chips support. equivalent to - * log2(num_lvl0_page_tables) - * - * at a maximum, allowing for a 34 bits address space (or 16gb) - * = gasket_extended_lvl0_width + (log2(page_size) + log2(pages_per_subtable) - * or, = 13 + 9 + 12 - */ -#define gasket_extended_lvl0_width 13 - -/* - * the starting position of the level 1 page index (i.e., the entry in the - * host second-level/sub- table) in an extended address. - */ -#define gasket_extended_lvl1_shift 12 - -/* type declarations */ -/* valid states for a struct gasket_page_table_entry. */ -enum pte_status { - pte_free, - pte_inuse, -}; - -/* - * mapping metadata for a single page. - * - * in this file, host-side page table entries are referred to as that (or ptes). - * where device vs. host entries are differentiated, device-side or -visible - * entries are called "slots". a slot may be either an entry in the device's - * address translation table registers or an entry in a second-level page - * table ("subtable"). - * - * the full data in this structure is visible on the host [of course]. only - * the address contained in dma_addr is communicated to the device; that points - * to the actual page mapped and described by this structure. - */ -struct gasket_page_table_entry { - /* the status of this entry/slot: free or in use. */ - enum pte_status status; - - /* - * index for alignment into host vaddrs. - * when a user specifies a host address for a mapping, that address may - * not be page-aligned. offset is the index into the containing page of - * the host address (i.e., host_vaddr & (page_size - 1)). 
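The address-format comment and shift constants above fully determine how an extended address decomposes: bit 63 is the extended flag, bits 33-21 the level-0 (device) index, bits 20-12 the level-1 (host subtable) index, bits 11-0 the page offset. A standalone sketch of that decomposition (all `demo_*` names are invented; the constants mirror the ones defined above):

```c
/*
 * Decomposition of an extended gasket address per the documented layout:
 * [63: flag | 33-21: lvl0 index | 20-12: lvl1 index | 11-0: page offset]
 */
#include <assert.h>

typedef unsigned long long u64;

#define DEMO_EXTENDED_FLAG	(1ULL << 63)
#define DEMO_LVL0_SHIFT		21	/* log2(4096) + log2(512) */
#define DEMO_LVL0_WIDTH		13
#define DEMO_LVL1_SHIFT		12
#define DEMO_PAGES_PER_SUBTABLE	512

static int demo_is_extended(u64 addr)
{
	return !!(addr & DEMO_EXTENDED_FLAG);
}

static u64 demo_lvl0_index(u64 addr)	/* slot in the device registers */
{
	return (addr >> DEMO_LVL0_SHIFT) & ((1ULL << DEMO_LVL0_WIDTH) - 1);
}

static u64 demo_lvl1_index(u64 addr)	/* slot in the host subtable */
{
	return (addr >> DEMO_LVL1_SHIFT) & (DEMO_PAGES_PER_SUBTABLE - 1);
}

static u64 demo_page_offset(u64 addr)
{
	return addr & 0xfff;
}
```

This also shows where the 34-bit (16 GB) limit in the comment comes from: 13 + 9 + 12 bits of lvl0 index, lvl1 index, and page offset.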
- * this is necessary for translating between user-specified addresses - * and page-aligned addresses. - */ - int offset; - - /* address of the page in dma space. */ - dma_addr_t dma_addr; - - /* linux page descriptor for the page described by this structure. */ - struct page *page; - - /* - * if this is an extended and first-level entry, sublevel points - * to the second-level entries underneath this entry. - */ - struct gasket_page_table_entry *sublevel; -}; - -/* - * maintains virtual to physical address mapping for a coherent page that is - * allocated by this module for a given device. - * note that coherent pages mappings virt mapping cannot be tracked by the - * linux kernel, and coherent pages don't have a struct page associated, - * hence linux kernel cannot perform a get_user_page_xx() on a phys address - * that was allocated coherent. - * this structure trivially implements this mechanism. - */ -struct gasket_coherent_page_entry { - /* phys address, dma'able by the owner device */ - dma_addr_t paddr; - - /* kernel virtual address */ - u64 user_virt; - - /* user virtual address that was mapped by the mmap kernel subsystem */ - u64 kernel_virt; - - /* - * whether this page has been mapped into a user land process virtual - * space - */ - u32 in_use; -}; - -/* - * [host-side] page table descriptor. - * - * this structure tracks the metadata necessary to manage both simple and - * extended page tables. - */ -struct gasket_page_table { - /* the config used to create this page table. */ - struct gasket_page_table_config config; - - /* the number of simple (single-level) entries in the page table. */ - uint num_simple_entries; - - /* the number of extended (two-level) entries in the page table. */ - uint num_extended_entries; - - /* array of [host-side] page table entries. */ - struct gasket_page_table_entry *entries; - - /* number of actively mapped kernel pages in this table. 
*/ - uint num_active_pages; - - /* device register: base of/first slot in the page table. */ - u64 __iomem *base_slot; - - /* device register: holds the offset indicating the start of the - * extended address region of the device's address translation table. - */ - u64 __iomem *extended_offset_reg; - - /* device structure for the underlying device. only used for logging. */ - struct device *device; - - /* pci system descriptor for the underlying device. */ - struct pci_dev *pci_dev; - - /* location of the extended address bit for this gasket device. */ - u64 extended_flag; - - /* mutex to protect page table internals. */ - struct mutex mutex; - - /* number of coherent pages accessible thru by this page table */ - int num_coherent_pages; - - /* - * list of coherent memory (physical) allocated for a device. - * - * this structure also remembers the user virtual mapping, this is - * hacky, but we need to do this because the kernel doesn't keep track - * of the user coherent pages (pfn pages), and virt to coherent page - * mapping. - * todo: use find_vma() apis to convert host address to vm_area, to - * dma_addr_t instead of storing user virtu address in - * gasket_coherent_page_entry - * - * note that the user virtual mapping is created by the driver, in - * gasket_mmap function, so user_virt belongs in the driver anyhow. - */ - struct gasket_coherent_page_entry *coherent_pages; -}; - -/* see gasket_page_table.h for description. */ -int gasket_page_table_init(struct gasket_page_table **ppg_tbl, - const struct gasket_bar_data *bar_data, - const struct gasket_page_table_config *page_table_config, - struct device *device, struct pci_dev *pci_dev) -{ - ulong bytes; - struct gasket_page_table *pg_tbl; - ulong total_entries = page_table_config->total_entries; - - /* - * todo: verify config->total_entries against value read from the - * hardware register that contains the page table size. 
- */ - if (total_entries == ulong_max) { - dev_dbg(device, - "error reading page table size. initializing page table with size 0 "); - total_entries = 0; - } - - dev_dbg(device, - "attempting to initialize page table of size 0x%lx ", - total_entries); - - dev_dbg(device, - "table has base reg 0x%x, extended offset reg 0x%x ", - page_table_config->base_reg, - page_table_config->extended_reg); - - *ppg_tbl = kzalloc(sizeof(**ppg_tbl), gfp_kernel); - if (!*ppg_tbl) { - dev_dbg(device, "no memory for page table "); - return -enomem; - } - - pg_tbl = *ppg_tbl; - bytes = total_entries * sizeof(struct gasket_page_table_entry); - if (bytes != 0) { - pg_tbl->entries = vzalloc(bytes); - if (!pg_tbl->entries) { - kfree(pg_tbl); - *ppg_tbl = null; - return -enomem; - } - } - - mutex_init(&pg_tbl->mutex); - memcpy(&pg_tbl->config, page_table_config, sizeof(*page_table_config)); - if (pg_tbl->config.mode == gasket_page_table_mode_normal || - pg_tbl->config.mode == gasket_page_table_mode_simple) { - pg_tbl->num_simple_entries = total_entries; - pg_tbl->num_extended_entries = 0; - pg_tbl->extended_flag = 1ull << page_table_config->extended_bit; - } else { - pg_tbl->num_simple_entries = 0; - pg_tbl->num_extended_entries = total_entries; - pg_tbl->extended_flag = 0; - } - pg_tbl->num_active_pages = 0; - pg_tbl->base_slot = - (u64 __iomem *)&bar_data->virt_base[page_table_config->base_reg]; - pg_tbl->extended_offset_reg = - (u64 __iomem *)&bar_data->virt_base[page_table_config->extended_reg]; - pg_tbl->device = get_device(device); - pg_tbl->pci_dev = pci_dev; - - dev_dbg(device, "page table initialized successfully "); - - return 0; -} - -/* - * check if a range of ptes is free. - * the page table mutex must be held by the caller. 
- */ -static bool gasket_is_pte_range_free(struct gasket_page_table_entry *ptes, - uint num_entries) -{ - int i; - - for (i = 0; i < num_entries; i++) { - if (ptes[i].status != pte_free) - return false; - } - - return true; -} - -/* - * free a second level page [sub]table. - * the page table mutex must be held before this call. - */ -static void gasket_free_extended_subtable(struct gasket_page_table *pg_tbl, - struct gasket_page_table_entry *pte, - u64 __iomem *slot) -{ - /* release the page table from the driver */ - pte->status = pte_free; - - /* release the page table from the device */ - writeq(0, slot); - - if (pte->dma_addr) - dma_unmap_page(pg_tbl->device, pte->dma_addr, page_size, - dma_to_device); - - vfree(pte->sublevel); - - if (pte->page) - free_page((ulong)page_address(pte->page)); - - memset(pte, 0, sizeof(struct gasket_page_table_entry)); -} - -/* - * actually perform collection. - * the page table mutex must be held by the caller. - */ -static void -gasket_page_table_garbage_collect_nolock(struct gasket_page_table *pg_tbl) -{ - struct gasket_page_table_entry *pte; - u64 __iomem *slot; - - /* xxx fix me xxx -- more efficient to keep a usage count */ - /* rather than scanning the second level page tables */ - - for (pte = pg_tbl->entries + pg_tbl->num_simple_entries, - slot = pg_tbl->base_slot + pg_tbl->num_simple_entries; - pte < pg_tbl->entries + pg_tbl->config.total_entries; - pte++, slot++) { - if (pte->status == pte_inuse) { - if (gasket_is_pte_range_free(pte->sublevel, - gasket_pages_per_subtable)) - gasket_free_extended_subtable(pg_tbl, pte, - slot); - } - } -} - -/* see gasket_page_table.h for description. */ -void gasket_page_table_garbage_collect(struct gasket_page_table *pg_tbl) -{ - mutex_lock(&pg_tbl->mutex); - gasket_page_table_garbage_collect_nolock(pg_tbl); - mutex_unlock(&pg_tbl->mutex); -} - -/* see gasket_page_table.h for description. 
*/ -void gasket_page_table_cleanup(struct gasket_page_table *pg_tbl) -{ - /* deallocate free second-level tables. */ - gasket_page_table_garbage_collect(pg_tbl); - - /* todo: check that all ptes have been freed? */ - - vfree(pg_tbl->entries); - pg_tbl->entries = null; - - put_device(pg_tbl->device); - kfree(pg_tbl); -} - -/* see gasket_page_table.h for description. */ -int gasket_page_table_partition(struct gasket_page_table *pg_tbl, - uint num_simple_entries) -{ - int i, start; - - mutex_lock(&pg_tbl->mutex); - if (num_simple_entries > pg_tbl->config.total_entries) { - mutex_unlock(&pg_tbl->mutex); - return -einval; - } - - gasket_page_table_garbage_collect_nolock(pg_tbl); - - start = min(pg_tbl->num_simple_entries, num_simple_entries); - - for (i = start; i < pg_tbl->config.total_entries; i++) { - if (pg_tbl->entries[i].status != pte_free) { - dev_err(pg_tbl->device, "entry %d is not free ", i); - mutex_unlock(&pg_tbl->mutex); - return -ebusy; - } - } - - pg_tbl->num_simple_entries = num_simple_entries; - pg_tbl->num_extended_entries = - pg_tbl->config.total_entries - num_simple_entries; - writeq(num_simple_entries, pg_tbl->extended_offset_reg); - - mutex_unlock(&pg_tbl->mutex); - return 0; -} -export_symbol(gasket_page_table_partition); - -/* - * return whether a host buffer was mapped as coherent memory. - * - * a gasket page_table currently support one contiguous dma range, mapped to one - * contiguous virtual memory range. check if the host_addr is within that range. - */ -static int is_coherent(struct gasket_page_table *pg_tbl, ulong host_addr) -{ - u64 min, max; - - /* whether the host address is within user virt range */ - if (!pg_tbl->coherent_pages) - return 0; - - min = (u64)pg_tbl->coherent_pages[0].user_virt; - max = min + page_size * pg_tbl->num_coherent_pages; - - return min <= host_addr && host_addr < max; -} - -/* safely return a page to the os. 
*/ -static bool gasket_release_page(struct page *page) -{ - if (!page) - return false; - - if (!pagereserved(page)) - setpagedirty(page); - unpin_user_page(page); - - return true; -} - -/* - * get and map last level page table buffers. - * - * slots is the location(s) to write device-mapped page address. if this is a - * simple mapping, these will be address translation registers. if this is - * an extended mapping, these will be within a second-level page table - * allocated by the host and so must have their __iomem attribute casted away. - */ -static int gasket_perform_mapping(struct gasket_page_table *pg_tbl, - struct gasket_page_table_entry *ptes, - u64 __iomem *slots, ulong host_addr, - uint num_pages, int is_simple_mapping) -{ - int ret; - ulong offset; - struct page *page; - dma_addr_t dma_addr; - ulong page_addr; - int i; - - for (i = 0; i < num_pages; i++) { - page_addr = host_addr + i * page_size; - offset = page_addr & (page_size - 1); - if (is_coherent(pg_tbl, host_addr)) { - u64 off = - (u64)host_addr - - (u64)pg_tbl->coherent_pages[0].user_virt; - ptes[i].page = null; - ptes[i].offset = offset; - ptes[i].dma_addr = pg_tbl->coherent_pages[0].paddr + - off + i * page_size; - } else { - ret = pin_user_pages_fast(page_addr - offset, 1, - foll_write, &page); - - if (ret <= 0) { - dev_err(pg_tbl->device, - "pin user pages failed for addr=0x%lx, offset=0x%lx [ret=%d] ", - page_addr, offset, ret); - return ret ? ret : -enomem; - } - ++pg_tbl->num_active_pages; - - ptes[i].page = page; - ptes[i].offset = offset; - - /* map the page into dma space. */ - ptes[i].dma_addr = - dma_map_page(pg_tbl->device, page, 0, page_size, - dma_bidirectional); - - if (dma_mapping_error(pg_tbl->device, - ptes[i].dma_addr)) { - if (gasket_release_page(ptes[i].page)) - --pg_tbl->num_active_pages; - - memset(&ptes[i], 0, - sizeof(struct gasket_page_table_entry)); - return -einval; - } - } - - /* make the dma-space address available to the device. 
*/ - dma_addr = (ptes[i].dma_addr + offset) | gasket_valid_slot_flag; - - if (is_simple_mapping) { - writeq(dma_addr, &slots[i]); - } else { - ((u64 __force *)slots)[i] = dma_addr; - /* extended page table vectors are in dram, - * and so need to be synced each time they are updated. - */ - dma_map_single(pg_tbl->device, - (void *)&((u64 __force *)slots)[i], - sizeof(u64), dma_to_device); - } - ptes[i].status = pte_inuse; - } - return 0; -} - -/* - * return the index of the page for the address in the simple table. - * does not perform validity checking. - */ -static int gasket_simple_page_idx(struct gasket_page_table *pg_tbl, - ulong dev_addr) -{ - return (dev_addr >> gasket_simple_page_shift) & - (pg_tbl->config.total_entries - 1); -} - -/* - * return the level 0 page index for the given address. - * does not perform validity checking. - */ -static ulong gasket_extended_lvl0_page_idx(struct gasket_page_table *pg_tbl, - ulong dev_addr) -{ - return (dev_addr >> gasket_extended_lvl0_shift) & - (pg_tbl->config.total_entries - 1); -} - -/* - * return the level 1 page index for the given address. - * does not perform validity checking. - */ -static ulong gasket_extended_lvl1_page_idx(struct gasket_page_table *pg_tbl, - ulong dev_addr) -{ - return (dev_addr >> gasket_extended_lvl1_shift) & - (gasket_pages_per_subtable - 1); -} - -/* - * allocate page table entries in a simple table. - * the page table mutex must be held by the caller. - */ -static int gasket_alloc_simple_entries(struct gasket_page_table *pg_tbl, - ulong dev_addr, uint num_pages) -{ - if (!gasket_is_pte_range_free(pg_tbl->entries + - gasket_simple_page_idx(pg_tbl, dev_addr), - num_pages)) - return -ebusy; - - return 0; -} - -/* - * unmap and release mapped pages. - * the page table mutex must be held by the caller. 
- */ -static void gasket_perform_unmapping(struct gasket_page_table *pg_tbl, - struct gasket_page_table_entry *ptes, - u64 __iomem *slots, uint num_pages, - int is_simple_mapping) -{ - int i; - /* - * for each page table entry and corresponding entry in the device's - * address translation table: - */ - for (i = 0; i < num_pages; i++) { - /* release the address from the device, */ - if (is_simple_mapping || ptes[i].status == pte_inuse) { - writeq(0, &slots[i]); - } else { - ((u64 __force *)slots)[i] = 0; - /* sync above pte update before updating mappings */ - wmb(); - } - - /* release the address from the driver, */ - if (ptes[i].status == pte_inuse) { - if (ptes[i].page && ptes[i].dma_addr) { - dma_unmap_page(pg_tbl->device, ptes[i].dma_addr, - page_size, dma_bidirectional); - } - if (gasket_release_page(ptes[i].page)) - --pg_tbl->num_active_pages; - } - - /* and clear the pte. */ - memset(&ptes[i], 0, sizeof(struct gasket_page_table_entry)); - } -} - -/* - * unmap and release pages mapped to simple addresses. - * the page table mutex must be held by the caller. - */ -static void gasket_unmap_simple_pages(struct gasket_page_table *pg_tbl, - ulong dev_addr, uint num_pages) -{ - uint slot = gasket_simple_page_idx(pg_tbl, dev_addr); - - gasket_perform_unmapping(pg_tbl, pg_tbl->entries + slot, - pg_tbl->base_slot + slot, num_pages, 1); -} - -/* - * unmap and release buffers to extended addresses. - * the page table mutex must be held by the caller. - */ -static void gasket_unmap_extended_pages(struct gasket_page_table *pg_tbl, - ulong dev_addr, uint num_pages) -{ - uint slot_idx, remain, len; - struct gasket_page_table_entry *pte; - u64 __iomem *slot_base; - - remain = num_pages; - slot_idx = gasket_extended_lvl1_page_idx(pg_tbl, dev_addr); - pte = pg_tbl->entries + pg_tbl->num_simple_entries + - gasket_extended_lvl0_page_idx(pg_tbl, dev_addr); - - while (remain > 0) { - /* todo: add check to ensure pte remains valid? 
*/ - len = min(remain, gasket_pages_per_subtable - slot_idx); - - if (pte->status == pte_inuse) { - slot_base = (u64 __iomem *)(page_address(pte->page) + - pte->offset); - gasket_perform_unmapping(pg_tbl, - pte->sublevel + slot_idx, - slot_base + slot_idx, len, 0); - } - - remain -= len; - slot_idx = 0; - pte++; - } -} - -/* evaluates to nonzero if the specified virtual address is simple. */ -static inline bool gasket_addr_is_simple(struct gasket_page_table *pg_tbl, - ulong addr) -{ - return !((addr) & (pg_tbl)->extended_flag); -} - -/* - * convert (simple, page, offset) into a device address. - * examples: - * simple page 0, offset 32: - * input (1, 0, 32), output 0x20 - * simple page 1000, offset 511: - * input (1, 1000, 511), output 0x3e81ff - * extended page 0, offset 32: - * input (0, 0, 32), output 0x8000000020 - * extended page 1000, offset 511: - * input (0, 1000, 511), output 0x8003e81ff - */ -static ulong gasket_components_to_dev_address(struct gasket_page_table *pg_tbl, - int is_simple, uint page_index, - uint offset) -{ - ulong dev_addr = (page_index << gasket_simple_page_shift) | offset; - - return is_simple ? dev_addr : (pg_tbl->extended_flag | dev_addr); -} - -/* - * validity checking for simple addresses. - * - * verify that address translation commutes (from address to/from page + offset) - * and that the requested page range starts and ends within the set of - * currently-partitioned simple pages. 
- */ -static bool gasket_is_simple_dev_addr_bad(struct gasket_page_table *pg_tbl, - ulong dev_addr, uint num_pages) -{ - ulong page_offset = dev_addr & (page_size - 1); - ulong page_index = - (dev_addr / page_size) & (pg_tbl->config.total_entries - 1); - - if (gasket_components_to_dev_address(pg_tbl, 1, page_index, - page_offset) != dev_addr) { - dev_err(pg_tbl->device, "address is invalid, 0x%lx ", - dev_addr); - return true; - } - - if (page_index >= pg_tbl->num_simple_entries) { - dev_err(pg_tbl->device, - "starting slot at %lu is too large, max is < %u ", - page_index, pg_tbl->num_simple_entries); - return true; - } - - if (page_index + num_pages > pg_tbl->num_simple_entries) { - dev_err(pg_tbl->device, - "ending slot at %lu is too large, max is <= %u ", - page_index + num_pages, pg_tbl->num_simple_entries); - return true; - } - - return false; -} - -/* - * validity checking for extended addresses. - * - * verify that address translation commutes (from address to/from page + - * offset) and that the requested page range starts and ends within the set of - * currently-partitioned extended pages. - */ -static bool gasket_is_extended_dev_addr_bad(struct gasket_page_table *pg_tbl, - ulong dev_addr, uint num_pages) -{ - /* starting byte index of dev_addr into the first mapped page */ - ulong page_offset = dev_addr & (page_size - 1); - ulong page_global_idx, page_lvl0_idx; - ulong num_lvl0_pages; - ulong addr; - - /* check if the device address is out of bound */ - addr = dev_addr & ~((pg_tbl)->extended_flag); - if (addr >> (gasket_extended_lvl0_width + gasket_extended_lvl0_shift)) { - dev_err(pg_tbl->device, "device address out of bounds: 0x%lx ", - dev_addr); - return true; - } - - /* find the starting sub-page index in the space of all sub-pages. */ - page_global_idx = (dev_addr / page_size) & - (pg_tbl->config.total_entries * gasket_pages_per_subtable - 1); - - /* find the starting level 0 index. 
*/ - page_lvl0_idx = gasket_extended_lvl0_page_idx(pg_tbl, dev_addr); - - /* get the count of affected level 0 pages. */ - num_lvl0_pages = div_round_up(num_pages, gasket_pages_per_subtable); - - if (gasket_components_to_dev_address(pg_tbl, 0, page_global_idx, - page_offset) != dev_addr) { - dev_err(pg_tbl->device, "address is invalid: 0x%lx ", - dev_addr); - return true; - } - - if (page_lvl0_idx >= pg_tbl->num_extended_entries) { - dev_err(pg_tbl->device, - "starting level 0 slot at %lu is too large, max is < %u ", - page_lvl0_idx, pg_tbl->num_extended_entries); - return true; - } - - if (page_lvl0_idx + num_lvl0_pages > pg_tbl->num_extended_entries) { - dev_err(pg_tbl->device, - "ending level 0 slot at %lu is too large, max is <= %u ", - page_lvl0_idx + num_lvl0_pages, - pg_tbl->num_extended_entries); - return true; - } - - return false; -} - -/* - * non-locking entry to unmapping routines. - * the page table mutex must be held by the caller. - */ -static void gasket_page_table_unmap_nolock(struct gasket_page_table *pg_tbl, - ulong dev_addr, uint num_pages) -{ - if (!num_pages) - return; - - if (gasket_addr_is_simple(pg_tbl, dev_addr)) - gasket_unmap_simple_pages(pg_tbl, dev_addr, num_pages); - else - gasket_unmap_extended_pages(pg_tbl, dev_addr, num_pages); -} - -/* - * allocate and map pages to simple addresses. - * if there is an error, no pages are mapped. 
- */ -static int gasket_map_simple_pages(struct gasket_page_table *pg_tbl, - ulong host_addr, ulong dev_addr, - uint num_pages) -{ - int ret; - uint slot_idx = gasket_simple_page_idx(pg_tbl, dev_addr); - - ret = gasket_alloc_simple_entries(pg_tbl, dev_addr, num_pages); - if (ret) { - dev_err(pg_tbl->device, - "page table slots %u (@ 0x%lx) to %u are not available ", - slot_idx, dev_addr, slot_idx + num_pages - 1); - return ret; - } - - ret = gasket_perform_mapping(pg_tbl, pg_tbl->entries + slot_idx, - pg_tbl->base_slot + slot_idx, host_addr, - num_pages, 1); - - if (ret) { - gasket_page_table_unmap_nolock(pg_tbl, dev_addr, num_pages); - dev_err(pg_tbl->device, "gasket_perform_mapping %d ", ret); - } - return ret; -} - -/* - * allocate a second level page table. - * the page table mutex must be held by the caller. - */ -static int gasket_alloc_extended_subtable(struct gasket_page_table *pg_tbl, - struct gasket_page_table_entry *pte, - u64 __iomem *slot) -{ - ulong page_addr, subtable_bytes; - dma_addr_t dma_addr; - - /* xxx fix me xxx this is inefficient for non-4k page sizes */ - - /* gfp_dma flag must be passed to architectures for which - * part of the memory range is not considered dma'able. - * this seems to be the case for juno board with 4.5.0 linaro kernel - */ - page_addr = get_zeroed_page(gfp_kernel | gfp_dma); - if (!page_addr) - return -enomem; - pte->page = virt_to_page((void *)page_addr); - pte->offset = 0; - - subtable_bytes = sizeof(struct gasket_page_table_entry) * - gasket_pages_per_subtable; - pte->sublevel = vzalloc(subtable_bytes); - if (!pte->sublevel) { - free_page(page_addr); - memset(pte, 0, sizeof(struct gasket_page_table_entry)); - return -enomem; - } - - /* map the page into dma space. 
*/ - pte->dma_addr = dma_map_page(pg_tbl->device, pte->page, 0, page_size, - dma_to_device); - if (dma_mapping_error(pg_tbl->device, pte->dma_addr)) { - free_page(page_addr); - vfree(pte->sublevel); - memset(pte, 0, sizeof(struct gasket_page_table_entry)); - return -enomem; - } - - /* make the addresses available to the device */ - dma_addr = (pte->dma_addr + pte->offset) | gasket_valid_slot_flag; - writeq(dma_addr, slot); - - pte->status = pte_inuse; - - return 0; -} - -/* - * allocate slots in an extended page table. check to see if a range of page - * table slots are available. if necessary, memory is allocated for second level - * page tables. - * - * note that memory for second level page tables is allocated as needed, but - * that memory is only freed on the final close of the device file, when the - * page tables are repartitioned, or the device is removed. if there is an - * error or if the full range of slots is not available, any memory - * allocated for second level page tables remains allocated until final close, - * repartition, or device removal. - * - * the page table mutex must be held by the caller. 
- */ -static int gasket_alloc_extended_entries(struct gasket_page_table *pg_tbl, - ulong dev_addr, uint num_entries) -{ - int ret = 0; - uint remain, subtable_slot_idx, len; - struct gasket_page_table_entry *pte; - u64 __iomem *slot; - - remain = num_entries; - subtable_slot_idx = gasket_extended_lvl1_page_idx(pg_tbl, dev_addr); - pte = pg_tbl->entries + pg_tbl->num_simple_entries + - gasket_extended_lvl0_page_idx(pg_tbl, dev_addr); - slot = pg_tbl->base_slot + pg_tbl->num_simple_entries + - gasket_extended_lvl0_page_idx(pg_tbl, dev_addr); - - while (remain > 0) { - len = min(remain, - gasket_pages_per_subtable - subtable_slot_idx); - - if (pte->status == pte_free) { - ret = gasket_alloc_extended_subtable(pg_tbl, pte, slot); - if (ret) { - dev_err(pg_tbl->device, - "no memory for extended addr subtable "); - return ret; - } - } else { - if (!gasket_is_pte_range_free(pte->sublevel + - subtable_slot_idx, len)) - return -ebusy; - } - - remain -= len; - subtable_slot_idx = 0; - pte++; - slot++; - } - - return 0; -} - -/* - * gasket_map_extended_pages - get and map buffers to extended addresses. - * if there is an error, no pages are mapped. 
- */ -static int gasket_map_extended_pages(struct gasket_page_table *pg_tbl, - ulong host_addr, ulong dev_addr, - uint num_pages) -{ - int ret; - ulong dev_addr_end; - uint slot_idx, remain, len; - struct gasket_page_table_entry *pte; - u64 __iomem *slot_base; - - ret = gasket_alloc_extended_entries(pg_tbl, dev_addr, num_pages); - if (ret) { - dev_addr_end = dev_addr + (num_pages / page_size) - 1; - dev_err(pg_tbl->device, - "page table slots (%lu,%lu) (@ 0x%lx) to (%lu,%lu) are not available ", - gasket_extended_lvl0_page_idx(pg_tbl, dev_addr), - dev_addr, - gasket_extended_lvl1_page_idx(pg_tbl, dev_addr), - gasket_extended_lvl0_page_idx(pg_tbl, dev_addr_end), - gasket_extended_lvl1_page_idx(pg_tbl, dev_addr_end)); - return ret; - } - - remain = num_pages; - slot_idx = gasket_extended_lvl1_page_idx(pg_tbl, dev_addr); - pte = pg_tbl->entries + pg_tbl->num_simple_entries + - gasket_extended_lvl0_page_idx(pg_tbl, dev_addr); - - while (remain > 0) { - len = min(remain, gasket_pages_per_subtable - slot_idx); - - slot_base = - (u64 __iomem *)(page_address(pte->page) + pte->offset); - ret = gasket_perform_mapping(pg_tbl, pte->sublevel + slot_idx, - slot_base + slot_idx, host_addr, - len, 0); - if (ret) { - gasket_page_table_unmap_nolock(pg_tbl, dev_addr, - num_pages); - return ret; - } - - remain -= len; - slot_idx = 0; - pte++; - host_addr += len * page_size; - } - - return 0; -} - -/* - * see gasket_page_table.h for general description. - * - * gasket_page_table_map calls either gasket_map_simple_pages() or - * gasket_map_extended_pages() to actually perform the mapping. - * - * the page table mutex is held for the entire operation. 
- */ -int gasket_page_table_map(struct gasket_page_table *pg_tbl, ulong host_addr, - ulong dev_addr, uint num_pages) -{ - int ret; - - if (!num_pages) - return 0; - - mutex_lock(&pg_tbl->mutex); - - if (gasket_addr_is_simple(pg_tbl, dev_addr)) { - ret = gasket_map_simple_pages(pg_tbl, host_addr, dev_addr, - num_pages); - } else { - ret = gasket_map_extended_pages(pg_tbl, host_addr, dev_addr, - num_pages); - } - - mutex_unlock(&pg_tbl->mutex); - return ret; -} -export_symbol(gasket_page_table_map); - -/* - * see gasket_page_table.h for general description. - * - * gasket_page_table_unmap takes the page table lock and calls either - * gasket_unmap_simple_pages() or gasket_unmap_extended_pages() to - * actually unmap the pages from device space. - * - * the page table mutex is held for the entire operation. - */ -void gasket_page_table_unmap(struct gasket_page_table *pg_tbl, ulong dev_addr, - uint num_pages) -{ - if (!num_pages) - return; - - mutex_lock(&pg_tbl->mutex); - gasket_page_table_unmap_nolock(pg_tbl, dev_addr, num_pages); - mutex_unlock(&pg_tbl->mutex); -} -export_symbol(gasket_page_table_unmap); - -static void gasket_page_table_unmap_all_nolock(struct gasket_page_table *pg_tbl) -{ - gasket_unmap_simple_pages(pg_tbl, - gasket_components_to_dev_address(pg_tbl, 1, 0, - 0), - pg_tbl->num_simple_entries); - gasket_unmap_extended_pages(pg_tbl, - gasket_components_to_dev_address(pg_tbl, 0, - 0, 0), - pg_tbl->num_extended_entries * - gasket_pages_per_subtable); -} - -/* see gasket_page_table.h for description. */ -void gasket_page_table_unmap_all(struct gasket_page_table *pg_tbl) -{ - mutex_lock(&pg_tbl->mutex); - gasket_page_table_unmap_all_nolock(pg_tbl); - mutex_unlock(&pg_tbl->mutex); -} -export_symbol(gasket_page_table_unmap_all); - -/* see gasket_page_table.h for description. 
*/ -void gasket_page_table_reset(struct gasket_page_table *pg_tbl) -{ - mutex_lock(&pg_tbl->mutex); - gasket_page_table_unmap_all_nolock(pg_tbl); - writeq(pg_tbl->config.total_entries, pg_tbl->extended_offset_reg); - mutex_unlock(&pg_tbl->mutex); -} - -/* see gasket_page_table.h for description. */ -int gasket_page_table_lookup_page(struct gasket_page_table *pg_tbl, - ulong dev_addr, struct page **ppage, - ulong *poffset) -{ - uint page_num; - struct gasket_page_table_entry *pte; - - mutex_lock(&pg_tbl->mutex); - if (gasket_addr_is_simple(pg_tbl, dev_addr)) { - page_num = gasket_simple_page_idx(pg_tbl, dev_addr); - if (page_num >= pg_tbl->num_simple_entries) - goto fail; - - pte = pg_tbl->entries + page_num; - if (pte->status != pte_inuse) - goto fail; - } else { - /* find the level 0 entry, */ - page_num = gasket_extended_lvl0_page_idx(pg_tbl, dev_addr); - if (page_num >= pg_tbl->num_extended_entries) - goto fail; - - pte = pg_tbl->entries + pg_tbl->num_simple_entries + page_num; - if (pte->status != pte_inuse) - goto fail; - - /* and its contained level 1 entry. */ - page_num = gasket_extended_lvl1_page_idx(pg_tbl, dev_addr); - pte = pte->sublevel + page_num; - if (pte->status != pte_inuse) - goto fail; - } - - *ppage = pte->page; - *poffset = pte->offset; - mutex_unlock(&pg_tbl->mutex); - return 0; - -fail: - *ppage = null; - *poffset = 0; - mutex_unlock(&pg_tbl->mutex); - return -einval; -} - -/* see gasket_page_table.h for description. */ -bool gasket_page_table_are_addrs_bad(struct gasket_page_table *pg_tbl, - ulong host_addr, ulong dev_addr, - ulong bytes) -{ - if (host_addr & (page_size - 1)) { - dev_err(pg_tbl->device, - "host mapping address 0x%lx must be page aligned ", - host_addr); - return true; - } - - return gasket_page_table_is_dev_addr_bad(pg_tbl, dev_addr, bytes); -} -export_symbol(gasket_page_table_are_addrs_bad); - -/* see gasket_page_table.h for description. 
*/ -bool gasket_page_table_is_dev_addr_bad(struct gasket_page_table *pg_tbl, - ulong dev_addr, ulong bytes) -{ - uint num_pages = bytes / page_size; - - if (bytes & (page_size - 1)) { - dev_err(pg_tbl->device, - "mapping size 0x%lx must be page aligned ", bytes); - return true; - } - - if (num_pages == 0) { - dev_err(pg_tbl->device, - "requested mapping is less than one page: %lu / %lu ", - bytes, page_size); - return true; - } - - if (gasket_addr_is_simple(pg_tbl, dev_addr)) - return gasket_is_simple_dev_addr_bad(pg_tbl, dev_addr, - num_pages); - return gasket_is_extended_dev_addr_bad(pg_tbl, dev_addr, num_pages); -} -export_symbol(gasket_page_table_is_dev_addr_bad); - -/* see gasket_page_table.h for description. */ -uint gasket_page_table_max_size(struct gasket_page_table *page_table) -{ - if (!page_table) - return 0; - return page_table->config.total_entries; -} -export_symbol(gasket_page_table_max_size); - -/* see gasket_page_table.h for description. */ -uint gasket_page_table_num_entries(struct gasket_page_table *pg_tbl) -{ - if (!pg_tbl) - return 0; - return pg_tbl->num_simple_entries + pg_tbl->num_extended_entries; -} -export_symbol(gasket_page_table_num_entries); - -/* see gasket_page_table.h for description. */ -uint gasket_page_table_num_simple_entries(struct gasket_page_table *pg_tbl) -{ - if (!pg_tbl) - return 0; - return pg_tbl->num_simple_entries; -} -export_symbol(gasket_page_table_num_simple_entries); - -/* see gasket_page_table.h for description. 
*/ -uint gasket_page_table_num_active_pages(struct gasket_page_table *pg_tbl) -{ - if (!pg_tbl) - return 0; - return pg_tbl->num_active_pages; -} -export_symbol(gasket_page_table_num_active_pages); - -/* see gasket_page_table.h */ -int gasket_page_table_system_status(struct gasket_page_table *page_table) -{ - if (!page_table) - return gasket_status_lamed; - - if (gasket_page_table_num_entries(page_table) == 0) { - dev_dbg(page_table->device, "page table size is 0 "); - return gasket_status_lamed; - } - - return gasket_status_alive; -} - -/* record the host_addr to coherent dma memory mapping. */ -int gasket_set_user_virt(struct gasket_dev *gasket_dev, u64 size, - dma_addr_t dma_address, ulong vma) -{ - int j; - struct gasket_page_table *pg_tbl; - - unsigned int num_pages = size / page_size; - - /* - * todo: for future chipset, better handling of the case where multiple - * page tables are supported on a given device - */ - pg_tbl = gasket_dev->page_table[0]; - if (!pg_tbl) { - dev_dbg(gasket_dev->dev, "%s: invalid page table index ", - __func__); - return 0; - } - for (j = 0; j < num_pages; j++) { - pg_tbl->coherent_pages[j].user_virt = - (u64)vma + j * page_size; - } - return 0; -} - -/* allocate a block of coherent memory. 
*/ -int gasket_alloc_coherent_memory(struct gasket_dev *gasket_dev, u64 size, - dma_addr_t *dma_address, u64 index) -{ - dma_addr_t handle; - void *mem; - int j; - unsigned int num_pages = div_round_up(size, page_size); - const struct gasket_driver_desc *driver_desc = - gasket_get_driver_desc(gasket_dev); - - if (!gasket_dev->page_table[index]) - return -efault; - - if (num_pages == 0) - return -einval; - - mem = dma_alloc_coherent(gasket_get_device(gasket_dev), - num_pages * page_size, &handle, gfp_kernel); - if (!mem) - goto nomem; - - gasket_dev->page_table[index]->num_coherent_pages = num_pages; - - /* allocate the physical memory block */ - gasket_dev->page_table[index]->coherent_pages = - kcalloc(num_pages, - sizeof(*gasket_dev->page_table[index]->coherent_pages), - gfp_kernel); - if (!gasket_dev->page_table[index]->coherent_pages) - goto nomem; - - gasket_dev->coherent_buffer.length_bytes = - page_size * (num_pages); - gasket_dev->coherent_buffer.phys_base = handle; - gasket_dev->coherent_buffer.virt_base = mem; - - *dma_address = driver_desc->coherent_buffer_description.base; - for (j = 0; j < num_pages; j++) { - gasket_dev->page_table[index]->coherent_pages[j].paddr = - handle + j * page_size; - gasket_dev->page_table[index]->coherent_pages[j].kernel_virt = - (u64)mem + j * page_size; - } - - return 0; - -nomem: - if (mem) { - dma_free_coherent(gasket_get_device(gasket_dev), - num_pages * page_size, mem, handle); - gasket_dev->coherent_buffer.length_bytes = 0; - gasket_dev->coherent_buffer.virt_base = null; - gasket_dev->coherent_buffer.phys_base = 0; - } - - kfree(gasket_dev->page_table[index]->coherent_pages); - gasket_dev->page_table[index]->coherent_pages = null; - gasket_dev->page_table[index]->num_coherent_pages = 0; - return -enomem; -} - -/* free a block of coherent memory. 
*/ -int gasket_free_coherent_memory(struct gasket_dev *gasket_dev, u64 size, - dma_addr_t dma_address, u64 index) -{ - const struct gasket_driver_desc *driver_desc; - - if (!gasket_dev->page_table[index]) - return -efault; - - driver_desc = gasket_get_driver_desc(gasket_dev); - - if (driver_desc->coherent_buffer_description.base != dma_address) - return -eaddrnotavail; - - if (gasket_dev->coherent_buffer.length_bytes) { - dma_free_coherent(gasket_get_device(gasket_dev), - gasket_dev->coherent_buffer.length_bytes, - gasket_dev->coherent_buffer.virt_base, - gasket_dev->coherent_buffer.phys_base); - gasket_dev->coherent_buffer.length_bytes = 0; - gasket_dev->coherent_buffer.virt_base = null; - gasket_dev->coherent_buffer.phys_base = 0; - } - - kfree(gasket_dev->page_table[index]->coherent_pages); - gasket_dev->page_table[index]->coherent_pages = null; - gasket_dev->page_table[index]->num_coherent_pages = 0; - - return 0; -} - -/* release all coherent memory. */ -void gasket_free_coherent_memory_all(struct gasket_dev *gasket_dev, u64 index) -{ - if (!gasket_dev->page_table[index]) - return; - - if (gasket_dev->coherent_buffer.length_bytes) { - dma_free_coherent(gasket_get_device(gasket_dev), - gasket_dev->coherent_buffer.length_bytes, - gasket_dev->coherent_buffer.virt_base, - gasket_dev->coherent_buffer.phys_base); - gasket_dev->coherent_buffer.length_bytes = 0; - gasket_dev->coherent_buffer.virt_base = null; - gasket_dev->coherent_buffer.phys_base = 0; - } -} diff --git a/drivers/staging/gasket/gasket_page_table.h b/drivers/staging/gasket/gasket_page_table.h --- a/drivers/staging/gasket/gasket_page_table.h +++ /dev/null -/* spdx-license-identifier: gpl-2.0 */ -/* - * gasket page table functionality. this file describes the address - * translation/paging functionality supported by the gasket driver framework. 
- * as much as possible, internal details are hidden to simplify use - - * all calls are thread-safe (protected by an internal mutex) except where - * indicated otherwise. - * - * copyright (c) 2018 google, inc. - */ - -#ifndef __gasket_page_table_h__ -#define __gasket_page_table_h__ - -#include <linux/pci.h> -#include <linux/types.h> - -#include "gasket_constants.h" -#include "gasket_core.h" - -/* - * structure used for managing address translation on a device. all details are - * internal to the implementation. - */ -struct gasket_page_table; - -/* - * allocate and init address translation data. - * @ppage_table: pointer to gasket page table pointer. set by this call. - * @att_base_reg: [mapped] pointer to the first entry in the device's address - * translation table. - * @extended_offset_reg: [mapped] pointer to the device's register containing - * the starting index of the extended translation table. - * @extended_bit_location: the index of the bit indicating whether an address - * is extended. - * @total_entries: the total number of entries in the device's address - * translation table. - * @device: device structure for the underlying device. only used for logging. - * @pci_dev: pci system descriptor for the underlying device. - * whether the driver will supply its own. - * - * description: allocates and initializes data to track address translation - - * simple and extended page table metadata. initially, the page table is - * partitioned such that all addresses are "simple" (single-level lookup). - * gasket_partition_page_table can be called to change this paritioning. - * - * returns 0 on success, a negative error code otherwise. - */ -int gasket_page_table_init(struct gasket_page_table **ppg_tbl, - const struct gasket_bar_data *bar_data, - const struct gasket_page_table_config *page_table_config, - struct device *device, struct pci_dev *pci_dev); - -/* - * deallocate and cleanup page table data. - * @page_table: gasket page table pointer. 
- * - * description: the inverse of gasket_init; frees page_table and its contained - * elements. - * - * because this call destroys the page table, it cannot be - * thread-safe (mutex-protected)! - */ -void gasket_page_table_cleanup(struct gasket_page_table *page_table); - -/* - * sets the size of the simple page table. - * @page_table: gasket page table pointer. - * @num_simple_entries: desired size of the simple page table (in entries). - * - * description: gasket_partition_page_table checks to see if the simple page - * size can be changed (i.e., if there are no active extended - * mappings in the new simple size range), and, if so, - * sets the new simple and extended page table sizes. - * - * returns 0 if successful, or non-zero if the page table entries - * are not free. - */ -int gasket_page_table_partition(struct gasket_page_table *page_table, - uint num_simple_entries); - -/* - * get and map [host] user space pages into device memory. - * @page_table: gasket page table pointer. - * @host_addr: starting host virtual memory address of the pages. - * @dev_addr: starting device address of the pages. - * @num_pages: number of [4kb] pages to map. - * - * description: maps the "num_pages" pages of host memory pointed to by - * host_addr to the address "dev_addr" in device memory. - * - * the caller is responsible for checking the addresses ranges. - * - * returns 0 if successful or a non-zero error number otherwise. - * if there is an error, no pages are mapped. - */ -int gasket_page_table_map(struct gasket_page_table *page_table, ulong host_addr, - ulong dev_addr, uint num_pages); - -/* - * un-map host pages from device memory. - * @page_table: gasket page table pointer. - * @dev_addr: starting device address of the pages to unmap. - * @num_pages: the number of [4kb] pages to unmap. - * - * description: the inverse of gasket_map_pages. unmaps pages from the device. 
- */ -void gasket_page_table_unmap(struct gasket_page_table *page_table, - ulong dev_addr, uint num_pages); - -/* - * unmap all host pages from device memory. - * @page_table: gasket page table pointer. - */ -void gasket_page_table_unmap_all(struct gasket_page_table *page_table); - -/* - * unmap all host pages from device memory and reset the table to fully simple - * addressing. - * @page_table: gasket page table pointer. - */ -void gasket_page_table_reset(struct gasket_page_table *page_table); - -/* - * reclaims unused page table memory. - * @page_table: gasket page table pointer. - * - * description: examines the page table and frees any currently-unused - * allocations. called internally on gasket_cleanup(). - */ -void gasket_page_table_garbage_collect(struct gasket_page_table *page_table); - -/* - * retrieve the backing page for a device address. - * @page_table: gasket page table pointer. - * @dev_addr: gasket device address. - * @ppage: pointer to a page pointer for the returned page. - * @poffset: pointer to an unsigned long for the returned offset. - * - * description: interprets the address and looks up the corresponding page - * in the page table and the offset in that page. (we need an - * offset because the host page may be larger than the gasket chip - * page it contains.) - * - * returns 0 if successful, -1 for an error. the page pointer - * and offset are returned through the pointers, if successful. - */ -int gasket_page_table_lookup_page(struct gasket_page_table *page_table, - ulong dev_addr, struct page **page, - ulong *poffset); - -/* - * checks validity for input addrs and size. - * @page_table: gasket page table pointer. - * @host_addr: host address to check. - * @dev_addr: gasket device address. - * @bytes: size of the range to check (in bytes). - * - * description: this call performs a number of checks to verify that the ranges - * specified by both addresses and the size are valid for mapping pages into - * device memory. 
- * - * returns true if the mapping is bad, false otherwise. - */ -bool gasket_page_table_are_addrs_bad(struct gasket_page_table *page_table, - ulong host_addr, ulong dev_addr, - ulong bytes); - -/* - * checks validity for input dev addr and size. - * @page_table: gasket page table pointer. - * @dev_addr: gasket device address. - * @bytes: size of the range to check (in bytes). - * - * description: this call performs a number of checks to verify that the range - * specified by the device address and the size is valid for mapping pages into - * device memory. - * - * returns true if the address is bad, false otherwise. - */ -bool gasket_page_table_is_dev_addr_bad(struct gasket_page_table *page_table, - ulong dev_addr, ulong bytes); - -/* - * gets maximum size for the given page table. - * @page_table: gasket page table pointer. - */ -uint gasket_page_table_max_size(struct gasket_page_table *page_table); - -/* - * gets the total number of entries in the arg. - * @page_table: gasket page table pointer. - */ -uint gasket_page_table_num_entries(struct gasket_page_table *page_table); - -/* - * gets the number of simple entries. - * @page_table: gasket page table pointer. - */ -uint gasket_page_table_num_simple_entries(struct gasket_page_table *page_table); - -/* - * gets the number of actively pinned pages. - * @page_table: gasket page table pointer. - */ -uint gasket_page_table_num_active_pages(struct gasket_page_table *page_table); - -/* - * get status of page table managed by @page_table. - * @page_table: gasket page table pointer. - */ -int gasket_page_table_system_status(struct gasket_page_table *page_table); - -/* - * allocate a block of coherent memory. - * @gasket_dev: gasket device. - * @size: size of the memory block. - * @dma_address: dma address allocated by the kernel. - * @index: index of the gasket_page_table within this gasket device - * - * description: allocate a contiguous coherent memory block, dma'ble - * by this device. 
- */ -int gasket_alloc_coherent_memory(struct gasket_dev *gasket_dev, uint64_t size, - dma_addr_t *dma_address, uint64_t index); -/* release a block of contiguous coherent memory, in use by a device. */ -int gasket_free_coherent_memory(struct gasket_dev *gasket_dev, uint64_t size, - dma_addr_t dma_address, uint64_t index); - -/* release all coherent memory. */ -void gasket_free_coherent_memory_all(struct gasket_dev *gasket_dev, - uint64_t index); - -/* - * records the host_addr to coherent dma memory mapping. - * @gasket_dev: gasket device. - * @size: size of the virtual address range to map. - * @dma_address: dma address within the coherent memory range. - * @vma: virtual address we wish to map to coherent memory. - * - * description: for each page in the virtual address range, record the - * coherent page mapping. - * - * does not perform validity checking. - */ -int gasket_set_user_virt(struct gasket_dev *gasket_dev, uint64_t size, - dma_addr_t dma_address, ulong vma); - -#endif /* __gasket_page_table_h__ */ diff --git a/drivers/staging/gasket/gasket_sysfs.c b/drivers/staging/gasket/gasket_sysfs.c --- a/drivers/staging/gasket/gasket_sysfs.c +++ /dev/null -// spdx-license-identifier: gpl-2.0 -/* copyright (c) 2018 google, inc. */ -#include "gasket_sysfs.h" - -#include "gasket_core.h" - -#include <linux/device.h> -#include <linux/printk.h> - -/* - * pair of kernel device and user-specified pointer. used in lookups in sysfs - * "show" functions to return user data. - */ - -struct gasket_sysfs_mapping { - /* - * the device bound to this mapping. if this is null, then this mapping - * is free. - */ - struct device *device; - - /* the gasket descriptor for this device. */ - struct gasket_dev *gasket_dev; - - /* this device's set of sysfs attributes/nodes. */ - struct gasket_sysfs_attribute *attributes; - - /* the number of live elements in "attributes". */ - int attribute_count; - - /* protects structure from simultaneous access. 
*/ - struct mutex mutex; - - /* tracks active users of this mapping. */ - struct kref refcount; -}; - -/* - * data needed to manage users of this sysfs utility. - * currently has a fixed size; if space is a concern, this can be dynamically - * allocated. - */ -/* - * 'global' (file-scoped) list of mappings between devices and gasket_data - * pointers. this removes the requirement to have a gasket_sysfs_data - * handle in all files. - */ -static struct gasket_sysfs_mapping dev_mappings[gasket_sysfs_num_mappings]; - -/* callback when a mapping's refcount goes to zero. */ -static void release_entry(struct kref *ref) -{ - /* all work is done after the return from kref_put. */ -} - -/* look up mapping information for the given device. */ -static struct gasket_sysfs_mapping *get_mapping(struct device *device) -{ - int i; - - for (i = 0; i < gasket_sysfs_num_mappings; i++) { - mutex_lock(&dev_mappings[i].mutex); - if (dev_mappings[i].device == device) { - kref_get(&dev_mappings[i].refcount); - mutex_unlock(&dev_mappings[i].mutex); - return &dev_mappings[i]; - } - mutex_unlock(&dev_mappings[i].mutex); - } - - dev_dbg(device, "%s: mapping to device %s not found ", - __func__, device->kobj.name); - return null; -} - -/* put a reference to a mapping. */ -static void put_mapping(struct gasket_sysfs_mapping *mapping) -{ - int i; - int num_files_to_remove = 0; - struct device_attribute *files_to_remove; - struct device *device; - - if (!mapping) { - pr_debug("%s: mapping should not be null ", __func__); - return; - } - - mutex_lock(&mapping->mutex); - if (kref_put(&mapping->refcount, release_entry)) { - dev_dbg(mapping->device, "removing gasket sysfs mapping "); - /* - * we can't remove the sysfs nodes in the kref callback, since - * device_remove_file() blocks until the node is free. - * readers/writers of sysfs nodes, though, will be blocked on - * the mapping mutex, resulting in deadlock. to fix this, the - * sysfs nodes are removed outside the lock. 
- */ - device = mapping->device; - num_files_to_remove = mapping->attribute_count; - files_to_remove = kcalloc(num_files_to_remove, - sizeof(*files_to_remove), - gfp_kernel); - if (files_to_remove) - for (i = 0; i < num_files_to_remove; i++) - files_to_remove[i] = - mapping->attributes[i].attr; - else - num_files_to_remove = 0; - - kfree(mapping->attributes); - mapping->attributes = null; - mapping->attribute_count = 0; - put_device(mapping->device); - mapping->device = null; - mapping->gasket_dev = null; - } - mutex_unlock(&mapping->mutex); - - if (num_files_to_remove != 0) { - for (i = 0; i < num_files_to_remove; ++i) - device_remove_file(device, &files_to_remove[i]); - kfree(files_to_remove); - } -} - -/* - * put a reference to a mapping n times. - * - * in higher-level resource acquire/release function pairs, the release function - * will need to release a mapping 2x - once for the refcount taken in the - * release function itself, and once for the count taken in the acquire call. - */ -static void put_mapping_n(struct gasket_sysfs_mapping *mapping, int times) -{ - int i; - - for (i = 0; i < times; i++) - put_mapping(mapping); -} - -void gasket_sysfs_init(void) -{ - int i; - - for (i = 0; i < gasket_sysfs_num_mappings; i++) { - dev_mappings[i].device = null; - mutex_init(&dev_mappings[i].mutex); - } -} - -int gasket_sysfs_create_mapping(struct device *device, - struct gasket_dev *gasket_dev) -{ - struct gasket_sysfs_mapping *mapping; - int map_idx = -1; - - /* - * we need a function-level mutex to protect against the same device - * being added [multiple times] simultaneously. - */ - static define_mutex(function_mutex); - - mutex_lock(&function_mutex); - dev_dbg(device, "creating sysfs entries for device "); - - /* check that the device we're adding hasn't already been added. 
*/ - mapping = get_mapping(device); - if (mapping) { - dev_err(device, - "attempting to re-initialize sysfs mapping for device "); - put_mapping(mapping); - mutex_unlock(&function_mutex); - return -ebusy; - } - - /* find the first empty entry in the array. */ - for (map_idx = 0; map_idx < gasket_sysfs_num_mappings; ++map_idx) { - mutex_lock(&dev_mappings[map_idx].mutex); - if (!dev_mappings[map_idx].device) - /* break with the mutex held! */ - break; - mutex_unlock(&dev_mappings[map_idx].mutex); - } - - if (map_idx == gasket_sysfs_num_mappings) { - dev_err(device, "all mappings have been exhausted "); - mutex_unlock(&function_mutex); - return -enomem; - } - - dev_dbg(device, "creating sysfs mapping for device %s ", - device->kobj.name); - - mapping = &dev_mappings[map_idx]; - mapping->attributes = kcalloc(gasket_sysfs_max_nodes, - sizeof(*mapping->attributes), - gfp_kernel); - if (!mapping->attributes) { - dev_dbg(device, "unable to allocate sysfs attribute array "); - mutex_unlock(&mapping->mutex); - mutex_unlock(&function_mutex); - return -enomem; - } - - kref_init(&mapping->refcount); - mapping->device = get_device(device); - mapping->gasket_dev = gasket_dev; - mapping->attribute_count = 0; - mutex_unlock(&mapping->mutex); - mutex_unlock(&function_mutex); - - /* don't decrement the refcount here! one open count keeps it alive! 
*/ - return 0; -} - -int gasket_sysfs_create_entries(struct device *device, - const struct gasket_sysfs_attribute *attrs) -{ - int i; - int ret; - struct gasket_sysfs_mapping *mapping = get_mapping(device); - - if (!mapping) { - dev_dbg(device, - "creating entries for device without first initializing mapping "); - return -einval; - } - - mutex_lock(&mapping->mutex); - for (i = 0; attrs[i].attr.attr.name; i++) { - if (mapping->attribute_count == gasket_sysfs_max_nodes) { - dev_err(device, - "maximum number of sysfs nodes reached for device "); - mutex_unlock(&mapping->mutex); - put_mapping(mapping); - return -enomem; - } - - ret = device_create_file(device, &attrs[i].attr); - if (ret) { - dev_dbg(device, "unable to create device entries "); - mutex_unlock(&mapping->mutex); - put_mapping(mapping); - return ret; - } - - mapping->attributes[mapping->attribute_count] = attrs[i]; - ++mapping->attribute_count; - } - - mutex_unlock(&mapping->mutex); - put_mapping(mapping); - return 0; -} -export_symbol(gasket_sysfs_create_entries); - -void gasket_sysfs_remove_mapping(struct device *device) -{ - struct gasket_sysfs_mapping *mapping = get_mapping(device); - - if (!mapping) { - dev_err(device, - "attempted to remove non-existent sysfs mapping to device "); - return; - } - - put_mapping_n(mapping, 2); -} - -struct gasket_dev *gasket_sysfs_get_device_data(struct device *device) -{ - struct gasket_sysfs_mapping *mapping = get_mapping(device); - - if (!mapping) { - dev_err(device, "device not registered "); - return null; - } - - return mapping->gasket_dev; -} -export_symbol(gasket_sysfs_get_device_data); - -void gasket_sysfs_put_device_data(struct device *device, struct gasket_dev *dev) -{ - struct gasket_sysfs_mapping *mapping = get_mapping(device); - - if (!mapping) - return; - - /* see comment of put_mapping_n() for why the '2' is necessary. 
*/ - put_mapping_n(mapping, 2); -} -export_symbol(gasket_sysfs_put_device_data); - -struct gasket_sysfs_attribute * -gasket_sysfs_get_attr(struct device *device, struct device_attribute *attr) -{ - int i; - int num_attrs; - struct gasket_sysfs_mapping *mapping = get_mapping(device); - struct gasket_sysfs_attribute *attrs = null; - - if (!mapping) - return null; - - attrs = mapping->attributes; - num_attrs = mapping->attribute_count; - for (i = 0; i < num_attrs; ++i) { - if (!strcmp(attrs[i].attr.attr.name, attr->attr.name)) - return &attrs[i]; - } - - dev_err(device, "unable to find match for device_attribute %s ", - attr->attr.name); - return null; -} -export_symbol(gasket_sysfs_get_attr); - -void gasket_sysfs_put_attr(struct device *device, - struct gasket_sysfs_attribute *attr) -{ - int i; - int num_attrs; - struct gasket_sysfs_mapping *mapping = get_mapping(device); - struct gasket_sysfs_attribute *attrs = null; - - if (!mapping) - return; - - attrs = mapping->attributes; - num_attrs = mapping->attribute_count; - for (i = 0; i < num_attrs; ++i) { - if (&attrs[i] == attr) { - put_mapping_n(mapping, 2); - return; - } - } - - dev_err(device, "unable to put unknown attribute: %s ", - attr->attr.attr.name); - put_mapping(mapping); -} -export_symbol(gasket_sysfs_put_attr); - -ssize_t gasket_sysfs_register_store(struct device *device, - struct device_attribute *attr, - const char *buf, size_t count) -{ - ulong parsed_value = 0; - struct gasket_sysfs_mapping *mapping; - struct gasket_dev *gasket_dev; - struct gasket_sysfs_attribute *gasket_attr; - - if (count < 3 || buf[0] != '0' || buf[1] != 'x') { - dev_err(device, - "sysfs register write format: "0x<hex value>" "); - return -einval; - } - - if (kstrtoul(buf, 16, &parsed_value) != 0) { - dev_err(device, - "unable to parse input as 64-bit hex value: %s ", buf); - return -einval; - } - - mapping = get_mapping(device); - if (!mapping) { - dev_err(device, "device driver may have been removed "); - return 0; - } - - 
gasket_dev = mapping->gasket_dev; - if (!gasket_dev) { - dev_err(device, "device driver may have been removed "); - put_mapping(mapping); - return 0; - } - - gasket_attr = gasket_sysfs_get_attr(device, attr); - if (!gasket_attr) { - put_mapping(mapping); - return count; - } - - gasket_dev_write_64(gasket_dev, parsed_value, - gasket_attr->data.bar_address.bar, - gasket_attr->data.bar_address.offset); - - if (gasket_attr->write_callback) - gasket_attr->write_callback(gasket_dev, gasket_attr, - parsed_value); - - gasket_sysfs_put_attr(device, gasket_attr); - put_mapping(mapping); - return count; -} -export_symbol(gasket_sysfs_register_store); diff --git a/drivers/staging/gasket/gasket_sysfs.h b/drivers/staging/gasket/gasket_sysfs.h --- a/drivers/staging/gasket/gasket_sysfs.h +++ /dev/null -/* spdx-license-identifier: gpl-2.0 */ -/* - * set of common sysfs utilities. - * - * copyright (c) 2018 google, inc. - */ - -/* the functions described here are a set of utilities to allow each file in the - * gasket driver framework to manage their own set of sysfs entries, instead of - * centralizing all that work in one file. - * - * the goal of these utilities is to allow for sysfs entries to be easily - * created without causing a proliferation of sysfs "show" functions. this - * requires o(n) string lookups during show function execution, but as reading - * sysfs entries is rarely performance-critical, this is likely acceptible. - */ -#ifndef __gasket_sysfs_h__ -#define __gasket_sysfs_h__ - -#include "gasket_constants.h" -#include "gasket_core.h" -#include <linux/device.h> -#include <linux/stringify.h> -#include <linux/sysfs.h> - -/* the maximum number of mappings/devices a driver needs to support. */ -#define gasket_sysfs_num_mappings (gasket_framework_desc_max * gasket_dev_max) - -/* the maximum number of sysfs nodes in a directory. - */ -#define gasket_sysfs_max_nodes 196 - -/* - * terminator struct for a gasket_sysfs_attr array. 
must be at the end of - * all gasket_sysfs_attribute arrays. - */ -#define gasket_end_of_attr_array \ - { \ - .attr = __attr_null, \ - .data.attr_type = 0, \ - } - -/* - * pairing of sysfs attribute and user data. - * used in lookups in sysfs "show" functions to return attribute metadata. - */ -struct gasket_sysfs_attribute { - /* the underlying sysfs device attribute associated with this data. */ - struct device_attribute attr; - - /* user-specified data to associate with the attribute. */ - union { - struct bar_address_ { - ulong bar; - ulong offset; - } bar_address; - uint attr_type; - } data; - - /* - * function pointer to a callback to be invoked when this attribute is - * written (if so configured). the arguments are to the gasket device - * pointer, the enclosing gasket_attr structure, and the value written. - * the callback should perform any logging necessary, as errors cannot - * be returned from the callback. - */ - void (*write_callback)(struct gasket_dev *dev, - struct gasket_sysfs_attribute *attr, - ulong value); -}; - -#define gasket_sysfs_ro(_name, _show_function, _attr_type) \ - { \ - .attr = __attr(_name, 0444, _show_function, null), \ - .data.attr_type = _attr_type \ - } - -/* initializes the gasket sysfs subsystem. - * - * description: performs one-time initialization. must be called before usage - * at [gasket] module load time. - */ -void gasket_sysfs_init(void); - -/* - * create an entry in mapping_data between a device and a gasket device. - * @device: device struct to map to. - * @gasket_dev: the dev struct associated with the driver controlling @device. - * - * description: this function maps a gasket_dev* to a device*. this mapping can - * be used in sysfs_show functions to get a handle to the gasket_dev struct - * controlling the device node. - * - * if this function is not called before gasket_sysfs_create_entries, a warning - * will be logged. 
- */ -int gasket_sysfs_create_mapping(struct device *device, - struct gasket_dev *gasket_dev); - -/* - * creates bulk entries in sysfs. - * @device: kernel device structure. - * @attrs: list of attributes/sysfs entries to create. - * - * description: creates each sysfs entry described in "attrs". can be called - * multiple times for a given @device. if the gasket_dev specified in - * gasket_sysfs_create_mapping had a legacy device, the entries will be created - * for it, as well. - */ -int gasket_sysfs_create_entries(struct device *device, - const struct gasket_sysfs_attribute *attrs); - -/* - * removes a device mapping from the global table. - * @device: device to unmap. - * - * description: removes the device->gasket device mapping from the internal - * table. - */ -void gasket_sysfs_remove_mapping(struct device *device); - -/* - * user data lookup based on kernel device structure. - * @device: kernel device structure. - * - * description: returns the user data associated with "device" in a prior call - * to gasket_sysfs_create_entries. returns null if no mapping can be found. - * upon success, this call take a reference to internal sysfs data that must be - * released with gasket_sysfs_put_device_data. while this reference is held, the - * underlying device sysfs information/structure will remain valid/will not be - * deleted. - */ -struct gasket_dev *gasket_sysfs_get_device_data(struct device *device); - -/* - * releases a references to internal data. - * @device: kernel device structure. - * @dev: gasket device descriptor (returned by gasket_sysfs_get_device_data). - */ -void gasket_sysfs_put_device_data(struct device *device, - struct gasket_dev *gasket_dev); - -/* - * gasket-specific attribute lookup. - * @device: kernel device structure. - * @attr: device attribute to look up. - * - * returns the gasket sysfs attribute associated with the kernel device - * attribute and device structure itself. 
upon success, this call will take a - * reference to internal sysfs data that must be released with a call to - * gasket_sysfs_put_attr. while this reference is held, the underlying device - * sysfs information/structure will remain valid/will not be deleted. - */ -struct gasket_sysfs_attribute * -gasket_sysfs_get_attr(struct device *device, struct device_attribute *attr); - -/* - * releases a references to internal data. - * @device: kernel device structure. - * @attr: gasket sysfs attribute descriptor (returned by - * gasket_sysfs_get_attr). - */ -void gasket_sysfs_put_attr(struct device *device, - struct gasket_sysfs_attribute *attr); - -/* - * write to a register sysfs node. - * @buf: null-terminated data being written. - * @count: number of bytes in the "buf" argument. - */ -ssize_t gasket_sysfs_register_store(struct device *device, - struct device_attribute *attr, - const char *buf, size_t count); - -#endif /* __gasket_sysfs_h__ */
category: Drivers in the Staging area
commit_hash: 918ce05bbe52df43849a803010b4d2bcd31ea69c
related_people: greg kroah hartman
domain: drivers
subdomain: staging
leaf_module: gasket

commit_title: xsysace: remove sysace driver
commit_body: sysace ip is no longer used on xilinx powerpc 405/440 and microblaze systems. the driver is not regularly tested and very likely not working for quite a long time that's why remove it.
release_summary: this release includes the landlock security module, which aims to make easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tbl flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
changes_summary: remove sysace driver
release_affected_domains: ['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
release_affected_drivers: ['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
domain_of_changes: ['xsysace']
language_set: ['c', 'dts', 'kconfig', 'icon_defconfig', 'maintainers', 'makefile']
diffstat_files: 7
diffstat_insertions: 0
diffstat_deletions: 1,297
commit_diff:
--- diff --git a/maintainers b/maintainers --- a/maintainers +++ b/maintainers -f: drivers/block/xsysace.c diff --git a/arch/microblaze/boot/dts/system.dts b/arch/microblaze/boot/dts/system.dts --- a/arch/microblaze/boot/dts/system.dts +++ b/arch/microblaze/boot/dts/system.dts - sysace_compactflash: sysace@83600000 { - compatible = "xlnx,xps-sysace-1.00.a"; - interrupt-parent = <&xps_intc_0>; - interrupts = < 4 2 >; - reg = < 0x83600000 0x10000 >; - xlnx,family = "virtex5"; - xlnx,mem-width = <0x10>; - } ; diff --git a/arch/powerpc/boot/dts/icon.dts b/arch/powerpc/boot/dts/icon.dts --- a/arch/powerpc/boot/dts/icon.dts +++ b/arch/powerpc/boot/dts/icon.dts - - sysace_compactflash: sysace@1,0 { - compatible = "xlnx,sysace"; - interrupt-parent = <&uic2>; - interrupts = <24 0x4>; - reg = <0x00000001 0x00000000 0x10000>; - }; diff --git a/arch/powerpc/configs/44x/icon_defconfig b/arch/powerpc/configs/44x/icon_defconfig --- a/arch/powerpc/configs/44x/icon_defconfig +++ b/arch/powerpc/configs/44x/icon_defconfig -config_xilinx_sysace=y diff --git a/drivers/block/kconfig b/drivers/block/kconfig --- a/drivers/block/kconfig +++ b/drivers/block/kconfig -config xilinx_sysace - tristate "xilinx systemace support" - depends on 4xx || microblaze - help - include support for the xilinx systemace compactflash interface - diff --git a/drivers/block/makefile b/drivers/block/makefile --- a/drivers/block/makefile +++ b/drivers/block/makefile -obj-$(config_xilinx_sysace) += xsysace.o diff --git a/drivers/block/xsysace.c b/drivers/block/xsysace.c --- a/drivers/block/xsysace.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-only -/* - * xilinx systemace device driver - * - * copyright 2007 secret lab technologies ltd. - */ - -/* - * the systemace chip is designed to configure fpgas by loading an fpga - * bitstream from a file on a cf card and squirting it into fpgas connected - * to the systemace jtag chain. 
it also has the advantage of providing an - * mpu interface which can be used to control the fpga configuration process - * and to use the attached cf card for general purpose storage. - * - * this driver is a block device driver for the systemace. - * - * initialization: - * the driver registers itself as a platform_device driver at module - * load time. the platform bus will take care of calling the - * ace_probe() method for all systemace instances in the system. any - * number of systemace instances are supported. ace_probe() calls - * ace_setup() which initialized all data structures, reads the cf - * id structure and registers the device. - * - * processing: - * just about all of the heavy lifting in this driver is performed by - * a finite state machine (fsm). the driver needs to wait on a number - * of events; some raised by interrupts, some which need to be polled - * for. describing all of the behaviour in a fsm seems to be the - * easiest way to keep the complexity low and make it easy to - * understand what the driver is doing. if the block ops or the - * request function need to interact with the hardware, then they - * simply need to flag the request and kick of fsm processing. - * - * the fsm itself is atomic-safe code which can be run from any - * context. the general process flow is: - * 1. obtain the ace->lock spinlock. - * 2. loop on ace_fsm_dostate() until the ace->fsm_continue flag is - * cleared. - * 3. release the lock. - * - * individual states do not sleep in any way. if a condition needs to - * be waited for then the state much clear the fsm_continue flag and - * either schedule the fsm to be run again at a later time, or expect - * an interrupt to call the fsm when the desired condition is met. - * - * in normal operation, the fsm is processed at interrupt context - * either when the driver's tasklet is scheduled, or when an irq is - * raised by the hardware. the tasklet can be scheduled at any time. 
- * the request method in particular schedules the tasklet when a new - * request has been indicated by the block layer. once started, the - * fsm proceeds as far as it can processing the request until it - * needs on a hardware event. at this point, it must yield execution. - * - * a state has two options when yielding execution: - * 1. ace_fsm_yield() - * - call if need to poll for event. - * - clears the fsm_continue flag to exit the processing loop - * - reschedules the tasklet to run again as soon as possible - * 2. ace_fsm_yieldirq() - * - call if an irq is expected from the hw - * - clears the fsm_continue flag to exit the processing loop - * - does not reschedule the tasklet so the fsm will not be processed - * again until an irq is received. - * after calling a yield function, the state must return control back - * to the fsm main loop. - * - * additionally, the driver maintains a kernel timer which can process - * the fsm. if the fsm gets stalled, typically due to a missed - * interrupt, then the kernel timer will expire and the driver can - * continue where it left off. - * - * to do: - * - add fpga configuration control interface. 
- * - request major number from lanana - */ - -#undef debug - -#include <linux/module.h> -#include <linux/ctype.h> -#include <linux/init.h> -#include <linux/interrupt.h> -#include <linux/errno.h> -#include <linux/kernel.h> -#include <linux/delay.h> -#include <linux/slab.h> -#include <linux/blk-mq.h> -#include <linux/mutex.h> -#include <linux/ata.h> -#include <linux/hdreg.h> -#include <linux/platform_device.h> -#if defined(config_of) -#include <linux/of_address.h> -#include <linux/of_device.h> -#include <linux/of_platform.h> -#endif - -module_author("grant likely <grant.likely@secretlab.ca>"); -module_description("xilinx systemace device driver"); -module_license("gpl"); - -/* systemace register definitions */ -#define ace_busmode (0x00) - -#define ace_status (0x04) -#define ace_status_cfglock (0x00000001) -#define ace_status_mpulock (0x00000002) -#define ace_status_cfgerror (0x00000004) /* config controller error */ -#define ace_status_cfcerror (0x00000008) /* cf controller error */ -#define ace_status_cfdetect (0x00000010) -#define ace_status_databufrdy (0x00000020) -#define ace_status_databufmode (0x00000040) -#define ace_status_cfgdone (0x00000080) -#define ace_status_rdyforcfcmd (0x00000100) -#define ace_status_cfgmodepin (0x00000200) -#define ace_status_cfgaddr_mask (0x0000e000) -#define ace_status_cfbsy (0x00020000) -#define ace_status_cfrdy (0x00040000) -#define ace_status_cfdwf (0x00080000) -#define ace_status_cfdsc (0x00100000) -#define ace_status_cfdrq (0x00200000) -#define ace_status_cfcorr (0x00400000) -#define ace_status_cferr (0x00800000) - -#define ace_error (0x08) -#define ace_cfglba (0x0c) -#define ace_mpulba (0x10) - -#define ace_seccntcmd (0x14) -#define ace_seccntcmd_reset (0x0100) -#define ace_seccntcmd_identify (0x0200) -#define ace_seccntcmd_read_data (0x0300) -#define ace_seccntcmd_write_data (0x0400) -#define ace_seccntcmd_abort (0x0600) - -#define ace_version (0x16) -#define ace_version_revision_mask (0x00ff) -#define 
ace_version_minor_mask (0x0f00) -#define ace_version_major_mask (0xf000) - -#define ace_ctrl (0x18) -#define ace_ctrl_forcelockreq (0x0001) -#define ace_ctrl_lockreq (0x0002) -#define ace_ctrl_forcecfgaddr (0x0004) -#define ace_ctrl_forcecfgmode (0x0008) -#define ace_ctrl_cfgmode (0x0010) -#define ace_ctrl_cfgstart (0x0020) -#define ace_ctrl_cfgsel (0x0040) -#define ace_ctrl_cfgreset (0x0080) -#define ace_ctrl_databufrdyirq (0x0100) -#define ace_ctrl_errorirq (0x0200) -#define ace_ctrl_cfgdoneirq (0x0400) -#define ace_ctrl_resetirq (0x0800) -#define ace_ctrl_cfgprog (0x1000) -#define ace_ctrl_cfgaddr_mask (0xe000) - -#define ace_fatstat (0x1c) - -#define ace_num_minors 16 -#define ace_sector_size (512) -#define ace_fifo_size (32) -#define ace_buf_per_sector (ace_sector_size / ace_fifo_size) - -#define ace_bus_width_8 0 -#define ace_bus_width_16 1 - -struct ace_reg_ops; - -struct ace_device { - /* driver state data */ - int id; - int media_change; - int users; - struct list_head list; - - /* finite state machine data */ - struct tasklet_struct fsm_tasklet; - uint fsm_task; /* current activity (ace_task_*) */ - uint fsm_state; /* current state (ace_fsm_state_*) */ - uint fsm_continue_flag; /* cleared to exit fsm mainloop */ - uint fsm_iter_num; - struct timer_list stall_timer; - - /* transfer state/result, use for both id and block request */ - struct request *req; /* request being processed */ - void *data_ptr; /* pointer to i/o buffer */ - int data_count; /* number of buffers remaining */ - int data_result; /* result of transfer; 0 := success */ - - int id_req_count; /* count of id requests */ - int id_result; - struct completion id_completion; /* used when id req finishes */ - int in_irq; - - /* details of hardware device */ - resource_size_t physaddr; - void __iomem *baseaddr; - int irq; - int bus_width; /* 0 := 8 bit; 1 := 16 bit */ - struct ace_reg_ops *reg_ops; - int lock_count; - - /* block device data structures */ - spinlock_t lock; - struct device *dev; - 
struct request_queue *queue; - struct gendisk *gd; - struct blk_mq_tag_set tag_set; - struct list_head rq_list; - - /* inserted cf card parameters */ - u16 cf_id[ata_id_words]; -}; - -static define_mutex(xsysace_mutex); -static int ace_major; - -/* --------------------------------------------------------------------- - * low level register access - */ - -struct ace_reg_ops { - u16(*in) (struct ace_device * ace, int reg); - void (*out) (struct ace_device * ace, int reg, u16 val); - void (*datain) (struct ace_device * ace); - void (*dataout) (struct ace_device * ace); -}; - -/* 8 bit bus width */ -static u16 ace_in_8(struct ace_device *ace, int reg) -{ - void __iomem *r = ace->baseaddr + reg; - return in_8(r) | (in_8(r + 1) << 8); -} - -static void ace_out_8(struct ace_device *ace, int reg, u16 val) -{ - void __iomem *r = ace->baseaddr + reg; - out_8(r, val); - out_8(r + 1, val >> 8); -} - -static void ace_datain_8(struct ace_device *ace) -{ - void __iomem *r = ace->baseaddr + 0x40; - u8 *dst = ace->data_ptr; - int i = ace_fifo_size; - while (i--) - *dst++ = in_8(r++); - ace->data_ptr = dst; -} - -static void ace_dataout_8(struct ace_device *ace) -{ - void __iomem *r = ace->baseaddr + 0x40; - u8 *src = ace->data_ptr; - int i = ace_fifo_size; - while (i--) - out_8(r++, *src++); - ace->data_ptr = src; -} - -static struct ace_reg_ops ace_reg_8_ops = { - .in = ace_in_8, - .out = ace_out_8, - .datain = ace_datain_8, - .dataout = ace_dataout_8, -}; - -/* 16 bit big endian bus attachment */ -static u16 ace_in_be16(struct ace_device *ace, int reg) -{ - return in_be16(ace->baseaddr + reg); -} - -static void ace_out_be16(struct ace_device *ace, int reg, u16 val) -{ - out_be16(ace->baseaddr + reg, val); -} - -static void ace_datain_be16(struct ace_device *ace) -{ - int i = ace_fifo_size / 2; - u16 *dst = ace->data_ptr; - while (i--) - *dst++ = in_le16(ace->baseaddr + 0x40); - ace->data_ptr = dst; -} - -static void ace_dataout_be16(struct ace_device *ace) -{ - int i = 
ace_fifo_size / 2; - u16 *src = ace->data_ptr; - while (i--) - out_le16(ace->baseaddr + 0x40, *src++); - ace->data_ptr = src; -} - -/* 16 bit little endian bus attachment */ -static u16 ace_in_le16(struct ace_device *ace, int reg) -{ - return in_le16(ace->baseaddr + reg); -} - -static void ace_out_le16(struct ace_device *ace, int reg, u16 val) -{ - out_le16(ace->baseaddr + reg, val); -} - -static void ace_datain_le16(struct ace_device *ace) -{ - int i = ace_fifo_size / 2; - u16 *dst = ace->data_ptr; - while (i--) - *dst++ = in_be16(ace->baseaddr + 0x40); - ace->data_ptr = dst; -} - -static void ace_dataout_le16(struct ace_device *ace) -{ - int i = ace_fifo_size / 2; - u16 *src = ace->data_ptr; - while (i--) - out_be16(ace->baseaddr + 0x40, *src++); - ace->data_ptr = src; -} - -static struct ace_reg_ops ace_reg_be16_ops = { - .in = ace_in_be16, - .out = ace_out_be16, - .datain = ace_datain_be16, - .dataout = ace_dataout_be16, -}; - -static struct ace_reg_ops ace_reg_le16_ops = { - .in = ace_in_le16, - .out = ace_out_le16, - .datain = ace_datain_le16, - .dataout = ace_dataout_le16, -}; - -static inline u16 ace_in(struct ace_device *ace, int reg) -{ - return ace->reg_ops->in(ace, reg); -} - -static inline u32 ace_in32(struct ace_device *ace, int reg) -{ - return ace_in(ace, reg) | (ace_in(ace, reg + 2) << 16); -} - -static inline void ace_out(struct ace_device *ace, int reg, u16 val) -{ - ace->reg_ops->out(ace, reg, val); -} - -static inline void ace_out32(struct ace_device *ace, int reg, u32 val) -{ - ace_out(ace, reg, val); - ace_out(ace, reg + 2, val >> 16); -} - -/* --------------------------------------------------------------------- - * debug support functions - */ - -#if defined(debug) -static void ace_dump_mem(void *base, int len) -{ - const char *ptr = base; - int i, j; - - for (i = 0; i < len; i += 16) { - printk(kern_info "%.8x:", i); - for (j = 0; j < 16; j++) { - if (!(j % 4)) - printk(" "); - printk("%.2x", ptr[i + j]); - } - printk(" "); - for (j = 0; j 
< 16; j++) - printk("%c", isprint(ptr[i + j]) ? ptr[i + j] : '.'); - printk(" "); - } -} -#else -static inline void ace_dump_mem(void *base, int len) -{ -} -#endif - -static void ace_dump_regs(struct ace_device *ace) -{ - dev_info(ace->dev, - " ctrl: %.8x seccnt/cmd: %.4x ver:%.4x " - " status:%.8x mpu_lba:%.8x busmode:%4x " - " error: %.8x cfg_lba:%.8x fatstat:%.4x ", - ace_in32(ace, ace_ctrl), - ace_in(ace, ace_seccntcmd), - ace_in(ace, ace_version), - ace_in32(ace, ace_status), - ace_in32(ace, ace_mpulba), - ace_in(ace, ace_busmode), - ace_in32(ace, ace_error), - ace_in32(ace, ace_cfglba), ace_in(ace, ace_fatstat)); -} - -static void ace_fix_driveid(u16 *id) -{ -#if defined(__big_endian) - int i; - - /* all half words have wrong byte order; swap the bytes */ - for (i = 0; i < ata_id_words; i++, id++) - *id = le16_to_cpu(*id); -#endif -} - -/* --------------------------------------------------------------------- - * finite state machine (fsm) implementation - */ - -/* fsm tasks; used to direct state transitions */ -#define ace_task_idle 0 -#define ace_task_identify 1 -#define ace_task_read 2 -#define ace_task_write 3 -#define ace_fsm_num_tasks 4 - -/* fsm state definitions */ -#define ace_fsm_state_idle 0 -#define ace_fsm_state_req_lock 1 -#define ace_fsm_state_wait_lock 2 -#define ace_fsm_state_wait_cfready 3 -#define ace_fsm_state_identify_prepare 4 -#define ace_fsm_state_identify_transfer 5 -#define ace_fsm_state_identify_complete 6 -#define ace_fsm_state_req_prepare 7 -#define ace_fsm_state_req_transfer 8 -#define ace_fsm_state_req_complete 9 -#define ace_fsm_state_error 10 -#define ace_fsm_num_states 11 - -/* set flag to exit fsm loop and reschedule tasklet */ -static inline void ace_fsm_yieldpoll(struct ace_device *ace) -{ - tasklet_schedule(&ace->fsm_tasklet); - ace->fsm_continue_flag = 0; -} - -static inline void ace_fsm_yield(struct ace_device *ace) -{ - dev_dbg(ace->dev, "%s() ", __func__); - ace_fsm_yieldpoll(ace); -} - -/* set flag to exit fsm loop 
and wait for irq to reschedule tasklet */ -static inline void ace_fsm_yieldirq(struct ace_device *ace) -{ - dev_dbg(ace->dev, "ace_fsm_yieldirq() "); - - if (ace->irq > 0) - ace->fsm_continue_flag = 0; - else - ace_fsm_yieldpoll(ace); -} - -static bool ace_has_next_request(struct request_queue *q) -{ - struct ace_device *ace = q->queuedata; - - return !list_empty(&ace->rq_list); -} - -/* get the next read/write request; ending requests that we don't handle */ -static struct request *ace_get_next_request(struct request_queue *q) -{ - struct ace_device *ace = q->queuedata; - struct request *rq; - - rq = list_first_entry_or_null(&ace->rq_list, struct request, queuelist); - if (rq) { - list_del_init(&rq->queuelist); - blk_mq_start_request(rq); - } - - return rq; -} - -static void ace_fsm_dostate(struct ace_device *ace) -{ - struct request *req; - u32 status; - u16 val; - int count; - -#if defined(debug) - dev_dbg(ace->dev, "fsm_state=%i, id_req_count=%i ", - ace->fsm_state, ace->id_req_count); -#endif - - /* verify that there is actually a cf in the slot. 
if not, then - * bail out back to the idle state and wake up all the waiters */ - status = ace_in32(ace, ace_status); - if ((status & ace_status_cfdetect) == 0) { - ace->fsm_state = ace_fsm_state_idle; - ace->media_change = 1; - set_capacity(ace->gd, 0); - dev_info(ace->dev, "no cf in slot "); - - /* drop all in-flight and pending requests */ - if (ace->req) { - blk_mq_end_request(ace->req, blk_sts_ioerr); - ace->req = null; - } - while ((req = ace_get_next_request(ace->queue)) != null) - blk_mq_end_request(req, blk_sts_ioerr); - - /* drop back to idle state and notify waiters */ - ace->fsm_state = ace_fsm_state_idle; - ace->id_result = -eio; - while (ace->id_req_count) { - complete(&ace->id_completion); - ace->id_req_count--; - } - } - - switch (ace->fsm_state) { - case ace_fsm_state_idle: - /* see if there is anything to do */ - if (ace->id_req_count || ace_has_next_request(ace->queue)) { - ace->fsm_iter_num++; - ace->fsm_state = ace_fsm_state_req_lock; - mod_timer(&ace->stall_timer, jiffies + hz); - if (!timer_pending(&ace->stall_timer)) - add_timer(&ace->stall_timer); - break; - } - del_timer(&ace->stall_timer); - ace->fsm_continue_flag = 0; - break; - - case ace_fsm_state_req_lock: - if (ace_in(ace, ace_status) & ace_status_mpulock) { - /* already have the lock, jump to next state */ - ace->fsm_state = ace_fsm_state_wait_cfready; - break; - } - - /* request the lock */ - val = ace_in(ace, ace_ctrl); - ace_out(ace, ace_ctrl, val | ace_ctrl_lockreq); - ace->fsm_state = ace_fsm_state_wait_lock; - break; - - case ace_fsm_state_wait_lock: - if (ace_in(ace, ace_status) & ace_status_mpulock) { - /* got the lock; move to next state */ - ace->fsm_state = ace_fsm_state_wait_cfready; - break; - } - - /* wait a bit for the lock */ - ace_fsm_yield(ace); - break; - - case ace_fsm_state_wait_cfready: - status = ace_in32(ace, ace_status); - if (!(status & ace_status_rdyforcfcmd) || - (status & ace_status_cfbsy)) { - /* cf card isn't ready; it needs to be polled */ - 
ace_fsm_yield(ace); - break; - } - - /* device is ready for command; determine what to do next */ - if (ace->id_req_count) - ace->fsm_state = ace_fsm_state_identify_prepare; - else - ace->fsm_state = ace_fsm_state_req_prepare; - break; - - case ace_fsm_state_identify_prepare: - /* send identify command */ - ace->fsm_task = ace_task_identify; - ace->data_ptr = ace->cf_id; - ace->data_count = ace_buf_per_sector; - ace_out(ace, ace_seccntcmd, ace_seccntcmd_identify); - - /* as per datasheet, put config controller in reset */ - val = ace_in(ace, ace_ctrl); - ace_out(ace, ace_ctrl, val | ace_ctrl_cfgreset); - - /* irq handler takes over from this point; wait for the - * transfer to complete */ - ace->fsm_state = ace_fsm_state_identify_transfer; - ace_fsm_yieldirq(ace); - break; - - case ace_fsm_state_identify_transfer: - /* check that the sysace is ready to receive data */ - status = ace_in32(ace, ace_status); - if (status & ace_status_cfbsy) { - dev_dbg(ace->dev, "cfbsy set; t=%i iter=%i dc=%i ", - ace->fsm_task, ace->fsm_iter_num, - ace->data_count); - ace_fsm_yield(ace); - break; - } - if (!(status & ace_status_databufrdy)) { - ace_fsm_yield(ace); - break; - } - - /* transfer the next buffer */ - ace->reg_ops->datain(ace); - ace->data_count--; - - /* if there are still buffers to be transfers; jump out here */ - if (ace->data_count != 0) { - ace_fsm_yieldirq(ace); - break; - } - - /* transfer finished; kick state machine */ - dev_dbg(ace->dev, "identify finished "); - ace->fsm_state = ace_fsm_state_identify_complete; - break; - - case ace_fsm_state_identify_complete: - ace_fix_driveid(ace->cf_id); - ace_dump_mem(ace->cf_id, 512); /* debug: dump out disk id */ - - if (ace->data_result) { - /* error occurred, disable the disk */ - ace->media_change = 1; - set_capacity(ace->gd, 0); - dev_err(ace->dev, "error fetching cf id (%i) ", - ace->data_result); - } else { - ace->media_change = 0; - - /* record disk parameters */ - set_capacity(ace->gd, - ata_id_u32(ace->cf_id, 
ata_id_lba_capacity)); - dev_info(ace->dev, "capacity: %i sectors ", - ata_id_u32(ace->cf_id, ata_id_lba_capacity)); - } - - /* we're done, drop to idle state and notify waiters */ - ace->fsm_state = ace_fsm_state_idle; - ace->id_result = ace->data_result; - while (ace->id_req_count) { - complete(&ace->id_completion); - ace->id_req_count--; - } - break; - - case ace_fsm_state_req_prepare: - req = ace_get_next_request(ace->queue); - if (!req) { - ace->fsm_state = ace_fsm_state_idle; - break; - } - - /* okay, it's a data request, set it up for transfer */ - dev_dbg(ace->dev, - "request: sec=%llx hcnt=%x, ccnt=%x, dir=%i ", - (unsigned long long)blk_rq_pos(req), - blk_rq_sectors(req), blk_rq_cur_sectors(req), - rq_data_dir(req)); - - ace->req = req; - ace->data_ptr = bio_data(req->bio); - ace->data_count = blk_rq_cur_sectors(req) * ace_buf_per_sector; - ace_out32(ace, ace_mpulba, blk_rq_pos(req) & 0x0fffffff); - - count = blk_rq_sectors(req); - if (rq_data_dir(req)) { - /* kick off write request */ - dev_dbg(ace->dev, "write data "); - ace->fsm_task = ace_task_write; - ace_out(ace, ace_seccntcmd, - count | ace_seccntcmd_write_data); - } else { - /* kick off read request */ - dev_dbg(ace->dev, "read data "); - ace->fsm_task = ace_task_read; - ace_out(ace, ace_seccntcmd, - count | ace_seccntcmd_read_data); - } - - /* as per datasheet, put config controller in reset */ - val = ace_in(ace, ace_ctrl); - ace_out(ace, ace_ctrl, val | ace_ctrl_cfgreset); - - /* move to the transfer state. 
the systemace will raise - * an interrupt once there is something to do - */ - ace->fsm_state = ace_fsm_state_req_transfer; - if (ace->fsm_task == ace_task_read) - ace_fsm_yieldirq(ace); /* wait for data ready */ - break; - - case ace_fsm_state_req_transfer: - /* check that the sysace is ready to receive data */ - status = ace_in32(ace, ace_status); - if (status & ace_status_cfbsy) { - dev_dbg(ace->dev, - "cfbsy set; t=%i iter=%i c=%i dc=%i irq=%i ", - ace->fsm_task, ace->fsm_iter_num, - blk_rq_cur_sectors(ace->req) * 16, - ace->data_count, ace->in_irq); - ace_fsm_yield(ace); /* need to poll cfbsy bit */ - break; - } - if (!(status & ace_status_databufrdy)) { - dev_dbg(ace->dev, - "databuf not set; t=%i iter=%i c=%i dc=%i irq=%i ", - ace->fsm_task, ace->fsm_iter_num, - blk_rq_cur_sectors(ace->req) * 16, - ace->data_count, ace->in_irq); - ace_fsm_yieldirq(ace); - break; - } - - /* transfer the next buffer */ - if (ace->fsm_task == ace_task_write) - ace->reg_ops->dataout(ace); - else - ace->reg_ops->datain(ace); - ace->data_count--; - - /* if there are still buffers to be transfers; jump out here */ - if (ace->data_count != 0) { - ace_fsm_yieldirq(ace); - break; - } - - /* bio finished; is there another one? 
*/ - if (blk_update_request(ace->req, blk_sts_ok, - blk_rq_cur_bytes(ace->req))) { - /* dev_dbg(ace->dev, "next block; h=%u c=%u ", - * blk_rq_sectors(ace->req), - * blk_rq_cur_sectors(ace->req)); - */ - ace->data_ptr = bio_data(ace->req->bio); - ace->data_count = blk_rq_cur_sectors(ace->req) * 16; - ace_fsm_yieldirq(ace); - break; - } - - ace->fsm_state = ace_fsm_state_req_complete; - break; - - case ace_fsm_state_req_complete: - ace->req = null; - - /* finished request; go to idle state */ - ace->fsm_state = ace_fsm_state_idle; - break; - - default: - ace->fsm_state = ace_fsm_state_idle; - break; - } -} - -static void ace_fsm_tasklet(unsigned long data) -{ - struct ace_device *ace = (void *)data; - unsigned long flags; - - spin_lock_irqsave(&ace->lock, flags); - - /* loop over state machine until told to stop */ - ace->fsm_continue_flag = 1; - while (ace->fsm_continue_flag) - ace_fsm_dostate(ace); - - spin_unlock_irqrestore(&ace->lock, flags); -} - -static void ace_stall_timer(struct timer_list *t) -{ - struct ace_device *ace = from_timer(ace, t, stall_timer); - unsigned long flags; - - dev_warn(ace->dev, - "kicking stalled fsm; state=%i task=%i iter=%i dc=%i ", - ace->fsm_state, ace->fsm_task, ace->fsm_iter_num, - ace->data_count); - spin_lock_irqsave(&ace->lock, flags); - - /* rearm the stall timer *before* entering fsm (which may then - * delete the timer) */ - mod_timer(&ace->stall_timer, jiffies + hz); - - /* loop over state machine until told to stop */ - ace->fsm_continue_flag = 1; - while (ace->fsm_continue_flag) - ace_fsm_dostate(ace); - - spin_unlock_irqrestore(&ace->lock, flags); -} - -/* --------------------------------------------------------------------- - * interrupt handling routines - */ -static int ace_interrupt_checkstate(struct ace_device *ace) -{ - u32 sreg = ace_in32(ace, ace_status); - u16 creg = ace_in(ace, ace_ctrl); - - /* check for error occurrence */ - if ((sreg & (ace_status_cfgerror | ace_status_cfcerror)) && - (creg & 
ace_ctrl_errorirq)) { - dev_err(ace->dev, "transfer failure "); - ace_dump_regs(ace); - return -eio; - } - - return 0; -} - -static irqreturn_t ace_interrupt(int irq, void *dev_id) -{ - u16 creg; - struct ace_device *ace = dev_id; - - /* be safe and get the lock */ - spin_lock(&ace->lock); - ace->in_irq = 1; - - /* clear the interrupt */ - creg = ace_in(ace, ace_ctrl); - ace_out(ace, ace_ctrl, creg | ace_ctrl_resetirq); - ace_out(ace, ace_ctrl, creg); - - /* check for io failures */ - if (ace_interrupt_checkstate(ace)) - ace->data_result = -eio; - - if (ace->fsm_task == 0) { - dev_err(ace->dev, - "spurious irq; stat=%.8x ctrl=%.8x cmd=%.4x ", - ace_in32(ace, ace_status), ace_in32(ace, ace_ctrl), - ace_in(ace, ace_seccntcmd)); - dev_err(ace->dev, "fsm_task=%i fsm_state=%i data_count=%i ", - ace->fsm_task, ace->fsm_state, ace->data_count); - } - - /* loop over state machine until told to stop */ - ace->fsm_continue_flag = 1; - while (ace->fsm_continue_flag) - ace_fsm_dostate(ace); - - /* done with interrupt; drop the lock */ - ace->in_irq = 0; - spin_unlock(&ace->lock); - - return irq_handled; -} - -/* --------------------------------------------------------------------- - * block ops - */ -static blk_status_t ace_queue_rq(struct blk_mq_hw_ctx *hctx, - const struct blk_mq_queue_data *bd) -{ - struct ace_device *ace = hctx->queue->queuedata; - struct request *req = bd->rq; - - if (blk_rq_is_passthrough(req)) { - blk_mq_start_request(req); - return blk_sts_ioerr; - } - - spin_lock_irq(&ace->lock); - list_add_tail(&req->queuelist, &ace->rq_list); - spin_unlock_irq(&ace->lock); - - tasklet_schedule(&ace->fsm_tasklet); - return blk_sts_ok; -} - -static unsigned int ace_check_events(struct gendisk *gd, unsigned int clearing) -{ - struct ace_device *ace = gd->private_data; - dev_dbg(ace->dev, "ace_check_events(): %i ", ace->media_change); - - return ace->media_change ? 
disk_event_media_change : 0; -} - -static void ace_media_changed(struct ace_device *ace) -{ - unsigned long flags; - - dev_dbg(ace->dev, "requesting cf id and scheduling tasklet "); - - spin_lock_irqsave(&ace->lock, flags); - ace->id_req_count++; - spin_unlock_irqrestore(&ace->lock, flags); - - tasklet_schedule(&ace->fsm_tasklet); - wait_for_completion(&ace->id_completion); - - dev_dbg(ace->dev, "revalidate complete "); -} - -static int ace_open(struct block_device *bdev, fmode_t mode) -{ - struct ace_device *ace = bdev->bd_disk->private_data; - unsigned long flags; - - dev_dbg(ace->dev, "ace_open() users=%i ", ace->users + 1); - - mutex_lock(&xsysace_mutex); - spin_lock_irqsave(&ace->lock, flags); - ace->users++; - spin_unlock_irqrestore(&ace->lock, flags); - - if (bdev_check_media_change(bdev) && ace->media_change) - ace_media_changed(ace); - mutex_unlock(&xsysace_mutex); - - return 0; -} - -static void ace_release(struct gendisk *disk, fmode_t mode) -{ - struct ace_device *ace = disk->private_data; - unsigned long flags; - u16 val; - - dev_dbg(ace->dev, "ace_release() users=%i ", ace->users - 1); - - mutex_lock(&xsysace_mutex); - spin_lock_irqsave(&ace->lock, flags); - ace->users--; - if (ace->users == 0) { - val = ace_in(ace, ace_ctrl); - ace_out(ace, ace_ctrl, val & ~ace_ctrl_lockreq); - } - spin_unlock_irqrestore(&ace->lock, flags); - mutex_unlock(&xsysace_mutex); -} - -static int ace_getgeo(struct block_device *bdev, struct hd_geometry *geo) -{ - struct ace_device *ace = bdev->bd_disk->private_data; - u16 *cf_id = ace->cf_id; - - dev_dbg(ace->dev, "ace_getgeo() "); - - geo->heads = cf_id[ata_id_heads]; - geo->sectors = cf_id[ata_id_sectors]; - geo->cylinders = cf_id[ata_id_cyls]; - - return 0; -} - -static const struct block_device_operations ace_fops = { - .owner = this_module, - .open = ace_open, - .release = ace_release, - .check_events = ace_check_events, - .getgeo = ace_getgeo, -}; - -static const struct blk_mq_ops ace_mq_ops = { - .queue_rq = 
ace_queue_rq, -}; - -/* -------------------------------------------------------------------- - * systemace device setup/teardown code - */ -static int ace_setup(struct ace_device *ace) -{ - u16 version; - u16 val; - int rc; - - dev_dbg(ace->dev, "ace_setup(ace=0x%p) ", ace); - dev_dbg(ace->dev, "physaddr=0x%llx irq=%i ", - (unsigned long long)ace->physaddr, ace->irq); - - spin_lock_init(&ace->lock); - init_completion(&ace->id_completion); - init_list_head(&ace->rq_list); - - /* - * map the device - */ - ace->baseaddr = ioremap(ace->physaddr, 0x80); - if (!ace->baseaddr) - goto err_ioremap; - - /* - * initialize the state machine tasklet and stall timer - */ - tasklet_init(&ace->fsm_tasklet, ace_fsm_tasklet, (unsigned long)ace); - timer_setup(&ace->stall_timer, ace_stall_timer, 0); - - /* - * initialize the request queue - */ - ace->queue = blk_mq_init_sq_queue(&ace->tag_set, &ace_mq_ops, 2, - blk_mq_f_should_merge); - if (is_err(ace->queue)) { - rc = ptr_err(ace->queue); - ace->queue = null; - goto err_blk_initq; - } - ace->queue->queuedata = ace; - - blk_queue_logical_block_size(ace->queue, 512); - blk_queue_bounce_limit(ace->queue, blk_bounce_high); - - /* - * allocate and initialize gd structure - */ - ace->gd = alloc_disk(ace_num_minors); - if (!ace->gd) - goto err_alloc_disk; - - ace->gd->major = ace_major; - ace->gd->first_minor = ace->id * ace_num_minors; - ace->gd->fops = &ace_fops; - ace->gd->events = disk_event_media_change; - ace->gd->queue = ace->queue; - ace->gd->private_data = ace; - snprintf(ace->gd->disk_name, 32, "xs%c", ace->id + 'a'); - - /* set bus width */ - if (ace->bus_width == ace_bus_width_16) { - /* 0x0101 should work regardless of endianess */ - ace_out_le16(ace, ace_busmode, 0x0101); - - /* read it back to determine endianess */ - if (ace_in_le16(ace, ace_busmode) == 0x0001) - ace->reg_ops = &ace_reg_le16_ops; - else - ace->reg_ops = &ace_reg_be16_ops; - } else { - ace_out_8(ace, ace_busmode, 0x00); - ace->reg_ops = &ace_reg_8_ops; - } - 
- /* make sure version register is sane */ - version = ace_in(ace, ace_version); - if ((version == 0) || (version == 0xffff)) - goto err_read; - - /* put sysace in a sane state by clearing most control reg bits */ - ace_out(ace, ace_ctrl, ace_ctrl_forcecfgmode | - ace_ctrl_databufrdyirq | ace_ctrl_errorirq); - - /* now we can hook up the irq handler */ - if (ace->irq > 0) { - rc = request_irq(ace->irq, ace_interrupt, 0, "systemace", ace); - if (rc) { - /* failure - fall back to polled mode */ - dev_err(ace->dev, "request_irq failed "); - ace->irq = rc; - } - } - - /* enable interrupts */ - val = ace_in(ace, ace_ctrl); - val |= ace_ctrl_databufrdyirq | ace_ctrl_errorirq; - ace_out(ace, ace_ctrl, val); - - /* print the identification */ - dev_info(ace->dev, "xilinx systemace revision %i.%i.%i ", - (version >> 12) & 0xf, (version >> 8) & 0x0f, version & 0xff); - dev_dbg(ace->dev, "physaddr 0x%llx, mapped to 0x%p, irq=%i ", - (unsigned long long) ace->physaddr, ace->baseaddr, ace->irq); - - ace->media_change = 1; - ace_media_changed(ace); - - /* make the sysace device 'live' */ - add_disk(ace->gd); - - return 0; - -err_read: - /* prevent double queue cleanup */ - ace->gd->queue = null; - put_disk(ace->gd); -err_alloc_disk: - blk_cleanup_queue(ace->queue); - blk_mq_free_tag_set(&ace->tag_set); -err_blk_initq: - iounmap(ace->baseaddr); -err_ioremap: - dev_info(ace->dev, "xsysace: error initializing device at 0x%llx ", - (unsigned long long) ace->physaddr); - return -enomem; -} - -static void ace_teardown(struct ace_device *ace) -{ - if (ace->gd) { - del_gendisk(ace->gd); - put_disk(ace->gd); - } - - if (ace->queue) { - blk_cleanup_queue(ace->queue); - blk_mq_free_tag_set(&ace->tag_set); - } - - tasklet_kill(&ace->fsm_tasklet); - - if (ace->irq > 0) - free_irq(ace->irq, ace); - - iounmap(ace->baseaddr); -} - -static int ace_alloc(struct device *dev, int id, resource_size_t physaddr, - int irq, int bus_width) -{ - struct ace_device *ace; - int rc; - dev_dbg(dev, 
"ace_alloc(%p) ", dev); - - /* allocate and initialize the ace device structure */ - ace = kzalloc(sizeof(struct ace_device), gfp_kernel); - if (!ace) { - rc = -enomem; - goto err_alloc; - } - - ace->dev = dev; - ace->id = id; - ace->physaddr = physaddr; - ace->irq = irq; - ace->bus_width = bus_width; - - /* call the setup code */ - rc = ace_setup(ace); - if (rc) - goto err_setup; - - dev_set_drvdata(dev, ace); - return 0; - -err_setup: - dev_set_drvdata(dev, null); - kfree(ace); -err_alloc: - dev_err(dev, "could not initialize device, err=%i ", rc); - return rc; -} - -static void ace_free(struct device *dev) -{ - struct ace_device *ace = dev_get_drvdata(dev); - dev_dbg(dev, "ace_free(%p) ", dev); - - if (ace) { - ace_teardown(ace); - dev_set_drvdata(dev, null); - kfree(ace); - } -} - -/* --------------------------------------------------------------------- - * platform bus support - */ - -static int ace_probe(struct platform_device *dev) -{ - int bus_width = ace_bus_width_16; /* fixme: should not be hard coded */ - resource_size_t physaddr; - struct resource *res; - u32 id = dev->id; - int irq; - int i; - - dev_dbg(&dev->dev, "ace_probe(%p) ", dev); - - /* device id and bus width */ - if (of_property_read_u32(dev->dev.of_node, "port-number", &id)) - id = 0; - if (of_find_property(dev->dev.of_node, "8-bit", null)) - bus_width = ace_bus_width_8; - - res = platform_get_resource(dev, ioresource_mem, 0); - if (!res) - return -einval; - - physaddr = res->start; - if (!physaddr) - return -enodev; - - irq = platform_get_irq_optional(dev, 0); - - /* call the bus-independent setup code */ - return ace_alloc(&dev->dev, id, physaddr, irq, bus_width); -} - -/* - * platform bus remove() method - */ -static int ace_remove(struct platform_device *dev) -{ - ace_free(&dev->dev); - return 0; -} - -#if defined(config_of) -/* match table for of_platform binding */ -static const struct of_device_id ace_of_match[] = { - { .compatible = "xlnx,opb-sysace-1.00.b", }, - { .compatible = 
"xlnx,opb-sysace-1.00.c", }, - { .compatible = "xlnx,xps-sysace-1.00.a", }, - { .compatible = "xlnx,sysace", }, - {}, -}; -module_device_table(of, ace_of_match); -#else /* config_of */ -#define ace_of_match null -#endif /* config_of */ - -static struct platform_driver ace_platform_driver = { - .probe = ace_probe, - .remove = ace_remove, - .driver = { - .name = "xsysace", - .of_match_table = ace_of_match, - }, -}; - -/* --------------------------------------------------------------------- - * module init/exit routines - */ -static int __init ace_init(void) -{ - int rc; - - ace_major = register_blkdev(ace_major, "xsysace"); - if (ace_major <= 0) { - rc = -enomem; - goto err_blk; - } - - rc = platform_driver_register(&ace_platform_driver); - if (rc) - goto err_plat; - - pr_info("xilinx systemace device driver, major=%i ", ace_major); - return 0; - -err_plat: - unregister_blkdev(ace_major, "xsysace"); -err_blk: - printk(kern_err "xsysace: registration failed; err=%i ", rc); - return rc; -} -module_init(ace_init); - -static void __exit ace_exit(void) -{ - pr_debug("unregistering xilinx systemace driver "); - platform_driver_unregister(&ace_platform_driver); - unregister_blkdev(ace_major, "xsysace"); -} -module_exit(ace_exit);
|
Drivers in the Staging area
|
2907f851f64a2f1ec5d75e60740e0819a660c5c0
|
michal simek
|
arch
|
powerpc
|
44x, boot, configs, dts
|
drivers/block: remove the umem driver
|
this removes the driver on the premise that it has been unused for a long time; removing it is a better approach than churning untestable code that nobody cares about in the first place. moreover, the umem.com website now shows a mere godaddy parking ad.
|
this release includes the landlock security module, which aims to make easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tbl flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
remove the umem driver
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['block']
|
['malta_kvm_defconfig', 'c', 'h', 'kconfig', 'maltaup_xpa_defconfig', 'malta_defconfig', 'makefile']
| 7
| 0
| 1,283
|
--- diff --git a/arch/mips/configs/malta_defconfig b/arch/mips/configs/malta_defconfig --- a/arch/mips/configs/malta_defconfig +++ b/arch/mips/configs/malta_defconfig -config_blk_dev_umem=m diff --git a/arch/mips/configs/malta_kvm_defconfig b/arch/mips/configs/malta_kvm_defconfig --- a/arch/mips/configs/malta_kvm_defconfig +++ b/arch/mips/configs/malta_kvm_defconfig -config_blk_dev_umem=m diff --git a/arch/mips/configs/maltaup_xpa_defconfig b/arch/mips/configs/maltaup_xpa_defconfig --- a/arch/mips/configs/maltaup_xpa_defconfig +++ b/arch/mips/configs/maltaup_xpa_defconfig -config_blk_dev_umem=m diff --git a/drivers/block/kconfig b/drivers/block/kconfig --- a/drivers/block/kconfig +++ b/drivers/block/kconfig -config blk_dev_umem - tristate "micro memory mm5415 battery backed ram support" - depends on pci - help - saying y here will include support for the mm5415 family of - battery backed (non-volatile) ram cards. - <http://www.umem.com/> - - the cards appear as block devices that can be partitioned into - as many as 15 partitions. - - to compile this driver as a module, choose m here: the - module will be called umem. - - the umem driver has not yet been allocated a major number, so - one is chosen dynamically. - diff --git a/drivers/block/makefile b/drivers/block/makefile --- a/drivers/block/makefile +++ b/drivers/block/makefile -obj-$(config_blk_dev_umem) += umem.o diff --git a/drivers/block/umem.c b/drivers/block/umem.c --- a/drivers/block/umem.c +++ /dev/null -// spdx-license-identifier: gpl-2.0-only -/* - * mm.c - micro memory(tm) pci memory board block device driver - v2.3 - * - * (c) 2001 san mehat <nettwerk@valinux.com> - * (c) 2001 johannes erdfelt <jerdfelt@valinux.com> - * (c) 2001 neilbrown <neilb@cse.unsw.edu.au> - * - * this driver for the micro memory pci memory module with battery backup - * is copyright micro memory inc 2001-2002. all rights reserved. 
- * - * this driver provides a standard block device interface for micro memory(tm) - * pci based ram boards. - * 10/05/01: phap nguyen - rebuilt the driver - * 10/22/01: phap nguyen - v2.1 added disk partitioning - * 29oct2001:neilbrown - use make_request_fn instead of request_fn - * - use stand disk partitioning (so fdisk works). - * 08nov2001:neilbrown - change driver name from "mm" to "umem" - * - incorporate into main kernel - * 08apr2002:neilbrown - move some of interrupt handle to tasklet - * - use spin_lock_bh instead of _irq - * - never block on make_request. queue - * bh's instead. - * - unregister umem from devfs at mod unload - * - change version to 2.3 - * 07nov2001:phap nguyen - select pci read command: 06, 12, 15 (decimal) - * 07jan2002: p. nguyen - used pci memory write & invalidate for dma - * 15may2002:neilbrown - convert to bio for 2.5 - * 17may2002:neilbrown - remove init_mem initialisation. instead detect - * - a sequence of writes that cover the card, and - * - set initialised bit then. 
- */ - -#undef debug /* #define debug if you want debugging info (pr_debug) */ -#include <linux/fs.h> -#include <linux/bio.h> -#include <linux/kernel.h> -#include <linux/mm.h> -#include <linux/mman.h> -#include <linux/gfp.h> -#include <linux/ioctl.h> -#include <linux/module.h> -#include <linux/init.h> -#include <linux/interrupt.h> -#include <linux/timer.h> -#include <linux/pci.h> -#include <linux/dma-mapping.h> - -#include <linux/fcntl.h> /* o_accmode */ -#include <linux/hdreg.h> /* hdio_getgeo */ - -#include "umem.h" - -#include <linux/uaccess.h> -#include <asm/io.h> - -#define mm_maxcards 4 -#define mm_rahead 2 /* two sectors */ -#define mm_blksize 1024 /* 1k blocks */ -#define mm_hardsect 512 /* 512-byte hardware sectors */ -#define mm_shift 6 /* max 64 partitions on 4 cards */ - -/* - * version information - */ - -#define driver_name "umem" -#define driver_version "v2.3" -#define driver_author "san mehat, johannes erdfelt, neilbrown" -#define driver_desc "micro memory(tm) pci memory board block driver" - -static int debug; -/* #define hw_trace(x) writeb(x,cards[0].csr_remap + memctrlstatus_magic) */ -#define hw_trace(x) - -#define debug_led_on_transfer 0x01 -#define debug_battery_polling 0x02 - -module_param(debug, int, 0644); -module_parm_desc(debug, "debug bitmask"); - -static int pci_read_cmd = 0x0c; /* read multiple */ -module_param(pci_read_cmd, int, 0); -module_parm_desc(pci_read_cmd, "pci read command"); - -static int pci_write_cmd = 0x0f; /* write and invalidate */ -module_param(pci_write_cmd, int, 0); -module_parm_desc(pci_write_cmd, "pci write command"); - -static int pci_cmds; - -static int major_nr; - -#include <linux/blkdev.h> -#include <linux/blkpg.h> - -struct cardinfo { - struct pci_dev *dev; - - unsigned char __iomem *csr_remap; - unsigned int mm_size; /* size in kbytes */ - - unsigned int init_size; /* initial segment, in sectors, - * that we know to - * have been written - */ - struct bio *bio, *currentbio, **biotail; - struct bvec_iter 
current_iter; - - struct request_queue *queue; - - struct mm_page { - dma_addr_t page_dma; - struct mm_dma_desc *desc; - int cnt, headcnt; - struct bio *bio, **biotail; - struct bvec_iter iter; - } mm_pages[2]; -#define desc_per_page ((page_size*2)/sizeof(struct mm_dma_desc)) - - int active, ready; - - struct tasklet_struct tasklet; - unsigned int dma_status; - - struct { - int good; - int warned; - unsigned long last_change; - } battery[2]; - - spinlock_t lock; - int check_batteries; - - int flags; -}; - -static struct cardinfo cards[mm_maxcards]; -static struct timer_list battery_timer; - -static int num_cards; - -static struct gendisk *mm_gendisk[mm_maxcards]; - -static void check_batteries(struct cardinfo *card); - -static int get_userbit(struct cardinfo *card, int bit) -{ - unsigned char led; - - led = readb(card->csr_remap + memctrlcmd_ledctrl); - return led & bit; -} - -static int set_userbit(struct cardinfo *card, int bit, unsigned char state) -{ - unsigned char led; - - led = readb(card->csr_remap + memctrlcmd_ledctrl); - if (state) - led |= bit; - else - led &= ~bit; - writeb(led, card->csr_remap + memctrlcmd_ledctrl); - - return 0; -} - -/* - * note: for the power led, use the led_power_* macros since they differ - */ -static void set_led(struct cardinfo *card, int shift, unsigned char state) -{ - unsigned char led; - - led = readb(card->csr_remap + memctrlcmd_ledctrl); - if (state == led_flip) - led ^= (1<<shift); - else { - led &= ~(0x03 << shift); - led |= (state << shift); - } - writeb(led, card->csr_remap + memctrlcmd_ledctrl); - -} - -#ifdef mm_diag -static void dump_regs(struct cardinfo *card) -{ - unsigned char *p; - int i, i1; - - p = card->csr_remap; - for (i = 0; i < 8; i++) { - printk(kern_debug "%p ", p); - - for (i1 = 0; i1 < 16; i1++) - printk("%02x ", *p++); - - printk(" "); - } -} -#endif - -static void dump_dmastat(struct cardinfo *card, unsigned int dmastat) -{ - dev_printk(kern_debug, &card->dev->dev, "dmastat - "); - if (dmastat & 
dmascr_any_err) - printk(kern_cont "any_err "); - if (dmastat & dmascr_mbe_err) - printk(kern_cont "mbe_err "); - if (dmastat & dmascr_parity_err_rep) - printk(kern_cont "parity_err_rep "); - if (dmastat & dmascr_parity_err_det) - printk(kern_cont "parity_err_det "); - if (dmastat & dmascr_system_err_sig) - printk(kern_cont "system_err_sig "); - if (dmastat & dmascr_target_abt) - printk(kern_cont "target_abt "); - if (dmastat & dmascr_master_abt) - printk(kern_cont "master_abt "); - if (dmastat & dmascr_chain_complete) - printk(kern_cont "chain_complete "); - if (dmastat & dmascr_dma_complete) - printk(kern_cont "dma_complete "); - printk(" "); -} - -/* - * theory of request handling - * - * each bio is assigned to one mm_dma_desc - which may not be enough fixme - * we have two pages of mm_dma_desc, holding about 64 descriptors - * each. these are allocated at init time. - * one page is "ready" and is either full, or can have request added. - * the other page might be "active", which dma is happening on it. - * - * whenever io on the active page completes, the ready page is activated - * and the ex-active page is clean out and made ready. - * otherwise the ready page is only activated when it becomes full. - * - * if a request arrives while both pages a full, it is queued, and b_rdev is - * overloaded to record whether it was a read or a write. - * - * the interrupt handler only polls the device to clear the interrupt. - * the processing of the result is done in a tasklet. 
- */ - -static void mm_start_io(struct cardinfo *card) -{ - /* we have the lock, we know there is - * no io active, and we know that card->active - * is set - */ - struct mm_dma_desc *desc; - struct mm_page *page; - int offset; - - /* make the last descriptor end the chain */ - page = &card->mm_pages[card->active]; - pr_debug("start_io: %d %d->%d ", - card->active, page->headcnt, page->cnt - 1); - desc = &page->desc[page->cnt-1]; - - desc->control_bits |= cpu_to_le32(dmascr_chain_comp_en); - desc->control_bits &= ~cpu_to_le32(dmascr_chain_en); - desc->sem_control_bits = desc->control_bits; - - - if (debug & debug_led_on_transfer) - set_led(card, led_remove, led_on); - - desc = &page->desc[page->headcnt]; - writel(0, card->csr_remap + dma_pci_addr); - writel(0, card->csr_remap + dma_pci_addr + 4); - - writel(0, card->csr_remap + dma_local_addr); - writel(0, card->csr_remap + dma_local_addr + 4); - - writel(0, card->csr_remap + dma_transfer_size); - writel(0, card->csr_remap + dma_transfer_size + 4); - - writel(0, card->csr_remap + dma_semaphore_addr); - writel(0, card->csr_remap + dma_semaphore_addr + 4); - - offset = ((char *)desc) - ((char *)page->desc); - writel(cpu_to_le32((page->page_dma+offset) & 0xffffffff), - card->csr_remap + dma_descriptor_addr); - /* force the value to u64 before shifting otherwise >> 32 is undefined c - * and on some ports will do nothing ! */ - writel(cpu_to_le32(((u64)page->page_dma)>>32), - card->csr_remap + dma_descriptor_addr + 4); - - /* go, go, go */ - writel(cpu_to_le32(dmascr_go | dmascr_chain_en | pci_cmds), - card->csr_remap + dma_status_ctrl); -} - -static int add_bio(struct cardinfo *card); - -static void activate(struct cardinfo *card) -{ - /* if no page is active, and ready is - * not empty, then switch ready page - * to active and start io. 
- * then add any bh's that are available to ready - */ - - do { - while (add_bio(card)) - ; - - if (card->active == -1 && - card->mm_pages[card->ready].cnt > 0) { - card->active = card->ready; - card->ready = 1-card->ready; - mm_start_io(card); - } - - } while (card->active == -1 && add_bio(card)); -} - -static inline void reset_page(struct mm_page *page) -{ - page->cnt = 0; - page->headcnt = 0; - page->bio = null; - page->biotail = &page->bio; -} - -/* - * if there is room on ready page, take - * one bh off list and add it. - * return 1 if there was room, else 0. - */ -static int add_bio(struct cardinfo *card) -{ - struct mm_page *p; - struct mm_dma_desc *desc; - dma_addr_t dma_handle; - int offset; - struct bio *bio; - struct bio_vec vec; - - bio = card->currentbio; - if (!bio && card->bio) { - card->currentbio = card->bio; - card->current_iter = card->bio->bi_iter; - card->bio = card->bio->bi_next; - if (card->bio == null) - card->biotail = &card->bio; - card->currentbio->bi_next = null; - return 1; - } - if (!bio) - return 0; - - if (card->mm_pages[card->ready].cnt >= desc_per_page) - return 0; - - vec = bio_iter_iovec(bio, card->current_iter); - - dma_handle = dma_map_page(&card->dev->dev, - vec.bv_page, - vec.bv_offset, - vec.bv_len, - bio_op(bio) == req_op_read ? 
- dma_from_device : dma_to_device); - - p = &card->mm_pages[card->ready]; - desc = &p->desc[p->cnt]; - p->cnt++; - if (p->bio == null) - p->iter = card->current_iter; - if ((p->biotail) != &bio->bi_next) { - *(p->biotail) = bio; - p->biotail = &(bio->bi_next); - bio->bi_next = null; - } - - desc->data_dma_handle = dma_handle; - - desc->pci_addr = cpu_to_le64((u64)desc->data_dma_handle); - desc->local_addr = cpu_to_le64(card->current_iter.bi_sector << 9); - desc->transfer_size = cpu_to_le32(vec.bv_len); - offset = (((char *)&desc->sem_control_bits) - ((char *)p->desc)); - desc->sem_addr = cpu_to_le64((u64)(p->page_dma+offset)); - desc->zero1 = desc->zero2 = 0; - offset = (((char *)(desc+1)) - ((char *)p->desc)); - desc->next_desc_addr = cpu_to_le64(p->page_dma+offset); - desc->control_bits = cpu_to_le32(dmascr_go|dmascr_err_int_en| - dmascr_parity_int_en| - dmascr_chain_en | - dmascr_sem_en | - pci_cmds); - if (bio_op(bio) == req_op_write) - desc->control_bits |= cpu_to_le32(dmascr_transfer_read); - desc->sem_control_bits = desc->control_bits; - - - bio_advance_iter(bio, &card->current_iter, vec.bv_len); - if (!card->current_iter.bi_size) - card->currentbio = null; - - return 1; -} - -static void process_page(unsigned long data) -{ - /* check if any of the requests in the page are dma_complete, - * and deal with them appropriately. - * if we find a descriptor without dma_complete in the semaphore, then - * dma must have hit an error on that descriptor, so use dma_status - * instead and assume that all following descriptors must be re-tried. 
- */ - struct mm_page *page; - struct bio *return_bio = null; - struct cardinfo *card = (struct cardinfo *)data; - unsigned int dma_status = card->dma_status; - - spin_lock(&card->lock); - if (card->active < 0) - goto out_unlock; - page = &card->mm_pages[card->active]; - - while (page->headcnt < page->cnt) { - struct bio *bio = page->bio; - struct mm_dma_desc *desc = &page->desc[page->headcnt]; - int control = le32_to_cpu(desc->sem_control_bits); - int last = 0; - struct bio_vec vec; - - if (!(control & dmascr_dma_complete)) { - control = dma_status; - last = 1; - } - - page->headcnt++; - vec = bio_iter_iovec(bio, page->iter); - bio_advance_iter(bio, &page->iter, vec.bv_len); - - if (!page->iter.bi_size) { - page->bio = bio->bi_next; - if (page->bio) - page->iter = page->bio->bi_iter; - } - - dma_unmap_page(&card->dev->dev, desc->data_dma_handle, - vec.bv_len, - (control & dmascr_transfer_read) ? - dma_to_device : dma_from_device); - if (control & dmascr_hard_error) { - /* error */ - bio->bi_status = blk_sts_ioerr; - dev_printk(kern_warning, &card->dev->dev, - "i/o error on sector %d/%d ", - le32_to_cpu(desc->local_addr)>>9, - le32_to_cpu(desc->transfer_size)); - dump_dmastat(card, control); - } else if (op_is_write(bio_op(bio)) && - le32_to_cpu(desc->local_addr) >> 9 == - card->init_size) { - card->init_size += le32_to_cpu(desc->transfer_size) >> 9; - if (card->init_size >> 1 >= card->mm_size) { - dev_printk(kern_info, &card->dev->dev, - "memory now initialised "); - set_userbit(card, memory_initialized, 1); - } - } - if (bio != page->bio) { - bio->bi_next = return_bio; - return_bio = bio; - } - - if (last) - break; - } - - if (debug & debug_led_on_transfer) - set_led(card, led_remove, led_off); - - if (card->check_batteries) { - card->check_batteries = 0; - check_batteries(card); - } - if (page->headcnt >= page->cnt) { - reset_page(page); - card->active = -1; - activate(card); - } else { - /* haven't finished with this one yet */ - pr_debug("do some more "); - 
mm_start_io(card); - } - out_unlock: - spin_unlock(&card->lock); - - while (return_bio) { - struct bio *bio = return_bio; - - return_bio = bio->bi_next; - bio->bi_next = null; - bio_endio(bio); - } -} - -static void mm_unplug(struct blk_plug_cb *cb, bool from_schedule) -{ - struct cardinfo *card = cb->data; - - spin_lock_irq(&card->lock); - activate(card); - spin_unlock_irq(&card->lock); - kfree(cb); -} - -static int mm_check_plugged(struct cardinfo *card) -{ - return !!blk_check_plugged(mm_unplug, card, sizeof(struct blk_plug_cb)); -} - -static blk_qc_t mm_submit_bio(struct bio *bio) -{ - struct cardinfo *card = bio->bi_bdev->bd_disk->private_data; - - pr_debug("mm_make_request %llu %u ", - (unsigned long long)bio->bi_iter.bi_sector, - bio->bi_iter.bi_size); - - blk_queue_split(&bio); - - spin_lock_irq(&card->lock); - *card->biotail = bio; - bio->bi_next = null; - card->biotail = &bio->bi_next; - if (op_is_sync(bio->bi_opf) || !mm_check_plugged(card)) - activate(card); - spin_unlock_irq(&card->lock); - - return blk_qc_t_none; -} - -static irqreturn_t mm_interrupt(int irq, void *__card) -{ - struct cardinfo *card = (struct cardinfo *) __card; - unsigned int dma_status; - unsigned short cfg_status; - -hw_trace(0x30); - - dma_status = le32_to_cpu(readl(card->csr_remap + dma_status_ctrl)); - - if (!(dma_status & (dmascr_error_mask | dmascr_chain_complete))) { - /* interrupt wasn't for me ... 
*/ - return irq_none; - } - - /* clear completion interrupts */ - if (card->flags & um_flag_no_byte_status) - writel(cpu_to_le32(dmascr_dma_complete|dmascr_chain_complete), - card->csr_remap + dma_status_ctrl); - else - writeb((dmascr_dma_complete|dmascr_chain_complete) >> 16, - card->csr_remap + dma_status_ctrl + 2); - - /* log errors and clear interrupt status */ - if (dma_status & dmascr_any_err) { - unsigned int data_log1, data_log2; - unsigned int addr_log1, addr_log2; - unsigned char stat, count, syndrome, check; - - stat = readb(card->csr_remap + memctrlcmd_errstatus); - - data_log1 = le32_to_cpu(readl(card->csr_remap + - error_data_log)); - data_log2 = le32_to_cpu(readl(card->csr_remap + - error_data_log + 4)); - addr_log1 = le32_to_cpu(readl(card->csr_remap + - error_addr_log)); - addr_log2 = readb(card->csr_remap + error_addr_log + 4); - - count = readb(card->csr_remap + error_count); - syndrome = readb(card->csr_remap + error_syndrome); - check = readb(card->csr_remap + error_check); - - dump_dmastat(card, dma_status); - - if (stat & 0x01) - dev_printk(kern_err, &card->dev->dev, - "memory access error detected (err count %d) ", - count); - if (stat & 0x02) - dev_printk(kern_err, &card->dev->dev, - "multi-bit edc error "); - - dev_printk(kern_err, &card->dev->dev, - "fault address 0x%02x%08x, fault data 0x%08x%08x ", - addr_log2, addr_log1, data_log2, data_log1); - dev_printk(kern_err, &card->dev->dev, - "fault check 0x%02x, fault syndrome 0x%02x ", - check, syndrome); - - writeb(0, card->csr_remap + error_count); - } - - if (dma_status & dmascr_parity_err_rep) { - dev_printk(kern_err, &card->dev->dev, - "parity error reported "); - pci_read_config_word(card->dev, pci_status, &cfg_status); - pci_write_config_word(card->dev, pci_status, cfg_status); - } - - if (dma_status & dmascr_parity_err_det) { - dev_printk(kern_err, &card->dev->dev, - "parity error detected "); - pci_read_config_word(card->dev, pci_status, &cfg_status); - 
pci_write_config_word(card->dev, pci_status, cfg_status); - } - - if (dma_status & dmascr_system_err_sig) { - dev_printk(kern_err, &card->dev->dev, "system error "); - pci_read_config_word(card->dev, pci_status, &cfg_status); - pci_write_config_word(card->dev, pci_status, cfg_status); - } - - if (dma_status & dmascr_target_abt) { - dev_printk(kern_err, &card->dev->dev, "target abort "); - pci_read_config_word(card->dev, pci_status, &cfg_status); - pci_write_config_word(card->dev, pci_status, cfg_status); - } - - if (dma_status & dmascr_master_abt) { - dev_printk(kern_err, &card->dev->dev, "master abort "); - pci_read_config_word(card->dev, pci_status, &cfg_status); - pci_write_config_word(card->dev, pci_status, cfg_status); - } - - /* and process the dma descriptors */ - card->dma_status = dma_status; - tasklet_schedule(&card->tasklet); - -hw_trace(0x36); - - return irq_handled; -} - -/* - * if both batteries are good, no led - * if either battery has been warned, solid led - * if both batteries are bad, flash the led quickly - * if either battery is bad, flash the led semi quickly - */ -static void set_fault_to_battery_status(struct cardinfo *card) -{ - if (card->battery[0].good && card->battery[1].good) - set_led(card, led_fault, led_off); - else if (card->battery[0].warned || card->battery[1].warned) - set_led(card, led_fault, led_on); - else if (!card->battery[0].good && !card->battery[1].good) - set_led(card, led_fault, led_flash_7_0); - else - set_led(card, led_fault, led_flash_3_5); -} - -static void init_battery_timer(void); - -static int check_battery(struct cardinfo *card, int battery, int status) -{ - if (status != card->battery[battery].good) { - card->battery[battery].good = !card->battery[battery].good; - card->battery[battery].last_change = jiffies; - - if (card->battery[battery].good) { - dev_printk(kern_err, &card->dev->dev, - "battery %d now good ", battery + 1); - card->battery[battery].warned = 0; - } else - dev_printk(kern_err, &card->dev->dev, 
- "battery %d now failed ", battery + 1); - - return 1; - } else if (!card->battery[battery].good && - !card->battery[battery].warned && - time_after_eq(jiffies, card->battery[battery].last_change + - (hz * 60 * 60 * 5))) { - dev_printk(kern_err, &card->dev->dev, - "battery %d still failed after 5 hours ", battery + 1); - card->battery[battery].warned = 1; - - return 1; - } - - return 0; -} - -static void check_batteries(struct cardinfo *card) -{ - /* note: this must *never* be called while the card - * is doing (bus-to-card) dma, or you will need the - * reset switch - */ - unsigned char status; - int ret1, ret2; - - status = readb(card->csr_remap + memctrlstatus_battery); - if (debug & debug_battery_polling) - dev_printk(kern_debug, &card->dev->dev, - "checking battery status, 1 = %s, 2 = %s ", - (status & battery_1_failure) ? "failure" : "ok", - (status & battery_2_failure) ? "failure" : "ok"); - - ret1 = check_battery(card, 0, !(status & battery_1_failure)); - ret2 = check_battery(card, 1, !(status & battery_2_failure)); - - if (ret1 || ret2) - set_fault_to_battery_status(card); -} - -static void check_all_batteries(struct timer_list *unused) -{ - int i; - - for (i = 0; i < num_cards; i++) - if (!(cards[i].flags & um_flag_no_batt)) { - struct cardinfo *card = &cards[i]; - spin_lock_bh(&card->lock); - if (card->active >= 0) - card->check_batteries = 1; - else - check_batteries(card); - spin_unlock_bh(&card->lock); - } - - init_battery_timer(); -} - -static void init_battery_timer(void) -{ - timer_setup(&battery_timer, check_all_batteries, 0); - battery_timer.expires = jiffies + (hz * 60); - add_timer(&battery_timer); -} - -static void del_battery_timer(void) -{ - del_timer(&battery_timer); -} - -/* - * note no locks taken out here. in a worst case scenario, we could drop - * a chunk of system memory. but that should never happen, since validation - * happens at open or mount time, when locks are held. 
- * - * that's crap, since doing that while some partitions are opened - * or mounted will give you really nasty results. - */ -static int mm_revalidate(struct gendisk *disk) -{ - struct cardinfo *card = disk->private_data; - set_capacity(disk, card->mm_size << 1); - return 0; -} - -static int mm_getgeo(struct block_device *bdev, struct hd_geometry *geo) -{ - struct cardinfo *card = bdev->bd_disk->private_data; - int size = card->mm_size * (1024 / mm_hardsect); - - /* - * get geometry: we have to fake one... trim the size to a - * multiple of 2048 (1m): tell we have 32 sectors, 64 heads, - * whatever cylinders. - */ - geo->heads = 64; - geo->sectors = 32; - geo->cylinders = size / (geo->heads * geo->sectors); - return 0; -} - -static const struct block_device_operations mm_fops = { - .owner = this_module, - .submit_bio = mm_submit_bio, - .getgeo = mm_getgeo, - .revalidate_disk = mm_revalidate, -}; - -static int mm_pci_probe(struct pci_dev *dev, const struct pci_device_id *id) -{ - int ret; - struct cardinfo *card = &cards[num_cards]; - unsigned char mem_present; - unsigned char batt_status; - unsigned int saved_bar, data; - unsigned long csr_base; - unsigned long csr_len; - int magic_number; - static int printed_version; - - if (!printed_version++) - printk(kern_info driver_version " : " driver_desc " "); - - ret = pci_enable_device(dev); - if (ret) - return ret; - - pci_write_config_byte(dev, pci_latency_timer, 0xf8); - pci_set_master(dev); - - card->dev = dev; - - csr_base = pci_resource_start(dev, 0); - csr_len = pci_resource_len(dev, 0); - if (!csr_base || !csr_len) - return -enodev; - - dev_printk(kern_info, &dev->dev, - "micro memory(tm) controller found (pci mem module (battery backup)) "); - - if (dma_set_mask(&dev->dev, dma_bit_mask(64)) && - dma_set_mask(&dev->dev, dma_bit_mask(32))) { - dev_printk(kern_warning, &dev->dev, "no suitable dma found "); - return -enomem; - } - - ret = pci_request_regions(dev, driver_name); - if (ret) { - dev_printk(kern_err, 
&card->dev->dev, - "unable to request memory region "); - goto failed_req_csr; - } - - card->csr_remap = ioremap(csr_base, csr_len); - if (!card->csr_remap) { - dev_printk(kern_err, &card->dev->dev, - "unable to remap memory region "); - ret = -enomem; - - goto failed_remap_csr; - } - - dev_printk(kern_info, &card->dev->dev, - "csr 0x%08lx -> 0x%p (0x%lx) ", - csr_base, card->csr_remap, csr_len); - - switch (card->dev->device) { - case 0x5415: - card->flags |= um_flag_no_byte_status | um_flag_no_battreg; - magic_number = 0x59; - break; - - case 0x5425: - card->flags |= um_flag_no_byte_status; - magic_number = 0x5c; - break; - - case 0x6155: - card->flags |= um_flag_no_byte_status | - um_flag_no_battreg | um_flag_no_batt; - magic_number = 0x99; - break; - - default: - magic_number = 0x100; - break; - } - - if (readb(card->csr_remap + memctrlstatus_magic) != magic_number) { - dev_printk(kern_err, &card->dev->dev, "magic number invalid "); - ret = -enomem; - goto failed_magic; - } - - card->mm_pages[0].desc = dma_alloc_coherent(&card->dev->dev, - page_size * 2, &card->mm_pages[0].page_dma, gfp_kernel); - card->mm_pages[1].desc = dma_alloc_coherent(&card->dev->dev, - page_size * 2, &card->mm_pages[1].page_dma, gfp_kernel); - if (card->mm_pages[0].desc == null || - card->mm_pages[1].desc == null) { - dev_printk(kern_err, &card->dev->dev, "alloc failed "); - ret = -enomem; - goto failed_alloc; - } - reset_page(&card->mm_pages[0]); - reset_page(&card->mm_pages[1]); - card->ready = 0; /* page 0 is ready */ - card->active = -1; /* no page is active */ - card->bio = null; - card->biotail = &card->bio; - spin_lock_init(&card->lock); - - card->queue = blk_alloc_queue(numa_no_node); - if (!card->queue) { - ret = -enomem; - goto failed_alloc; - } - - tasklet_init(&card->tasklet, process_page, (unsigned long)card); - - card->check_batteries = 0; - - mem_present = readb(card->csr_remap + memctrlstatus_memory); - switch (mem_present) { - case mem_128_mb: - card->mm_size = 1024 * 
128; - break; - case mem_256_mb: - card->mm_size = 1024 * 256; - break; - case mem_512_mb: - card->mm_size = 1024 * 512; - break; - case mem_1_gb: - card->mm_size = 1024 * 1024; - break; - case mem_2_gb: - card->mm_size = 1024 * 2048; - break; - default: - card->mm_size = 0; - break; - } - - /* clear the led's we control */ - set_led(card, led_remove, led_off); - set_led(card, led_fault, led_off); - - batt_status = readb(card->csr_remap + memctrlstatus_battery); - - card->battery[0].good = !(batt_status & battery_1_failure); - card->battery[1].good = !(batt_status & battery_2_failure); - card->battery[0].last_change = card->battery[1].last_change = jiffies; - - if (card->flags & um_flag_no_batt) - dev_printk(kern_info, &card->dev->dev, - "size %d kb ", card->mm_size); - else { - dev_printk(kern_info, &card->dev->dev, - "size %d kb, battery 1 %s (%s), battery 2 %s (%s) ", - card->mm_size, - batt_status & battery_1_disabled ? "disabled" : "enabled", - card->battery[0].good ? "ok" : "failure", - batt_status & battery_2_disabled ? "disabled" : "enabled", - card->battery[1].good ? 
"ok" : "failure"); - - set_fault_to_battery_status(card); - } - - pci_read_config_dword(dev, pci_base_address_1, &saved_bar); - data = 0xffffffff; - pci_write_config_dword(dev, pci_base_address_1, data); - pci_read_config_dword(dev, pci_base_address_1, &data); - pci_write_config_dword(dev, pci_base_address_1, saved_bar); - data &= 0xfffffff0; - data = ~data; - data += 1; - - if (request_irq(dev->irq, mm_interrupt, irqf_shared, driver_name, - card)) { - dev_printk(kern_err, &card->dev->dev, - "unable to allocate irq "); - ret = -enodev; - goto failed_req_irq; - } - - dev_printk(kern_info, &card->dev->dev, - "window size %d bytes, irq %d ", data, dev->irq); - - pci_set_drvdata(dev, card); - - if (pci_write_cmd != 0x0f) /* if not memory write & invalidate */ - pci_write_cmd = 0x07; /* then memory write command */ - - if (pci_write_cmd & 0x08) { /* use memory write and invalidate */ - unsigned short cfg_command; - pci_read_config_word(dev, pci_command, &cfg_command); - cfg_command |= 0x10; /* memory write & invalidate enable */ - pci_write_config_word(dev, pci_command, cfg_command); - } - pci_cmds = (pci_read_cmd << 28) | (pci_write_cmd << 24); - - num_cards++; - - if (!get_userbit(card, memory_initialized)) { - dev_printk(kern_info, &card->dev->dev, - "memory not initialized. consider over-writing whole device. 
"); - card->init_size = 0; - } else { - dev_printk(kern_info, &card->dev->dev, - "memory already initialized "); - card->init_size = card->mm_size; - } - - /* enable ecc */ - writeb(edc_store_correct, card->csr_remap + memctrlcmd_errctrl); - - return 0; - - failed_req_irq: - failed_alloc: - if (card->mm_pages[0].desc) - dma_free_coherent(&card->dev->dev, page_size * 2, - card->mm_pages[0].desc, - card->mm_pages[0].page_dma); - if (card->mm_pages[1].desc) - dma_free_coherent(&card->dev->dev, page_size * 2, - card->mm_pages[1].desc, - card->mm_pages[1].page_dma); - failed_magic: - iounmap(card->csr_remap); - failed_remap_csr: - pci_release_regions(dev); - failed_req_csr: - - return ret; -} - -static void mm_pci_remove(struct pci_dev *dev) -{ - struct cardinfo *card = pci_get_drvdata(dev); - - tasklet_kill(&card->tasklet); - free_irq(dev->irq, card); - iounmap(card->csr_remap); - - if (card->mm_pages[0].desc) - dma_free_coherent(&card->dev->dev, page_size * 2, - card->mm_pages[0].desc, - card->mm_pages[0].page_dma); - if (card->mm_pages[1].desc) - dma_free_coherent(&card->dev->dev, page_size * 2, - card->mm_pages[1].desc, - card->mm_pages[1].page_dma); - blk_cleanup_queue(card->queue); - - pci_release_regions(dev); - pci_disable_device(dev); -} - -static const struct pci_device_id mm_pci_ids[] = { - {pci_device(pci_vendor_id_micro_memory, pci_device_id_micro_memory_5415cn)}, - {pci_device(pci_vendor_id_micro_memory, pci_device_id_micro_memory_5425cn)}, - {pci_device(pci_vendor_id_micro_memory, pci_device_id_micro_memory_6155)}, - { - .vendor = 0x8086, - .device = 0xb555, - .subvendor = 0x1332, - .subdevice = 0x5460, - .class = 0x050000, - .class_mask = 0, - }, { /* end: all zeroes */ } -}; - -module_device_table(pci, mm_pci_ids); - -static struct pci_driver mm_pci_driver = { - .name = driver_name, - .id_table = mm_pci_ids, - .probe = mm_pci_probe, - .remove = mm_pci_remove, -}; - -static int __init mm_init(void) -{ - int retval, i; - int err; - - retval = 
pci_register_driver(&mm_pci_driver); - if (retval) - return -enomem; - - err = major_nr = register_blkdev(0, driver_name); - if (err < 0) { - pci_unregister_driver(&mm_pci_driver); - return -eio; - } - - for (i = 0; i < num_cards; i++) { - mm_gendisk[i] = alloc_disk(1 << mm_shift); - if (!mm_gendisk[i]) - goto out; - } - - for (i = 0; i < num_cards; i++) { - struct gendisk *disk = mm_gendisk[i]; - sprintf(disk->disk_name, "umem%c", 'a'+i); - spin_lock_init(&cards[i].lock); - disk->major = major_nr; - disk->first_minor = i << mm_shift; - disk->fops = &mm_fops; - disk->private_data = &cards[i]; - disk->queue = cards[i].queue; - set_capacity(disk, cards[i].mm_size << 1); - add_disk(disk); - } - - init_battery_timer(); - printk(kern_info "mm: desc_per_page = %ld ", desc_per_page); -/* printk("mm_init: done. 10-19-01 9:00 "); */ - return 0; - -out: - pci_unregister_driver(&mm_pci_driver); - unregister_blkdev(major_nr, driver_name); - while (i--) - put_disk(mm_gendisk[i]); - return -enomem; -} - -static void __exit mm_cleanup(void) -{ - int i; - - del_battery_timer(); - - for (i = 0; i < num_cards ; i++) { - del_gendisk(mm_gendisk[i]); - put_disk(mm_gendisk[i]); - } - - pci_unregister_driver(&mm_pci_driver); - - unregister_blkdev(major_nr, driver_name); -} - -module_init(mm_init); -module_exit(mm_cleanup); - -module_author(driver_author); -module_description(driver_desc); -module_license("gpl"); diff --git a/drivers/block/umem.h b/drivers/block/umem.h --- a/drivers/block/umem.h +++ /dev/null -/* spdx-license-identifier: gpl-2.0-only */ - -/* - * this file contains defines for the - * micro memory mm5415 - * family pci memory module with battery backup. - * - * copyright micro memory inc 2001. all rights reserved. 
- */ - -#ifndef _drivers_block_mm_h -#define _drivers_block_mm_h - - -#define irq_timeout (1 * hz) - -/* csr register definition */ -#define memctrlstatus_magic 0x00 -#define mm_magic_value (unsigned char)0x59 - -#define memctrlstatus_battery 0x04 -#define battery_1_disabled 0x01 -#define battery_1_failure 0x02 -#define battery_2_disabled 0x04 -#define battery_2_failure 0x08 - -#define memctrlstatus_memory 0x07 -#define mem_128_mb 0xfe -#define mem_256_mb 0xfc -#define mem_512_mb 0xf8 -#define mem_1_gb 0xf0 -#define mem_2_gb 0xe0 - -#define memctrlcmd_ledctrl 0x08 -#define led_remove 2 -#define led_fault 4 -#define led_power 6 -#define led_flip 255 -#define led_off 0x00 -#define led_on 0x01 -#define led_flash_3_5 0x02 -#define led_flash_7_0 0x03 -#define led_power_on 0x00 -#define led_power_off 0x01 -#define user_bit1 0x01 -#define user_bit2 0x02 - -#define memory_initialized user_bit1 - -#define memctrlcmd_errctrl 0x0c -#define edc_none_default 0x00 -#define edc_none 0x01 -#define edc_store_read 0x02 -#define edc_store_correct 0x03 - -#define memctrlcmd_errcnt 0x0d -#define memctrlcmd_errstatus 0x0e - -#define error_data_log 0x20 -#define error_addr_log 0x28 -#define error_count 0x3d -#define error_syndrome 0x3e -#define error_check 0x3f - -#define dma_pci_addr 0x40 -#define dma_local_addr 0x48 -#define dma_transfer_size 0x50 -#define dma_descriptor_addr 0x58 -#define dma_semaphore_addr 0x60 -#define dma_status_ctrl 0x68 -#define dmascr_go 0x00001 -#define dmascr_transfer_read 0x00002 -#define dmascr_chain_en 0x00004 -#define dmascr_sem_en 0x00010 -#define dmascr_dma_comp_en 0x00020 -#define dmascr_chain_comp_en 0x00040 -#define dmascr_err_int_en 0x00080 -#define dmascr_parity_int_en 0x00100 -#define dmascr_any_err 0x00800 -#define dmascr_mbe_err 0x01000 -#define dmascr_parity_err_rep 0x02000 -#define dmascr_parity_err_det 0x04000 -#define dmascr_system_err_sig 0x08000 -#define dmascr_target_abt 0x10000 -#define dmascr_master_abt 0x20000 -#define 
dmascr_dma_complete 0x40000 -#define dmascr_chain_complete 0x80000 - -/* -3.some pcs have host bridges which apparently do not correctly handle -read-line (0xe) or read-multiple (0xc) pci command codes during dma -transfers. in other systems these command codes will cause the host bridge -to allow longer bursts during dma read operations. the upper four bits -(31..28) of the dma csr have been made programmable, so that either a 0x6, -an 0xe or a 0xc can be written to them to set the command code used during -dma read operations. -*/ -#define dmascr_read 0x60000000 -#define dmascr_readline 0xe0000000 -#define dmascr_readmulti 0xc0000000 - - -#define dmascr_error_mask (dmascr_master_abt | dmascr_target_abt | dmascr_system_err_sig | dmascr_parity_err_det | dmascr_mbe_err | dmascr_any_err) -#define dmascr_hard_error (dmascr_master_abt | dmascr_target_abt | dmascr_system_err_sig | dmascr_parity_err_det | dmascr_mbe_err) - -#define windowmap_winnum 0x7b - -#define dma_read_from_host 0 -#define dma_write_to_host 1 - -struct mm_dma_desc { - __le64 pci_addr; - __le64 local_addr; - __le32 transfer_size; - u32 zero1; - __le64 next_desc_addr; - __le64 sem_addr; - __le32 control_bits; - u32 zero2; - - dma_addr_t data_dma_handle; - - /* copy of the bits */ - __le64 sem_control_bits; -} __attribute__((aligned(8))); - -/* bits for card->flags */ -#define um_flag_dma_in_regs 1 -#define um_flag_no_byte_status 2 -#define um_flag_no_battreg 4 -#define um_flag_no_batt 8 -#endif
|
Drivers in the Staging area
|
14d97622448acbea0348be62f62e25d9a361e16b
|
davidlohr bueso neilbrown neilb suse de christoph hellwig hch infradead org
|
drivers
|
block
|
configs
|
net: dsa: check tx timestamp request in core driver
|
Check the TX timestamp request in the core driver at the very beginning of dsa_skb_tx_timestamp(), so that most skbs, which do not require a TX timestamp, just return early. Drop this check from the device drivers.
|
This release includes the Landlock security module, which aims to make it easier to sandbox applications; support for Clang control-flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent TLB flushing; preparatory Apple M1 support; support for upcoming AMD and Intel graphics chips; BPF support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multishot mode; and a misc cgroup for miscellaneous resources. As always, there are many other features, new drivers, improvements and fixes.
|
Support Ocelot PTP Sync one-step timestamping
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
[]
|
['c']
| 4
| 4
| 9
|
--- diff --git a/drivers/net/dsa/hirschmann/hellcreek_hwtstamp.c b/drivers/net/dsa/hirschmann/hellcreek_hwtstamp.c --- a/drivers/net/dsa/hirschmann/hellcreek_hwtstamp.c +++ b/drivers/net/dsa/hirschmann/hellcreek_hwtstamp.c - /* check if the driver is expected to do hw timestamping */ - if (!(skb_shinfo(clone)->tx_flags & skbtx_hw_tstamp)) - return false; - diff --git a/drivers/net/dsa/mv88e6xxx/hwtstamp.c b/drivers/net/dsa/mv88e6xxx/hwtstamp.c --- a/drivers/net/dsa/mv88e6xxx/hwtstamp.c +++ b/drivers/net/dsa/mv88e6xxx/hwtstamp.c - if (!(skb_shinfo(clone)->tx_flags & skbtx_hw_tstamp)) - return false; - diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c --- a/drivers/net/dsa/ocelot/felix.c +++ b/drivers/net/dsa/ocelot/felix.c - if (ocelot->ptp && (skb_shinfo(clone)->tx_flags & skbtx_hw_tstamp) && - ocelot_port->ptp_cmd == ifh_rew_op_two_step_ptp) { + if (ocelot->ptp && ocelot_port->ptp_cmd == ifh_rew_op_two_step_ptp) { diff --git a/net/dsa/slave.c b/net/dsa/slave.c --- a/net/dsa/slave.c +++ b/net/dsa/slave.c + if (!(skb_shinfo(skb)->tx_flags & skbtx_hw_tstamp)) + return; +
|
Networking
|
cfd12c06cdceac094aab3f097cce24c279bfd43b
|
yangbo lu richard cochran richardcochran gmail com florian fainelli f fainelli gmail com kurt kanzenbach kurt linutronix de
|
drivers
|
net
|
dsa, hirschmann, mv88e6xxx, ocelot
|
net: dsa: no longer identify ptp packet in core driver
|
Move ptp_classify_raw out of the DSA core driver's handling of TX timestamp requests. Let device drivers do this classification themselves if they want it. Not all drivers want to limit TX timestamping to PTP packets only.
|
This release includes the Landlock security module, which aims to make it easier to sandbox applications; support for Clang control-flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent TLB flushing; preparatory Apple M1 support; support for upcoming AMD and Intel graphics chips; BPF support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multishot mode; and a misc cgroup for miscellaneous resources. As always, there are many other features, new drivers, improvements and fixes.
|
Support Ocelot PTP Sync one-step timestamping
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
[]
|
['h', 'c']
| 9
| 21
| 21
|
--- diff --git a/drivers/net/dsa/hirschmann/hellcreek_hwtstamp.c b/drivers/net/dsa/hirschmann/hellcreek_hwtstamp.c --- a/drivers/net/dsa/hirschmann/hellcreek_hwtstamp.c +++ b/drivers/net/dsa/hirschmann/hellcreek_hwtstamp.c - struct sk_buff *clone, unsigned int type) + struct sk_buff *clone) + unsigned int type; + type = ptp_classify_raw(clone); + if (type == ptp_class_none) + return false; + diff --git a/drivers/net/dsa/hirschmann/hellcreek_hwtstamp.h b/drivers/net/dsa/hirschmann/hellcreek_hwtstamp.h --- a/drivers/net/dsa/hirschmann/hellcreek_hwtstamp.h +++ b/drivers/net/dsa/hirschmann/hellcreek_hwtstamp.h - struct sk_buff *clone, unsigned int type); + struct sk_buff *clone); diff --git a/drivers/net/dsa/mv88e6xxx/hwtstamp.c b/drivers/net/dsa/mv88e6xxx/hwtstamp.c --- a/drivers/net/dsa/mv88e6xxx/hwtstamp.c +++ b/drivers/net/dsa/mv88e6xxx/hwtstamp.c - struct sk_buff *clone, unsigned int type) + struct sk_buff *clone) + unsigned int type; + + type = ptp_classify_raw(clone); + if (type == ptp_class_none) + return false; diff --git a/drivers/net/dsa/mv88e6xxx/hwtstamp.h b/drivers/net/dsa/mv88e6xxx/hwtstamp.h --- a/drivers/net/dsa/mv88e6xxx/hwtstamp.h +++ b/drivers/net/dsa/mv88e6xxx/hwtstamp.h - struct sk_buff *clone, unsigned int type); + struct sk_buff *clone); - struct sk_buff *clone, - unsigned int type) + struct sk_buff *clone) diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c --- a/drivers/net/dsa/ocelot/felix.c +++ b/drivers/net/dsa/ocelot/felix.c - struct sk_buff *clone, unsigned int type) + struct sk_buff *clone) diff --git a/drivers/net/dsa/sja1105/sja1105_ptp.c b/drivers/net/dsa/sja1105/sja1105_ptp.c --- a/drivers/net/dsa/sja1105/sja1105_ptp.c +++ b/drivers/net/dsa/sja1105/sja1105_ptp.c -bool sja1105_port_txtstamp(struct dsa_switch *ds, int port, - struct sk_buff *skb, unsigned int type) +bool sja1105_port_txtstamp(struct dsa_switch *ds, int port, struct sk_buff *skb) diff --git a/drivers/net/dsa/sja1105/sja1105_ptp.h 
b/drivers/net/dsa/sja1105/sja1105_ptp.h --- a/drivers/net/dsa/sja1105/sja1105_ptp.h +++ b/drivers/net/dsa/sja1105/sja1105_ptp.h - struct sk_buff *skb, unsigned int type); + struct sk_buff *skb); diff --git a/include/net/dsa.h b/include/net/dsa.h --- a/include/net/dsa.h +++ b/include/net/dsa.h - struct sk_buff *clone, unsigned int type); + struct sk_buff *clone); diff --git a/net/dsa/slave.c b/net/dsa/slave.c --- a/net/dsa/slave.c +++ b/net/dsa/slave.c -#include <linux/ptp_classify.h> - unsigned int type; - type = ptp_classify_raw(skb); - if (type == ptp_class_none) - return; - - if (ds->ops->port_txtstamp(ds, p->dp->index, clone, type)) { + if (ds->ops->port_txtstamp(ds, p->dp->index, clone)) { - /* identify ptp protocol packets, clone them, and pass them to the - * switch driver - */ + /* handle tx timestamp if any */
|
Networking
|
cf536ea3c7eefb26082836eb7f930b293dd38345
|
yangbo lu richard cochran richardcochran gmail com kurt kanzenbach kurt linutronix de
|
drivers
|
net
|
dsa, hirschmann, mv88e6xxx, ocelot, sja1105
|
net: dsa: no longer clone skb in core driver
|
It was wasteful to clone the skb unconditionally in dsa_skb_tx_timestamp(). For one-step timestamping, a clone is not needed at all, and on any failure of port_txtstamp (which may commonly happen) the skb clone had to be freed.
|
This release includes the Landlock security module, which aims to make it easier to sandbox applications; support for Clang control-flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent TLB flushing; preparatory Apple M1 support; support for upcoming AMD and Intel graphics chips; BPF support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multishot mode; and a misc cgroup for miscellaneous resources. As always, there are many other features, new drivers, improvements and fixes.
|
Support Ocelot PTP Sync one-step timestamping
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
[]
|
['h', 'c']
| 9
| 57
| 49
|
--- diff --git a/drivers/net/dsa/hirschmann/hellcreek_hwtstamp.c b/drivers/net/dsa/hirschmann/hellcreek_hwtstamp.c --- a/drivers/net/dsa/hirschmann/hellcreek_hwtstamp.c +++ b/drivers/net/dsa/hirschmann/hellcreek_hwtstamp.c -bool hellcreek_port_txtstamp(struct dsa_switch *ds, int port, - struct sk_buff *clone) +void hellcreek_port_txtstamp(struct dsa_switch *ds, int port, + struct sk_buff *skb) + struct sk_buff *clone; - type = ptp_classify_raw(clone); + type = ptp_classify_raw(skb); - return false; + return; - hdr = hellcreek_should_tstamp(hellcreek, port, clone, type); + hdr = hellcreek_should_tstamp(hellcreek, port, skb, type); - return false; + return; + + clone = skb_clone_sk(skb); + if (!clone) + return; - &ps->state)) - return false; + &ps->state)) { + kfree_skb(clone); + return; + } - - return true; diff --git a/drivers/net/dsa/hirschmann/hellcreek_hwtstamp.h b/drivers/net/dsa/hirschmann/hellcreek_hwtstamp.h --- a/drivers/net/dsa/hirschmann/hellcreek_hwtstamp.h +++ b/drivers/net/dsa/hirschmann/hellcreek_hwtstamp.h -bool hellcreek_port_txtstamp(struct dsa_switch *ds, int port, - struct sk_buff *clone); +void hellcreek_port_txtstamp(struct dsa_switch *ds, int port, + struct sk_buff *skb); diff --git a/drivers/net/dsa/mv88e6xxx/hwtstamp.c b/drivers/net/dsa/mv88e6xxx/hwtstamp.c --- a/drivers/net/dsa/mv88e6xxx/hwtstamp.c +++ b/drivers/net/dsa/mv88e6xxx/hwtstamp.c -bool mv88e6xxx_port_txtstamp(struct dsa_switch *ds, int port, - struct sk_buff *clone) +void mv88e6xxx_port_txtstamp(struct dsa_switch *ds, int port, + struct sk_buff *skb) + struct sk_buff *clone; - type = ptp_classify_raw(clone); + type = ptp_classify_raw(skb); - return false; + return; - hdr = mv88e6xxx_should_tstamp(chip, port, clone, type); + hdr = mv88e6xxx_should_tstamp(chip, port, skb, type); - return false; + return; + + clone = skb_clone_sk(skb); + if (!clone) + return; - &ps->state)) - return false; + &ps->state)) { + kfree_skb(clone); + return; + } - return true; diff --git 
a/drivers/net/dsa/mv88e6xxx/hwtstamp.h b/drivers/net/dsa/mv88e6xxx/hwtstamp.h --- a/drivers/net/dsa/mv88e6xxx/hwtstamp.h +++ b/drivers/net/dsa/mv88e6xxx/hwtstamp.h -bool mv88e6xxx_port_txtstamp(struct dsa_switch *ds, int port, - struct sk_buff *clone); +void mv88e6xxx_port_txtstamp(struct dsa_switch *ds, int port, + struct sk_buff *skb); -static inline bool mv88e6xxx_port_txtstamp(struct dsa_switch *ds, int port, - struct sk_buff *clone) +static inline void mv88e6xxx_port_txtstamp(struct dsa_switch *ds, int port, + struct sk_buff *skb) - return false; diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c --- a/drivers/net/dsa/ocelot/felix.c +++ b/drivers/net/dsa/ocelot/felix.c -static bool felix_txtstamp(struct dsa_switch *ds, int port, - struct sk_buff *clone) +static void felix_txtstamp(struct dsa_switch *ds, int port, + struct sk_buff *skb) + struct sk_buff *clone; + clone = skb_clone_sk(skb); + if (!clone) + return; + - return true; + dsa_skb_cb(skb)->clone = clone; - - return false; diff --git a/drivers/net/dsa/sja1105/sja1105_ptp.c b/drivers/net/dsa/sja1105/sja1105_ptp.c --- a/drivers/net/dsa/sja1105/sja1105_ptp.c +++ b/drivers/net/dsa/sja1105/sja1105_ptp.c -/* called from dsa_skb_tx_timestamp. this callback is just to make dsa clone +/* called from dsa_skb_tx_timestamp. 
this callback is just to clone -bool sja1105_port_txtstamp(struct dsa_switch *ds, int port, struct sk_buff *skb) +void sja1105_port_txtstamp(struct dsa_switch *ds, int port, struct sk_buff *skb) + struct sk_buff *clone; - return false; + return; - return true; + clone = skb_clone_sk(skb); + if (!clone) + return; + + dsa_skb_cb(skb)->clone = clone; diff --git a/drivers/net/dsa/sja1105/sja1105_ptp.h b/drivers/net/dsa/sja1105/sja1105_ptp.h --- a/drivers/net/dsa/sja1105/sja1105_ptp.h +++ b/drivers/net/dsa/sja1105/sja1105_ptp.h -bool sja1105_port_txtstamp(struct dsa_switch *ds, int port, +void sja1105_port_txtstamp(struct dsa_switch *ds, int port, diff --git a/include/net/dsa.h b/include/net/dsa.h --- a/include/net/dsa.h +++ b/include/net/dsa.h - bool (*port_txtstamp)(struct dsa_switch *ds, int port, - struct sk_buff *clone); + void (*port_txtstamp)(struct dsa_switch *ds, int port, + struct sk_buff *skb); diff --git a/net/dsa/slave.c b/net/dsa/slave.c --- a/net/dsa/slave.c +++ b/net/dsa/slave.c - struct sk_buff *clone; - clone = skb_clone_sk(skb); - if (!clone) - return; - - if (ds->ops->port_txtstamp(ds, p->dp->index, clone)) { - dsa_skb_cb(skb)->clone = clone; - return; - } - - kfree_skb(clone); + ds->ops->port_txtstamp(ds, p->dp->index, skb);
|
Networking
|
5c5416f5d4c75fe6aba56f6c2c45a070b5e7cc78
|
yangbo lu
|
drivers
|
net
|
dsa, hirschmann, mv88e6xxx, ocelot, sja1105
|
net: dsa: free skb->cb usage in core driver
|
Drop the skb->cb usage from the core driver and let device drivers decide whether to use it. The reason for having dsa_skb_cb(skb)->clone was that dsa_skb_tx_timestamp(), which may set the clone pointer, was called before p->xmit(), which would use the clone if any, and the device driver had no way to initialize the clone pointer.
|
This release includes the Landlock security module, which aims to make it easier to sandbox applications; support for Clang control-flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent TLB flushing; preparatory Apple M1 support; support for upcoming AMD and Intel graphics chips; BPF support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multishot mode; and a misc cgroup for miscellaneous resources. As always, there are many other features, new drivers, improvements and fixes.
|
Support Ocelot PTP Sync one-step timestamping
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
[]
|
['h', 'c']
| 11
| 27
| 32
|
--- diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c --- a/drivers/net/dsa/ocelot/felix.c +++ b/drivers/net/dsa/ocelot/felix.c - dsa_skb_cb(skb)->clone = clone; + ocelot_skb_cb(skb)->clone = clone; diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c --- a/drivers/net/dsa/sja1105/sja1105_main.c +++ b/drivers/net/dsa/sja1105/sja1105_main.c - struct sk_buff *clone = dsa_skb_cb(skb)->clone; + struct sk_buff *clone = sja1105_skb_cb(skb)->clone; diff --git a/drivers/net/dsa/sja1105/sja1105_ptp.c b/drivers/net/dsa/sja1105/sja1105_ptp.c --- a/drivers/net/dsa/sja1105/sja1105_ptp.c +++ b/drivers/net/dsa/sja1105/sja1105_ptp.c - * the skb and have it available in dsa_skb_cb in the .port_deferred_xmit + * the skb and have it available in sja1105_skb_cb in the .port_deferred_xmit - dsa_skb_cb(skb)->clone = clone; + sja1105_skb_cb(skb)->clone = clone; diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c --- a/drivers/net/ethernet/mscc/ocelot.c +++ b/drivers/net/ethernet/mscc/ocelot.c - /* store timestamp id in cb[0] of sk_buff */ - clone->cb[0] = ocelot_port->ts_id; + /* store timestamp id in ocelot_skb_cb(clone)->ts_id */ + ocelot_skb_cb(clone)->ts_id = ocelot_port->ts_id; - if (skb->cb[0] != id) + if (ocelot_skb_cb(skb)->ts_id != id) diff --git a/drivers/net/ethernet/mscc/ocelot_net.c b/drivers/net/ethernet/mscc/ocelot_net.c --- a/drivers/net/ethernet/mscc/ocelot_net.c +++ b/drivers/net/ethernet/mscc/ocelot_net.c - rew_op |= clone->cb[0] << 3; + rew_op |= ocelot_skb_cb(clone)->ts_id << 3; diff --git a/include/linux/dsa/sja1105.h b/include/linux/dsa/sja1105.h --- a/include/linux/dsa/sja1105.h +++ b/include/linux/dsa/sja1105.h + struct sk_buff *clone; - ((struct sja1105_skb_cb *)dsa_skb_cb_priv(skb)) + ((struct sja1105_skb_cb *)((skb)->cb)) diff --git a/include/net/dsa.h b/include/net/dsa.h --- a/include/net/dsa.h +++ b/include/net/dsa.h -struct dsa_skb_cb { - struct sk_buff 
*clone; -}; - -struct __dsa_skb_cb { - struct dsa_skb_cb cb; - u8 priv[48 - sizeof(struct dsa_skb_cb)]; -}; - -#define dsa_skb_cb(skb) ((struct dsa_skb_cb *)((skb)->cb)) - -#define dsa_skb_cb_priv(skb) \ - ((void *)(skb)->cb + offsetof(struct __dsa_skb_cb, priv)) - diff --git a/include/soc/mscc/ocelot.h b/include/soc/mscc/ocelot.h --- a/include/soc/mscc/ocelot.h +++ b/include/soc/mscc/ocelot.h +struct ocelot_skb_cb { + struct sk_buff *clone; + u8 ts_id; +}; + +#define ocelot_skb_cb(skb) \ + ((struct ocelot_skb_cb *)((skb)->cb)) + diff --git a/net/dsa/slave.c b/net/dsa/slave.c --- a/net/dsa/slave.c +++ b/net/dsa/slave.c - dsa_skb_cb(skb)->clone = null; + memset(skb->cb, 0, sizeof(skb->cb)); diff --git a/net/dsa/tag_ocelot.c b/net/dsa/tag_ocelot.c --- a/net/dsa/tag_ocelot.c +++ b/net/dsa/tag_ocelot.c - /* retrieve timestamp id populated inside skb->cb[0] of the - * clone by ocelot_port_add_txtstamp_skb + /* retrieve timestamp id populated inside ocelot_skb_cb(clone)->ts_id + * by ocelot_port_add_txtstamp_skb - rew_op |= clone->cb[0] << 3; + rew_op |= ocelot_skb_cb(clone)->ts_id << 3; - struct sk_buff *clone = dsa_skb_cb(skb)->clone; + struct sk_buff *clone = ocelot_skb_cb(skb)->clone; diff --git a/net/dsa/tag_ocelot_8021q.c b/net/dsa/tag_ocelot_8021q.c --- a/net/dsa/tag_ocelot_8021q.c +++ b/net/dsa/tag_ocelot_8021q.c - /* retrieve timestamp id populated inside skb->cb[0] of the - * clone by ocelot_port_add_txtstamp_skb + /* retrieve timestamp id populated inside ocelot_skb_cb(clone)->ts_id + * by ocelot_port_add_txtstamp_skb - rew_op |= clone->cb[0] << 3; + rew_op |= ocelot_skb_cb(clone)->ts_id << 3; - struct sk_buff *clone = dsa_skb_cb(skb)->clone; + struct sk_buff *clone = ocelot_skb_cb(skb)->clone;
|
Networking
|
c4b364ce1270d689ee5010001344b8eae3685f32
|
yangbo lu
|
drivers
|
net
|
dsa, ethernet, mscc, ocelot, sja1105
|
docs: networking: timestamping: update for dsa switches
|
Update the timestamping documentation for DSA switches to describe the current implementation accurately. On TX, the skb cloning is no longer done in DSA generic code.
|
This release includes the Landlock security module, which aims to make it easier to sandbox applications; support for Clang control-flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent TLB flushing; preparatory Apple M1 support; support for upcoming AMD and Intel graphics chips; BPF support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multishot mode; and a misc cgroup for miscellaneous resources. As always, there are many other features, new drivers, improvements and fixes.
|
Support Ocelot PTP Sync one-step timestamping
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
[]
|
['rst']
| 1
| 39
| 24
|
--- diff --git a/documentation/networking/timestamping.rst b/documentation/networking/timestamping.rst --- a/documentation/networking/timestamping.rst +++ b/documentation/networking/timestamping.rst -in code, dsa provides for most of the infrastructure for timestamping already, -in generic code: a bpf classifier (''ptp_classify_raw'') is used to identify -ptp event messages (any other packets, including ptp general messages, are not -timestamped), and provides two hooks to drivers: - -- ''.port_txtstamp()'': the driver is passed a clone of the timestampable skb - to be transmitted, before actually transmitting it. typically, a switch will - have a ptp tx timestamp register (or sometimes a fifo) where the timestamp - becomes available. there may be an irq that is raised upon this timestamp's - availability, or the driver might have to poll after invoking - ''dev_queue_xmit()'' towards the host interface. either way, in the - ''.port_txtstamp()'' method, the driver only needs to save the clone for - later use (when the timestamp becomes available). each skb is annotated with - a pointer to its clone, in ''dsa_skb_cb(skb)->clone'', to ease the driver's - job of keeping track of which clone belongs to which skb. - -- ''.port_rxtstamp()'': the original (and only) timestampable skb is provided - to the driver, for it to annotate it with a timestamp, if that is immediately - available, or defer to later. on reception, timestamps might either be - available in-band (through metadata in the dsa header, or attached in other - ways to the packet), or out-of-band (through another rx timestamping fifo). - deferral on rx is typically necessary when retrieving the timestamp needs a - sleepable context. in that case, it is the responsibility of the dsa driver - to call ''netif_rx_ni()'' on the freshly timestamped skb. 
+in the generic layer, dsa provides the following infrastructure for ptp +timestamping: + +- ''.port_txtstamp()'': a hook called prior to the transmission of + packets with a hardware tx timestamping request from user space. + this is required for two-step timestamping, since the hardware + timestamp becomes available after the actual mac transmission, so the + driver must be prepared to correlate the timestamp with the original + packet so that it can re-enqueue the packet back into the socket's + error queue. to save the packet for when the timestamp becomes + available, the driver can call ''skb_clone_sk'' , save the clone pointer + in skb->cb and enqueue a tx skb queue. typically, a switch will have a + ptp tx timestamp register (or sometimes a fifo) where the timestamp + becomes available. in case of a fifo, the hardware might store + key-value pairs of ptp sequence id/message type/domain number and the + actual timestamp. to perform the correlation correctly between the + packets in a queue waiting for timestamping and the actual timestamps, + drivers can use a bpf classifier (''ptp_classify_raw'') to identify + the ptp transport type, and ''ptp_parse_header'' to interpret the ptp + header fields. there may be an irq that is raised upon this + timestamp's availability, or the driver might have to poll after + invoking ''dev_queue_xmit()'' towards the host interface. + one-step tx timestamping do not require packet cloning, since there is + no follow-up message required by the ptp protocol (because the + tx timestamp is embedded into the packet by the mac), and therefore + user space does not expect the packet annotated with the tx timestamp + to be re-enqueued into its socket's error queue. + +- ''.port_rxtstamp()'': on rx, the bpf classifier is run by dsa to + identify ptp event messages (any other packets, including ptp general + messages, are not timestamped). 
the original (and only) timestampable + skb is provided to the driver, for it to annotate it with a timestamp, + if that is immediately available, or defer to later. on reception, + timestamps might either be available in-band (through metadata in the + dsa header, or attached in other ways to the packet), or out-of-band + (through another rx timestamping fifo). deferral on rx is typically + necessary when retrieving the timestamp needs a sleepable context. in + that case, it is the responsibility of the dsa driver to call + ''netif_rx_ni()'' on the freshly timestamped skb.
|
Networking
|
d150946ed878d566ac55003b4722621bb55d9ac2
|
yangbo lu richard cochran richardcochran gmail com
|
documentation
|
networking
| |
net: mscc: ocelot: convert to ocelot_port_txtstamp_request()
|
Convert to a common ocelot_port_txtstamp_request() function for TX timestamp request handling.
|
This release includes the Landlock security module, which aims to make it easier to sandbox applications; support for Clang control-flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent TLB flushing; preparatory Apple M1 support; support for upcoming AMD and Intel graphics chips; BPF support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multishot mode; and a misc cgroup for miscellaneous resources. As always, there are many other features, new drivers, improvements and fixes.
|
Support Ocelot PTP Sync one-step timestamping
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
[]
|
['h', 'c']
| 4
| 38
| 24
|
--- diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c --- a/drivers/net/dsa/ocelot/felix.c +++ b/drivers/net/dsa/ocelot/felix.c - struct ocelot_port *ocelot_port = ocelot->ports[port]; - struct sk_buff *clone; + struct sk_buff *clone = null; - if (ocelot->ptp && ocelot_port->ptp_cmd == ifh_rew_op_two_step_ptp) { - clone = skb_clone_sk(skb); - if (!clone) - return; + if (!ocelot->ptp) + return; - ocelot_port_add_txtstamp_skb(ocelot, port, clone); + if (ocelot_port_txtstamp_request(ocelot, port, skb, &clone)) + return; + + if (clone) - } diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c --- a/drivers/net/ethernet/mscc/ocelot.c +++ b/drivers/net/ethernet/mscc/ocelot.c -void ocelot_port_add_txtstamp_skb(struct ocelot *ocelot, int port, - struct sk_buff *clone) +static void ocelot_port_add_txtstamp_skb(struct ocelot *ocelot, int port, + struct sk_buff *clone) -export_symbol(ocelot_port_add_txtstamp_skb); + +int ocelot_port_txtstamp_request(struct ocelot *ocelot, int port, + struct sk_buff *skb, + struct sk_buff **clone) +{ + struct ocelot_port *ocelot_port = ocelot->ports[port]; + u8 ptp_cmd = ocelot_port->ptp_cmd; + + if (ptp_cmd == ifh_rew_op_two_step_ptp) { + *clone = skb_clone_sk(skb); + if (!(*clone)) + return -enomem; + + ocelot_port_add_txtstamp_skb(ocelot, port, *clone); + } + + return 0; +} +export_symbol(ocelot_port_txtstamp_request); diff --git a/drivers/net/ethernet/mscc/ocelot_net.c b/drivers/net/ethernet/mscc/ocelot_net.c --- a/drivers/net/ethernet/mscc/ocelot_net.c +++ b/drivers/net/ethernet/mscc/ocelot_net.c - rew_op = ocelot_port->ptp_cmd; + struct sk_buff *clone = null; - if (ocelot_port->ptp_cmd == ifh_rew_op_two_step_ptp) { - struct sk_buff *clone; - - clone = skb_clone_sk(skb); - if (!clone) { - kfree_skb(skb); - return netdev_tx_ok; - } - - ocelot_port_add_txtstamp_skb(ocelot, port, clone); + if (ocelot_port_txtstamp_request(ocelot, port, skb, &clone)) { + kfree_skb(skb); + return 
netdev_tx_ok; + } + if (ocelot_port->ptp_cmd == ifh_rew_op_two_step_ptp) { + rew_op = ocelot_port->ptp_cmd; diff --git a/include/soc/mscc/ocelot.h b/include/soc/mscc/ocelot.h --- a/include/soc/mscc/ocelot.h +++ b/include/soc/mscc/ocelot.h -void ocelot_port_add_txtstamp_skb(struct ocelot *ocelot, int port, - struct sk_buff *clone); +int ocelot_port_txtstamp_request(struct ocelot *ocelot, int port, + struct sk_buff *skb, + struct sk_buff **clone);
|
Networking
|
682eaad93e8cfaaa439af39861ab8610eae5ff33
|
yangbo lu richard cochran richardcochran gmail com vladimir oltean vladimir oltean nxp com
|
include
|
soc
|
dsa, ethernet, mscc, ocelot
|
net: mscc: ocelot: support ptp sync one-step timestamping
|
although hwtstamp_tx_onestep_sync existed in the ioctl interface for hardware timestamp configuration, ptp sync one-step timestamping had never been supported.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
support ocelot ptp sync one-step timestamping
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
[]
|
['kconfig', 'h', 'c']
| 6
| 81
| 58
|
- ocelot_port_txtstamp_request() - ocelot_ptp_rew_op() --- diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c --- a/drivers/net/ethernet/mscc/ocelot.c +++ b/drivers/net/ethernet/mscc/ocelot.c +#include <linux/ptp_classify.h> +u32 ocelot_ptp_rew_op(struct sk_buff *skb) +{ + struct sk_buff *clone = ocelot_skb_cb(skb)->clone; + u8 ptp_cmd = ocelot_skb_cb(skb)->ptp_cmd; + u32 rew_op = 0; + + if (ptp_cmd == ifh_rew_op_two_step_ptp && clone) { + rew_op = ptp_cmd; + rew_op |= ocelot_skb_cb(clone)->ts_id << 3; + } else if (ptp_cmd == ifh_rew_op_origin_ptp) { + rew_op = ptp_cmd; + } + + return rew_op; +} +export_symbol(ocelot_ptp_rew_op); + +static bool ocelot_ptp_is_onestep_sync(struct sk_buff *skb) +{ + struct ptp_header *hdr; + unsigned int ptp_class; + u8 msgtype, twostep; + + ptp_class = ptp_classify_raw(skb); + if (ptp_class == ptp_class_none) + return false; + + hdr = ptp_parse_header(skb, ptp_class); + if (!hdr) + return false; + + msgtype = ptp_get_msgtype(hdr, ptp_class); + twostep = hdr->flag_field[0] & 0x2; + + if (msgtype == ptp_msgtype_sync && twostep == 0) + return true; + + return false; +} + + /* store ptp_cmd in ocelot_skb_cb(skb)->ptp_cmd */ + if (ptp_cmd == ifh_rew_op_origin_ptp) { + if (ocelot_ptp_is_onestep_sync(skb)) { + ocelot_skb_cb(skb)->ptp_cmd = ptp_cmd; + return 0; + } + + /* fall back to two-step timestamping */ + ptp_cmd = ifh_rew_op_two_step_ptp; + } + + ocelot_skb_cb(skb)->ptp_cmd = ptp_cmd; diff --git a/drivers/net/ethernet/mscc/ocelot_net.c b/drivers/net/ethernet/mscc/ocelot_net.c --- a/drivers/net/ethernet/mscc/ocelot_net.c +++ b/drivers/net/ethernet/mscc/ocelot_net.c - if (ocelot_port->ptp_cmd == ifh_rew_op_two_step_ptp) { - rew_op = ocelot_port->ptp_cmd; - rew_op |= ocelot_skb_cb(clone)->ts_id << 3; - } + if (clone) + ocelot_skb_cb(skb)->clone = clone; + + rew_op = ocelot_ptp_rew_op(skb); diff --git a/include/soc/mscc/ocelot.h b/include/soc/mscc/ocelot.h --- a/include/soc/mscc/ocelot.h +++ 
b/include/soc/mscc/ocelot.h + u8 ptp_cmd; -/* packet i/o */ +/* packet i/o */ +u32 ocelot_ptp_rew_op(struct sk_buff *skb); +static inline u32 ocelot_ptp_rew_op(struct sk_buff *skb) +{ + return 0; +} diff --git a/net/dsa/kconfig b/net/dsa/kconfig --- a/net/dsa/kconfig +++ b/net/dsa/kconfig + depends on mscc_ocelot_switch_lib || \ + (mscc_ocelot_switch_lib=n && compile_test) diff --git a/net/dsa/tag_ocelot.c b/net/dsa/tag_ocelot.c --- a/net/dsa/tag_ocelot.c +++ b/net/dsa/tag_ocelot.c -static void ocelot_xmit_ptp(struct dsa_port *dp, void *injection, - struct sk_buff *clone) -{ - struct ocelot *ocelot = dp->ds->priv; - struct ocelot_port *ocelot_port; - u64 rew_op; - - ocelot_port = ocelot->ports[dp->index]; - rew_op = ocelot_port->ptp_cmd; - - /* retrieve timestamp id populated inside ocelot_skb_cb(clone)->ts_id - * by ocelot_port_add_txtstamp_skb - */ - if (ocelot_port->ptp_cmd == ifh_rew_op_two_step_ptp) - rew_op |= ocelot_skb_cb(clone)->ts_id << 3; - - ocelot_ifh_set_rew_op(injection, rew_op); -} - - struct sk_buff *clone = ocelot_skb_cb(skb)->clone; + u32 rew_op = 0; - /* tx timestamping was requested */ - if (clone) - ocelot_xmit_ptp(dp, injection, clone); + rew_op = ocelot_ptp_rew_op(skb); + if (rew_op) + ocelot_ifh_set_rew_op(injection, rew_op); diff --git a/net/dsa/tag_ocelot_8021q.c b/net/dsa/tag_ocelot_8021q.c --- a/net/dsa/tag_ocelot_8021q.c +++ b/net/dsa/tag_ocelot_8021q.c -static struct sk_buff *ocelot_xmit_ptp(struct dsa_port *dp, - struct sk_buff *skb, - struct sk_buff *clone) -{ - struct ocelot *ocelot = dp->ds->priv; - struct ocelot_port *ocelot_port; - int port = dp->index; - u32 rew_op; - - if (!ocelot_can_inject(ocelot, 0)) - return null; - - ocelot_port = ocelot->ports[port]; - rew_op = ocelot_port->ptp_cmd; - - /* retrieve timestamp id populated inside ocelot_skb_cb(clone)->ts_id - * by ocelot_port_add_txtstamp_skb - */ - if (ocelot_port->ptp_cmd == ifh_rew_op_two_step_ptp) - rew_op |= ocelot_skb_cb(clone)->ts_id << 3; - - 
ocelot_port_inject_frame(ocelot, port, 0, rew_op, skb); - - return null; -} - - struct sk_buff *clone = ocelot_skb_cb(skb)->clone; + struct ocelot *ocelot = dp->ds->priv; + int port = dp->index; + u32 rew_op = 0; + + rew_op = ocelot_ptp_rew_op(skb); + if (rew_op) { + if (!ocelot_can_inject(ocelot, 0)) + return null; - /* tx timestamping was requested, so inject through mmio */ - if (clone) - return ocelot_xmit_ptp(dp, skb, clone); + ocelot_port_inject_frame(ocelot, port, 0, rew_op, skb); + return null; + }
|
Networking
|
39e5308b3250666cc92c5ca33a667698ac645bd2
|
yangbo lu
|
include
|
soc
|
ethernet, mscc
|
net: mana: add a driver for microsoft azure network adapter (mana)
|
add a vf driver for microsoft azure network adapter (mana) that will be available in the future.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add a driver for microsoft azure network adapter (mana)
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['mana']
|
['c', 'h', 'kconfig', 'maintainers', 'makefile']
| 15
| 6,156
| 1
|
--- diff --git a/maintainers b/maintainers --- a/maintainers +++ b/maintainers -hyper-v core and drivers +hyper-v/azure core and drivers +m: dexuan cui <decui@microsoft.com> +f: drivers/net/ethernet/microsoft/ diff --git a/drivers/net/ethernet/kconfig b/drivers/net/ethernet/kconfig --- a/drivers/net/ethernet/kconfig +++ b/drivers/net/ethernet/kconfig +source "drivers/net/ethernet/microsoft/kconfig" diff --git a/drivers/net/ethernet/makefile b/drivers/net/ethernet/makefile --- a/drivers/net/ethernet/makefile +++ b/drivers/net/ethernet/makefile +obj-$(config_net_vendor_microsoft) += microsoft/ diff --git a/drivers/net/ethernet/microsoft/kconfig b/drivers/net/ethernet/microsoft/kconfig --- /dev/null +++ b/drivers/net/ethernet/microsoft/kconfig +# +# microsoft azure network device configuration +# + +config net_vendor_microsoft + bool "microsoft network devices" + default y + help + if you have a network (ethernet) device belonging to this class, say y. + + note that the answer to this question doesn't directly affect the + kernel: saying n will just cause the configurator to skip the + question about microsoft network devices. if you say y, you will be + asked for your specific device in the following question. + +if net_vendor_microsoft + +config microsoft_mana + tristate "microsoft azure network adapter (mana) support" + depends on pci_msi && x86_64 + select pci_hyperv + help + this driver supports microsoft azure network adapter (mana). + so far, the driver is only supported on x86_64. + + to compile this driver as a module, choose m here. + the module will be called mana. + +endif #net_vendor_microsoft diff --git a/drivers/net/ethernet/microsoft/makefile b/drivers/net/ethernet/microsoft/makefile --- /dev/null +++ b/drivers/net/ethernet/microsoft/makefile +# +# makefile for the microsoft azure network device driver. 
+# + +obj-$(config_microsoft_mana) += mana/ diff --git a/drivers/net/ethernet/microsoft/mana/makefile b/drivers/net/ethernet/microsoft/mana/makefile --- /dev/null +++ b/drivers/net/ethernet/microsoft/mana/makefile +# spdx-license-identifier: gpl-2.0 or bsd-3-clause +# +# makefile for the microsoft azure network adapter driver + +obj-$(config_microsoft_mana) += mana.o +mana-objs := gdma_main.o shm_channel.o hw_channel.o mana_en.o mana_ethtool.o diff --git a/drivers/net/ethernet/microsoft/mana/gdma.h b/drivers/net/ethernet/microsoft/mana/gdma.h --- /dev/null +++ b/drivers/net/ethernet/microsoft/mana/gdma.h +/* spdx-license-identifier: gpl-2.0 or bsd-3-clause */ +/* copyright (c) 2021, microsoft corporation. */ + +#ifndef _gdma_h +#define _gdma_h + +#include <linux/dma-mapping.h> +#include <linux/netdevice.h> + +#include "shm_channel.h" + +/* structures labeled with "hw data" are exchanged with the hardware. all of + * them are naturally aligned and hence don't need __packed. + */ + +enum gdma_request_type { + gdma_verify_vf_driver_version = 1, + gdma_query_max_resources = 2, + gdma_list_devices = 3, + gdma_register_device = 4, + gdma_deregister_device = 5, + gdma_generate_test_eqe = 10, + gdma_create_queue = 12, + gdma_disable_queue = 13, + gdma_create_dma_region = 25, + gdma_dma_region_add_pages = 26, + gdma_destroy_dma_region = 27, +}; + +enum gdma_queue_type { + gdma_invalid_queue, + gdma_sq, + gdma_rq, + gdma_cq, + gdma_eq, +}; + +enum gdma_work_request_flags { + gdma_wr_none = 0, + gdma_wr_oob_in_sgl = bit(0), + gdma_wr_pad_by_sge0 = bit(1), +}; + +enum gdma_eqe_type { + gdma_eqe_completion = 3, + gdma_eqe_test_event = 64, + gdma_eqe_hwc_init_eq_id_db = 129, + gdma_eqe_hwc_init_data = 130, + gdma_eqe_hwc_init_done = 131, +}; + +enum { + gdma_device_none = 0, + gdma_device_hwc = 1, + gdma_device_mana = 2, +}; + +struct gdma_resource { + /* protect the bitmap */ + spinlock_t lock; + + /* the bitmap size in bits. 
*/ + u32 size; + + /* the bitmap tracks the resources. */ + unsigned long *map; +}; + +union gdma_doorbell_entry { + u64 as_uint64; + + struct { + u64 id : 24; + u64 reserved : 8; + u64 tail_ptr : 31; + u64 arm : 1; + } cq; + + struct { + u64 id : 24; + u64 wqe_cnt : 8; + u64 tail_ptr : 32; + } rq; + + struct { + u64 id : 24; + u64 reserved : 8; + u64 tail_ptr : 32; + } sq; + + struct { + u64 id : 16; + u64 reserved : 16; + u64 tail_ptr : 31; + u64 arm : 1; + } eq; +}; /* hw data */ + +struct gdma_msg_hdr { + u32 hdr_type; + u32 msg_type; + u16 msg_version; + u16 hwc_msg_id; + u32 msg_size; +}; /* hw data */ + +struct gdma_dev_id { + union { + struct { + u16 type; + u16 instance; + }; + + u32 as_uint32; + }; +}; /* hw data */ + +struct gdma_req_hdr { + struct gdma_msg_hdr req; + struct gdma_msg_hdr resp; /* the expected response */ + struct gdma_dev_id dev_id; + u32 activity_id; +}; /* hw data */ + +struct gdma_resp_hdr { + struct gdma_msg_hdr response; + struct gdma_dev_id dev_id; + u32 activity_id; + u32 status; + u32 reserved; +}; /* hw data */ + +struct gdma_general_req { + struct gdma_req_hdr hdr; +}; /* hw data */ + +#define gdma_message_v1 1 + +struct gdma_general_resp { + struct gdma_resp_hdr hdr; +}; /* hw data */ + +#define gdma_standard_header_type 0 + +static inline void mana_gd_init_req_hdr(struct gdma_req_hdr *hdr, u32 code, + u32 req_size, u32 resp_size) +{ + hdr->req.hdr_type = gdma_standard_header_type; + hdr->req.msg_type = code; + hdr->req.msg_version = gdma_message_v1; + hdr->req.msg_size = req_size; + + hdr->resp.hdr_type = gdma_standard_header_type; + hdr->resp.msg_type = code; + hdr->resp.msg_version = gdma_message_v1; + hdr->resp.msg_size = resp_size; +} + +/* the 16-byte struct is part of the gdma work queue entry (wqe). 
*/ +struct gdma_sge { + u64 address; + u32 mem_key; + u32 size; +}; /* hw data */ + +struct gdma_wqe_request { + struct gdma_sge *sgl; + u32 num_sge; + + u32 inline_oob_size; + const void *inline_oob_data; + + u32 flags; + u32 client_data_unit; +}; + +enum gdma_page_type { + gdma_page_type_4k, +}; + +#define gdma_invalid_dma_region 0 + +struct gdma_mem_info { + struct device *dev; + + dma_addr_t dma_handle; + void *virt_addr; + u64 length; + + /* allocated by the pf driver */ + u64 gdma_region; +}; + +#define register_atb_mst_mkey_lower_size 8 + +struct gdma_dev { + struct gdma_context *gdma_context; + + struct gdma_dev_id dev_id; + + u32 pdid; + u32 doorbell; + u32 gpa_mkey; + + /* gdma driver specific pointer */ + void *driver_data; +}; + +#define minimum_supported_page_size page_size + +#define gdma_cqe_size 64 +#define gdma_eqe_size 16 +#define gdma_max_sqe_size 512 +#define gdma_max_rqe_size 256 + +#define gdma_comp_data_size 0x3c + +#define gdma_event_data_size 0xc + +/* the wqe size must be a multiple of the basic unit, which is 32 bytes. */ +#define gdma_wqe_bu_size 32 + +#define invalid_pdid uint_max +#define invalid_doorbell uint_max +#define invalid_mem_key uint_max +#define invalid_queue_id uint_max +#define invalid_pci_msix_index uint_max + +struct gdma_comp { + u32 cqe_data[gdma_comp_data_size / 4]; + u32 wq_num; + bool is_sq; +}; + +struct gdma_event { + u32 details[gdma_event_data_size / 4]; + u8 type; +}; + +struct gdma_queue; + +#define cqe_polling_buffer 512 +struct mana_eq { + struct gdma_queue *eq; + struct gdma_comp cqe_poll[cqe_polling_buffer]; +}; + +typedef void gdma_eq_callback(void *context, struct gdma_queue *q, + struct gdma_event *e); + +typedef void gdma_cq_callback(void *context, struct gdma_queue *q); + +/* the 'head' is the producer index. 
for sq/rq, when the driver posts a wqe + * (note: the wqe size must be a multiple of the 32-byte basic unit), the + * driver increases the 'head' in bus rather than in bytes, and notifies + * the hw of the updated head. for eq/cq, the driver uses the 'head' to track + * the hw head, and increases the 'head' by 1 for every processed eqe/cqe. + * + * the 'tail' is the consumer index for sq/rq. after the cqe of the sq/rq is + * processed, the driver increases the 'tail' to indicate that wqes have + * been consumed by the hw, so the driver can post new wqes into the sq/rq. + * + * the driver doesn't use the 'tail' for eq/cq, because the driver ensures + * that the eq/cq is big enough so they can't overflow, and the driver uses + * the owner bits mechanism to detect if the queue has become empty. + */ +struct gdma_queue { + struct gdma_dev *gdma_dev; + + enum gdma_queue_type type; + u32 id; + + struct gdma_mem_info mem_info; + + void *queue_mem_ptr; + u32 queue_size; + + bool monitor_avl_buf; + + u32 head; + u32 tail; + + /* extra fields specific to eq/cq. */ + union { + struct { + bool disable_needed; + + gdma_eq_callback *callback; + void *context; + + unsigned int msix_index; + + u32 log2_throttle_limit; + + /* napi data */ + struct napi_struct napi; + int work_done; + int budget; + } eq; + + struct { + gdma_cq_callback *callback; + void *context; + + struct gdma_queue *parent; /* for cq/eq relationship */ + } cq; + }; +}; + +struct gdma_queue_spec { + enum gdma_queue_type type; + bool monitor_avl_buf; + unsigned int queue_size; + + /* extra fields specific to eq/cq. */ + union { + struct { + gdma_eq_callback *callback; + void *context; + + unsigned long log2_throttle_limit; + + /* only used by the mana device. 
*/ + struct net_device *ndev; + } eq; + + struct { + gdma_cq_callback *callback; + void *context; + + struct gdma_queue *parent_eq; + + } cq; + }; +}; + +struct gdma_irq_context { + void (*handler)(void *arg); + void *arg; +}; + +struct gdma_context { + struct device *dev; + + /* per-vport max number of queues */ + unsigned int max_num_queues; + unsigned int max_num_msix; + unsigned int num_msix_usable; + struct gdma_resource msix_resource; + struct gdma_irq_context *irq_contexts; + + /* this maps a cq index to the queue structure. */ + unsigned int max_num_cqs; + struct gdma_queue **cq_table; + + /* protect eq_test_event and test_event_eq_id */ + struct mutex eq_test_event_mutex; + struct completion eq_test_event; + u32 test_event_eq_id; + + void __iomem *bar0_va; + void __iomem *shm_base; + void __iomem *db_page_base; + u32 db_page_size; + + /* shared memory chanenl (used to bootstrap hwc) */ + struct shm_channel shm_channel; + + /* hardware communication channel (hwc) */ + struct gdma_dev hwc; + + /* azure network adapter */ + struct gdma_dev mana; +}; + +#define max_num_gdma_devices 4 + +static inline bool mana_gd_is_mana(struct gdma_dev *gd) +{ + return gd->dev_id.type == gdma_device_mana; +} + +static inline bool mana_gd_is_hwc(struct gdma_dev *gd) +{ + return gd->dev_id.type == gdma_device_hwc; +} + +u8 *mana_gd_get_wqe_ptr(const struct gdma_queue *wq, u32 wqe_offset); +u32 mana_gd_wq_avail_space(struct gdma_queue *wq); + +int mana_gd_test_eq(struct gdma_context *gc, struct gdma_queue *eq); + +int mana_gd_create_hwc_queue(struct gdma_dev *gd, + const struct gdma_queue_spec *spec, + struct gdma_queue **queue_ptr); + +int mana_gd_create_mana_eq(struct gdma_dev *gd, + const struct gdma_queue_spec *spec, + struct gdma_queue **queue_ptr); + +int mana_gd_create_mana_wq_cq(struct gdma_dev *gd, + const struct gdma_queue_spec *spec, + struct gdma_queue **queue_ptr); + +void mana_gd_destroy_queue(struct gdma_context *gc, struct gdma_queue *queue); + +int 
mana_gd_poll_cq(struct gdma_queue *cq, struct gdma_comp *comp, int num_cqe); + +void mana_gd_arm_cq(struct gdma_queue *cq); + +struct gdma_wqe { + u32 reserved :24; + u32 last_vbytes :8; + + union { + u32 flags; + + struct { + u32 num_sge :8; + u32 inline_oob_size_div4:3; + u32 client_oob_in_sgl :1; + u32 reserved1 :4; + u32 client_data_unit :14; + u32 reserved2 :2; + }; + }; +}; /* hw data */ + +#define inline_oob_small_size 8 +#define inline_oob_large_size 24 + +#define max_tx_wqe_size 512 +#define max_rx_wqe_size 256 + +struct gdma_cqe { + u32 cqe_data[gdma_comp_data_size / 4]; + + union { + u32 as_uint32; + + struct { + u32 wq_num : 24; + u32 is_sq : 1; + u32 reserved : 4; + u32 owner_bits : 3; + }; + } cqe_info; +}; /* hw data */ + +#define gdma_cqe_owner_bits 3 + +#define gdma_cqe_owner_mask ((1 << gdma_cqe_owner_bits) - 1) + +#define set_arm_bit 1 + +#define gdma_eqe_owner_bits 3 + +union gdma_eqe_info { + u32 as_uint32; + + struct { + u32 type : 8; + u32 reserved1 : 8; + u32 client_id : 2; + u32 reserved2 : 11; + u32 owner_bits : 3; + }; +}; /* hw data */ + +#define gdma_eqe_owner_mask ((1 << gdma_eqe_owner_bits) - 1) +#define initialized_owner_bit(log2_num_entries) (1ul << (log2_num_entries)) + +struct gdma_eqe { + u32 details[gdma_event_data_size / 4]; + u32 eqe_info; +}; /* hw data */ + +#define gdma_reg_db_page_offset 8 +#define gdma_reg_db_page_size 0x10 +#define gdma_reg_shm_offset 0x18 + +struct gdma_posted_wqe_info { + u32 wqe_size_in_bu; +}; + +/* gdma_generate_test_eqe */ +struct gdma_generate_test_event_req { + struct gdma_req_hdr hdr; + u32 queue_index; +}; /* hw data */ + +/* gdma_verify_vf_driver_version */ +enum { + gdma_protocol_v1 = 1, + gdma_protocol_first = gdma_protocol_v1, + gdma_protocol_last = gdma_protocol_v1, +}; + +struct gdma_verify_ver_req { + struct gdma_req_hdr hdr; + + /* mandatory fields required for protocol establishment */ + u64 protocol_ver_min; + u64 protocol_ver_max; + u64 drv_cap_flags1; + u64 drv_cap_flags2; + u64 
drv_cap_flags3; + u64 drv_cap_flags4; + + /* advisory fields */ + u64 drv_ver; + u32 os_type; /* linux = 0x10; windows = 0x20; other = 0x30 */ + u32 reserved; + u32 os_ver_major; + u32 os_ver_minor; + u32 os_ver_build; + u32 os_ver_platform; + u64 reserved_2; + u8 os_ver_str1[128]; + u8 os_ver_str2[128]; + u8 os_ver_str3[128]; + u8 os_ver_str4[128]; +}; /* hw data */ + +struct gdma_verify_ver_resp { + struct gdma_resp_hdr hdr; + u64 gdma_protocol_ver; + u64 pf_cap_flags1; + u64 pf_cap_flags2; + u64 pf_cap_flags3; + u64 pf_cap_flags4; +}; /* hw data */ + +/* gdma_query_max_resources */ +struct gdma_query_max_resources_resp { + struct gdma_resp_hdr hdr; + u32 status; + u32 max_sq; + u32 max_rq; + u32 max_cq; + u32 max_eq; + u32 max_db; + u32 max_mst; + u32 max_cq_mod_ctx; + u32 max_mod_cq; + u32 max_msix; +}; /* hw data */ + +/* gdma_list_devices */ +struct gdma_list_devices_resp { + struct gdma_resp_hdr hdr; + u32 num_of_devs; + u32 reserved; + struct gdma_dev_id devs[64]; +}; /* hw data */ + +/* gdma_register_device */ +struct gdma_register_device_resp { + struct gdma_resp_hdr hdr; + u32 pdid; + u32 gpa_mkey; + u32 db_id; +}; /* hw data */ + +/* gdma_create_queue */ +struct gdma_create_queue_req { + struct gdma_req_hdr hdr; + u32 type; + u32 reserved1; + u32 pdid; + u32 doolbell_id; + u64 gdma_region; + u32 reserved2; + u32 queue_size; + u32 log2_throttle_limit; + u32 eq_pci_msix_index; + u32 cq_mod_ctx_id; + u32 cq_parent_eq_id; + u8 rq_drop_on_overrun; + u8 rq_err_on_wqe_overflow; + u8 rq_chain_rec_wqes; + u8 sq_hw_db; + u32 reserved3; +}; /* hw data */ + +struct gdma_create_queue_resp { + struct gdma_resp_hdr hdr; + u32 queue_index; +}; /* hw data */ + +/* gdma_disable_queue */ +struct gdma_disable_queue_req { + struct gdma_req_hdr hdr; + u32 type; + u32 queue_index; + u32 alloc_res_id_on_creation; +}; /* hw data */ + +/* gdma_create_dma_region */ +struct gdma_create_dma_region_req { + struct gdma_req_hdr hdr; + + /* the total size of the dma region */ + u64 
length; + + /* the offset in the first page */ + u32 offset_in_page; + + /* enum gdma_page_type */ + u32 gdma_page_type; + + /* the total number of pages */ + u32 page_count; + + /* if page_addr_list_len is smaller than page_count, + * the remaining page addresses will be added via the + * message gdma_dma_region_add_pages. + */ + u32 page_addr_list_len; + u64 page_addr_list[]; +}; /* hw data */ + +struct gdma_create_dma_region_resp { + struct gdma_resp_hdr hdr; + u64 gdma_region; +}; /* hw data */ + +/* gdma_dma_region_add_pages */ +struct gdma_dma_region_add_pages_req { + struct gdma_req_hdr hdr; + + u64 gdma_region; + + u32 page_addr_list_len; + u32 reserved3; + + u64 page_addr_list[]; +}; /* hw data */ + +/* gdma_destroy_dma_region */ +struct gdma_destroy_dma_region_req { + struct gdma_req_hdr hdr; + + u64 gdma_region; +}; /* hw data */ + +int mana_gd_verify_vf_version(struct pci_dev *pdev); + +int mana_gd_register_device(struct gdma_dev *gd); +int mana_gd_deregister_device(struct gdma_dev *gd); + +int mana_gd_post_work_request(struct gdma_queue *wq, + const struct gdma_wqe_request *wqe_req, + struct gdma_posted_wqe_info *wqe_info); + +int mana_gd_post_and_ring(struct gdma_queue *queue, + const struct gdma_wqe_request *wqe, + struct gdma_posted_wqe_info *wqe_info); + +int mana_gd_alloc_res_map(u32 res_avail, struct gdma_resource *r); +void mana_gd_free_res_map(struct gdma_resource *r); + +void mana_gd_wq_ring_doorbell(struct gdma_context *gc, + struct gdma_queue *queue); + +int mana_gd_alloc_memory(struct gdma_context *gc, unsigned int length, + struct gdma_mem_info *gmi); + +void mana_gd_free_memory(struct gdma_mem_info *gmi); + +int mana_gd_send_request(struct gdma_context *gc, u32 req_len, const void *req, + u32 resp_len, void *resp); +#endif /* _gdma_h */ diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c --- /dev/null +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c +// 
spdx-license-identifier: gpl-2.0 or bsd-3-clause +/* copyright (c) 2021, microsoft corporation. */ + +#include <linux/module.h> +#include <linux/pci.h> + +#include "mana.h" + +static u32 mana_gd_r32(struct gdma_context *g, u64 offset) +{ + return readl(g->bar0_va + offset); +} + +static u64 mana_gd_r64(struct gdma_context *g, u64 offset) +{ + return readq(g->bar0_va + offset); +} + +static void mana_gd_init_registers(struct pci_dev *pdev) +{ + struct gdma_context *gc = pci_get_drvdata(pdev); + + gc->db_page_size = mana_gd_r32(gc, gdma_reg_db_page_size) & 0xffff; + + gc->db_page_base = gc->bar0_va + + mana_gd_r64(gc, gdma_reg_db_page_offset); + + gc->shm_base = gc->bar0_va + mana_gd_r64(gc, gdma_reg_shm_offset); +} + +static int mana_gd_query_max_resources(struct pci_dev *pdev) +{ + struct gdma_context *gc = pci_get_drvdata(pdev); + struct gdma_query_max_resources_resp resp = {}; + struct gdma_general_req req = {}; + int err; + + mana_gd_init_req_hdr(&req.hdr, gdma_query_max_resources, + sizeof(req), sizeof(resp)); + + err = mana_gd_send_request(gc, sizeof(req), &req, sizeof(resp), &resp); + if (err || resp.hdr.status) { + dev_err(gc->dev, "failed to query resource info: %d, 0x%x ", + err, resp.hdr.status); + return err ? 
err : -eproto; + } + + if (gc->num_msix_usable > resp.max_msix) + gc->num_msix_usable = resp.max_msix; + + if (gc->num_msix_usable <= 1) + return -enospc; + + gc->max_num_queues = num_online_cpus(); + if (gc->max_num_queues > mana_max_num_queues) + gc->max_num_queues = mana_max_num_queues; + + if (gc->max_num_queues > resp.max_eq) + gc->max_num_queues = resp.max_eq; + + if (gc->max_num_queues > resp.max_cq) + gc->max_num_queues = resp.max_cq; + + if (gc->max_num_queues > resp.max_sq) + gc->max_num_queues = resp.max_sq; + + if (gc->max_num_queues > resp.max_rq) + gc->max_num_queues = resp.max_rq; + + return 0; +} + +static int mana_gd_detect_devices(struct pci_dev *pdev) +{ + struct gdma_context *gc = pci_get_drvdata(pdev); + struct gdma_list_devices_resp resp = {}; + struct gdma_general_req req = {}; + struct gdma_dev_id dev; + u32 i, max_num_devs; + u16 dev_type; + int err; + + mana_gd_init_req_hdr(&req.hdr, gdma_list_devices, sizeof(req), + sizeof(resp)); + + err = mana_gd_send_request(gc, sizeof(req), &req, sizeof(resp), &resp); + if (err || resp.hdr.status) { + dev_err(gc->dev, "failed to detect devices: %d, 0x%x ", err, + resp.hdr.status); + return err ? err : -eproto; + } + + max_num_devs = min_t(u32, max_num_gdma_devices, resp.num_of_devs); + + for (i = 0; i < max_num_devs; i++) { + dev = resp.devs[i]; + dev_type = dev.type; + + /* hwc is already detected in mana_hwc_create_channel(). */ + if (dev_type == gdma_device_hwc) + continue; + + if (dev_type == gdma_device_mana) { + gc->mana.gdma_context = gc; + gc->mana.dev_id = dev; + } + } + + return gc->mana.dev_id.type == 0 ? 
-enodev : 0; +} + +int mana_gd_send_request(struct gdma_context *gc, u32 req_len, const void *req, + u32 resp_len, void *resp) +{ + struct hw_channel_context *hwc = gc->hwc.driver_data; + + return mana_hwc_send_request(hwc, req_len, req, resp_len, resp); +} + +int mana_gd_alloc_memory(struct gdma_context *gc, unsigned int length, + struct gdma_mem_info *gmi) +{ + dma_addr_t dma_handle; + void *buf; + + if (length < page_size || !is_power_of_2(length)) + return -einval; + + gmi->dev = gc->dev; + buf = dma_alloc_coherent(gmi->dev, length, &dma_handle, gfp_kernel); + if (!buf) + return -enomem; + + gmi->dma_handle = dma_handle; + gmi->virt_addr = buf; + gmi->length = length; + + return 0; +} + +void mana_gd_free_memory(struct gdma_mem_info *gmi) +{ + dma_free_coherent(gmi->dev, gmi->length, gmi->virt_addr, + gmi->dma_handle); +} + +static int mana_gd_create_hw_eq(struct gdma_context *gc, + struct gdma_queue *queue) +{ + struct gdma_create_queue_resp resp = {}; + struct gdma_create_queue_req req = {}; + int err; + + if (queue->type != gdma_eq) + return -einval; + + mana_gd_init_req_hdr(&req.hdr, gdma_create_queue, + sizeof(req), sizeof(resp)); + + req.hdr.dev_id = queue->gdma_dev->dev_id; + req.type = queue->type; + req.pdid = queue->gdma_dev->pdid; + req.doolbell_id = queue->gdma_dev->doorbell; + req.gdma_region = queue->mem_info.gdma_region; + req.queue_size = queue->queue_size; + req.log2_throttle_limit = queue->eq.log2_throttle_limit; + req.eq_pci_msix_index = queue->eq.msix_index; + + err = mana_gd_send_request(gc, sizeof(req), &req, sizeof(resp), &resp); + if (err || resp.hdr.status) { + dev_err(gc->dev, "failed to create queue: %d, 0x%x ", err, + resp.hdr.status); + return err ? 
err : -eproto; + } + + queue->id = resp.queue_index; + queue->eq.disable_needed = true; + queue->mem_info.gdma_region = gdma_invalid_dma_region; + return 0; +} + +static int mana_gd_disable_queue(struct gdma_queue *queue) +{ + struct gdma_context *gc = queue->gdma_dev->gdma_context; + struct gdma_disable_queue_req req = {}; + struct gdma_general_resp resp = {}; + int err; + + warn_on(queue->type != gdma_eq); + + mana_gd_init_req_hdr(&req.hdr, gdma_disable_queue, + sizeof(req), sizeof(resp)); + + req.hdr.dev_id = queue->gdma_dev->dev_id; + req.type = queue->type; + req.queue_index = queue->id; + req.alloc_res_id_on_creation = 1; + + err = mana_gd_send_request(gc, sizeof(req), &req, sizeof(resp), &resp); + if (err || resp.hdr.status) { + dev_err(gc->dev, "failed to disable queue: %d, 0x%x ", err, + resp.hdr.status); + return err ? err : -eproto; + } + + return 0; +} + +#define doorbell_offset_sq 0x0 +#define doorbell_offset_rq 0x400 +#define doorbell_offset_cq 0x800 +#define doorbell_offset_eq 0xff8 + +static void mana_gd_ring_doorbell(struct gdma_context *gc, u32 db_index, + enum gdma_queue_type q_type, u32 qid, + u32 tail_ptr, u8 num_req) +{ + void __iomem *addr = gc->db_page_base + gc->db_page_size * db_index; + union gdma_doorbell_entry e = {}; + + switch (q_type) { + case gdma_eq: + e.eq.id = qid; + e.eq.tail_ptr = tail_ptr; + e.eq.arm = num_req; + + addr += doorbell_offset_eq; + break; + + case gdma_cq: + e.cq.id = qid; + e.cq.tail_ptr = tail_ptr; + e.cq.arm = num_req; + + addr += doorbell_offset_cq; + break; + + case gdma_rq: + e.rq.id = qid; + e.rq.tail_ptr = tail_ptr; + e.rq.wqe_cnt = num_req; + + addr += doorbell_offset_rq; + break; + + case gdma_sq: + e.sq.id = qid; + e.sq.tail_ptr = tail_ptr; + + addr += doorbell_offset_sq; + break; + + default: + warn_on(1); + return; + } + + /* ensure all writes are done before ring doorbell */ + wmb(); + + writeq(e.as_uint64, addr); +} + +void mana_gd_wq_ring_doorbell(struct gdma_context *gc, struct gdma_queue *queue) 
+{ + mana_gd_ring_doorbell(gc, queue->gdma_dev->doorbell, queue->type, + queue->id, queue->head * gdma_wqe_bu_size, 1); +} + +void mana_gd_arm_cq(struct gdma_queue *cq) +{ + struct gdma_context *gc = cq->gdma_dev->gdma_context; + + u32 num_cqe = cq->queue_size / gdma_cqe_size; + + u32 head = cq->head % (num_cqe << gdma_cqe_owner_bits); + + mana_gd_ring_doorbell(gc, cq->gdma_dev->doorbell, cq->type, cq->id, + head, set_arm_bit); +} + +static void mana_gd_process_eqe(struct gdma_queue *eq) +{ + u32 head = eq->head % (eq->queue_size / gdma_eqe_size); + struct gdma_context *gc = eq->gdma_dev->gdma_context; + struct gdma_eqe *eq_eqe_ptr = eq->queue_mem_ptr; + union gdma_eqe_info eqe_info; + enum gdma_eqe_type type; + struct gdma_event event; + struct gdma_queue *cq; + struct gdma_eqe *eqe; + u32 cq_id; + + eqe = &eq_eqe_ptr[head]; + eqe_info.as_uint32 = eqe->eqe_info; + type = eqe_info.type; + + switch (type) { + case gdma_eqe_completion: + cq_id = eqe->details[0] & 0xffffff; + if (warn_on_once(cq_id >= gc->max_num_cqs)) + break; + + cq = gc->cq_table[cq_id]; + if (warn_on_once(!cq || cq->type != gdma_cq || cq->id != cq_id)) + break; + + if (cq->cq.callback) + cq->cq.callback(cq->cq.context, cq); + + break; + + case gdma_eqe_test_event: + gc->test_event_eq_id = eq->id; + complete(&gc->eq_test_event); + break; + + case gdma_eqe_hwc_init_eq_id_db: + case gdma_eqe_hwc_init_data: + case gdma_eqe_hwc_init_done: + if (!eq->eq.callback) + break; + + event.type = type; + memcpy(&event.details, &eqe->details, gdma_event_data_size); + eq->eq.callback(eq->eq.context, eq, &event); + break; + + default: + break; + } +} + +static void mana_gd_process_eq_events(void *arg) +{ + u32 owner_bits, new_bits, old_bits; + union gdma_eqe_info eqe_info; + struct gdma_eqe *eq_eqe_ptr; + struct gdma_queue *eq = arg; + struct gdma_context *gc; + struct gdma_eqe *eqe; + unsigned int arm_bit; + u32 head, num_eqe; + int i; + + gc = eq->gdma_dev->gdma_context; + + num_eqe = eq->queue_size / 
gdma_eqe_size; + eq_eqe_ptr = eq->queue_mem_ptr; + + /* process up to 5 eqes at a time, and update the hw head. */ + for (i = 0; i < 5; i++) { + eqe = &eq_eqe_ptr[eq->head % num_eqe]; + eqe_info.as_uint32 = eqe->eqe_info; + owner_bits = eqe_info.owner_bits; + + old_bits = (eq->head / num_eqe - 1) & gdma_eqe_owner_mask; + /* no more entries */ + if (owner_bits == old_bits) + break; + + new_bits = (eq->head / num_eqe) & gdma_eqe_owner_mask; + if (owner_bits != new_bits) { + dev_err(gc->dev, "eq %d: overflow detected ", eq->id); + break; + } + + mana_gd_process_eqe(eq); + + eq->head++; + } + + /* always rearm the eq for hwc. for mana, rearm it when napi is done. */ + if (mana_gd_is_hwc(eq->gdma_dev)) { + arm_bit = set_arm_bit; + } else if (eq->eq.work_done < eq->eq.budget && + napi_complete_done(&eq->eq.napi, eq->eq.work_done)) { + arm_bit = set_arm_bit; + } else { + arm_bit = 0; + } + + head = eq->head % (num_eqe << gdma_eqe_owner_bits); + + mana_gd_ring_doorbell(gc, eq->gdma_dev->doorbell, eq->type, eq->id, + head, arm_bit); +} + +static int mana_poll(struct napi_struct *napi, int budget) +{ + struct gdma_queue *eq = container_of(napi, struct gdma_queue, eq.napi); + + eq->eq.work_done = 0; + eq->eq.budget = budget; + + mana_gd_process_eq_events(eq); + + return min(eq->eq.work_done, budget); +} + +static void mana_gd_schedule_napi(void *arg) +{ + struct gdma_queue *eq = arg; + struct napi_struct *napi; + + napi = &eq->eq.napi; + napi_schedule_irqoff(napi); +} + +static int mana_gd_register_irq(struct gdma_queue *queue, + const struct gdma_queue_spec *spec) +{ + struct gdma_dev *gd = queue->gdma_dev; + bool is_mana = mana_gd_is_mana(gd); + struct gdma_irq_context *gic; + struct gdma_context *gc; + struct gdma_resource *r; + unsigned int msi_index; + unsigned long flags; + int err; + + gc = gd->gdma_context; + r = &gc->msix_resource; + + spin_lock_irqsave(&r->lock, flags); + + msi_index = find_first_zero_bit(r->map, r->size); + if (msi_index >= r->size) { + err = 
-enospc; + } else { + bitmap_set(r->map, msi_index, 1); + queue->eq.msix_index = msi_index; + err = 0; + } + + spin_unlock_irqrestore(&r->lock, flags); + + if (err) + return err; + + warn_on(msi_index >= gc->num_msix_usable); + + gic = &gc->irq_contexts[msi_index]; + + if (is_mana) { + netif_napi_add(spec->eq.ndev, &queue->eq.napi, mana_poll, + napi_poll_weight); + napi_enable(&queue->eq.napi); + } + + warn_on(gic->handler || gic->arg); + + gic->arg = queue; + + if (is_mana) + gic->handler = mana_gd_schedule_napi; + else + gic->handler = mana_gd_process_eq_events; + + return 0; +} + +static void mana_gd_deregiser_irq(struct gdma_queue *queue) +{ + struct gdma_dev *gd = queue->gdma_dev; + struct gdma_irq_context *gic; + struct gdma_context *gc; + struct gdma_resource *r; + unsigned int msix_index; + unsigned long flags; + + gc = gd->gdma_context; + r = &gc->msix_resource; + + /* at most num_online_cpus() + 1 interrupts are used. */ + msix_index = queue->eq.msix_index; + if (warn_on(msix_index >= gc->num_msix_usable)) + return; + + gic = &gc->irq_contexts[msix_index]; + gic->handler = null; + gic->arg = null; + + spin_lock_irqsave(&r->lock, flags); + bitmap_clear(r->map, msix_index, 1); + spin_unlock_irqrestore(&r->lock, flags); + + queue->eq.msix_index = invalid_pci_msix_index; +} + +int mana_gd_test_eq(struct gdma_context *gc, struct gdma_queue *eq) +{ + struct gdma_generate_test_event_req req = {}; + struct gdma_general_resp resp = {}; + struct device *dev = gc->dev; + int err; + + mutex_lock(&gc->eq_test_event_mutex); + + init_completion(&gc->eq_test_event); + gc->test_event_eq_id = invalid_queue_id; + + mana_gd_init_req_hdr(&req.hdr, gdma_generate_test_eqe, + sizeof(req), sizeof(resp)); + + req.hdr.dev_id = eq->gdma_dev->dev_id; + req.queue_index = eq->id; + + err = mana_gd_send_request(gc, sizeof(req), &req, sizeof(resp), &resp); + if (err) { + dev_err(dev, "test_eq failed: %d ", err); + goto out; + } + + err = -eproto; + + if (resp.hdr.status) { + dev_err(dev, 
"test_eq failed: 0x%x\n", resp.hdr.status);
+		goto out;
+	}
+
+	if (!wait_for_completion_timeout(&gc->eq_test_event, 30 * HZ)) {
+		dev_err(dev, "test_eq timed out on queue %d\n", eq->id);
+		goto out;
+	}
+
+	if (eq->id != gc->test_event_eq_id) {
+		dev_err(dev, "test_eq got an event on wrong queue %d (%d)\n",
+			gc->test_event_eq_id, eq->id);
+		goto out;
+	}
+
+	err = 0;
+out:
+	mutex_unlock(&gc->eq_test_event_mutex);
+	return err;
+}
+
+static void mana_gd_destroy_eq(struct gdma_context *gc, bool flush_events,
+			       struct gdma_queue *queue)
+{
+	int err;
+
+	if (flush_events) {
+		err = mana_gd_test_eq(gc, queue);
+		if (err)
+			dev_warn(gc->dev, "Failed to flush EQ: %d\n", err);
+	}
+
+	mana_gd_deregiser_irq(queue);
+
+	if (mana_gd_is_mana(queue->gdma_dev)) {
+		napi_disable(&queue->eq.napi);
+		netif_napi_del(&queue->eq.napi);
+	}
+
+	if (queue->eq.disable_needed)
+		mana_gd_disable_queue(queue);
+}
+
+static int mana_gd_create_eq(struct gdma_dev *gd,
+			     const struct gdma_queue_spec *spec,
+			     bool create_hwq, struct gdma_queue *queue)
+{
+	struct gdma_context *gc = gd->gdma_context;
+	struct device *dev = gc->dev;
+	u32 log2_num_entries;
+	int err;
+
+	queue->eq.msix_index = INVALID_PCI_MSIX_INDEX;
+
+	log2_num_entries = ilog2(queue->queue_size / GDMA_EQE_SIZE);
+
+	if (spec->eq.log2_throttle_limit > log2_num_entries) {
+		dev_err(dev, "EQ throttling limit (%lu) > maximum EQE (%u)\n",
+			spec->eq.log2_throttle_limit, log2_num_entries);
+		return -EINVAL;
+	}
+
+	err = mana_gd_register_irq(queue, spec);
+	if (err) {
+		dev_err(dev, "Failed to register irq: %d\n", err);
+		return err;
+	}
+
+	queue->eq.callback = spec->eq.callback;
+	queue->eq.context = spec->eq.context;
+	queue->head |= INITIALIZED_OWNER_BIT(log2_num_entries);
+	queue->eq.log2_throttle_limit = spec->eq.log2_throttle_limit ?: 1;
+
+	if (create_hwq) {
+		err = mana_gd_create_hw_eq(gc, queue);
+		if (err)
+			goto out;
+
+		err = mana_gd_test_eq(gc, queue);
+		if (err)
+			goto out;
+	}
+
+	return 0;
+out:
+	dev_err(dev, "Failed
to create eq: %d ", err); + mana_gd_destroy_eq(gc, false, queue); + return err; +} + +static void mana_gd_create_cq(const struct gdma_queue_spec *spec, + struct gdma_queue *queue) +{ + u32 log2_num_entries = ilog2(spec->queue_size / gdma_cqe_size); + + queue->head |= initialized_owner_bit(log2_num_entries); + queue->cq.parent = spec->cq.parent_eq; + queue->cq.context = spec->cq.context; + queue->cq.callback = spec->cq.callback; +} + +static void mana_gd_destroy_cq(struct gdma_context *gc, + struct gdma_queue *queue) +{ + u32 id = queue->id; + + if (id >= gc->max_num_cqs) + return; + + if (!gc->cq_table[id]) + return; + + gc->cq_table[id] = null; +} + +int mana_gd_create_hwc_queue(struct gdma_dev *gd, + const struct gdma_queue_spec *spec, + struct gdma_queue **queue_ptr) +{ + struct gdma_context *gc = gd->gdma_context; + struct gdma_mem_info *gmi; + struct gdma_queue *queue; + int err; + + queue = kzalloc(sizeof(*queue), gfp_kernel); + if (!queue) + return -enomem; + + gmi = &queue->mem_info; + err = mana_gd_alloc_memory(gc, spec->queue_size, gmi); + if (err) + goto free_q; + + queue->head = 0; + queue->tail = 0; + queue->queue_mem_ptr = gmi->virt_addr; + queue->queue_size = spec->queue_size; + queue->monitor_avl_buf = spec->monitor_avl_buf; + queue->type = spec->type; + queue->gdma_dev = gd; + + if (spec->type == gdma_eq) + err = mana_gd_create_eq(gd, spec, false, queue); + else if (spec->type == gdma_cq) + mana_gd_create_cq(spec, queue); + + if (err) + goto out; + + *queue_ptr = queue; + return 0; +out: + mana_gd_free_memory(gmi); +free_q: + kfree(queue); + return err; +} + +static void mana_gd_destroy_dma_region(struct gdma_context *gc, u64 gdma_region) +{ + struct gdma_destroy_dma_region_req req = {}; + struct gdma_general_resp resp = {}; + int err; + + if (gdma_region == gdma_invalid_dma_region) + return; + + mana_gd_init_req_hdr(&req.hdr, gdma_destroy_dma_region, sizeof(req), + sizeof(resp)); + req.gdma_region = gdma_region; + + err = mana_gd_send_request(gc, 
sizeof(req), &req, sizeof(resp), &resp); + if (err || resp.hdr.status) + dev_err(gc->dev, "failed to destroy dma region: %d, 0x%x ", + err, resp.hdr.status); +} + +static int mana_gd_create_dma_region(struct gdma_dev *gd, + struct gdma_mem_info *gmi) +{ + unsigned int num_page = gmi->length / page_size; + struct gdma_create_dma_region_req *req = null; + struct gdma_create_dma_region_resp resp = {}; + struct gdma_context *gc = gd->gdma_context; + struct hw_channel_context *hwc; + u32 length = gmi->length; + u32 req_msg_size; + int err; + int i; + + if (length < page_size || !is_power_of_2(length)) + return -einval; + + if (offset_in_page(gmi->virt_addr) != 0) + return -einval; + + hwc = gc->hwc.driver_data; + req_msg_size = sizeof(*req) + num_page * sizeof(u64); + if (req_msg_size > hwc->max_req_msg_size) + return -einval; + + req = kzalloc(req_msg_size, gfp_kernel); + if (!req) + return -enomem; + + mana_gd_init_req_hdr(&req->hdr, gdma_create_dma_region, + req_msg_size, sizeof(resp)); + req->length = length; + req->offset_in_page = 0; + req->gdma_page_type = gdma_page_type_4k; + req->page_count = num_page; + req->page_addr_list_len = num_page; + + for (i = 0; i < num_page; i++) + req->page_addr_list[i] = gmi->dma_handle + i * page_size; + + err = mana_gd_send_request(gc, req_msg_size, req, sizeof(resp), &resp); + if (err) + goto out; + + if (resp.hdr.status || resp.gdma_region == gdma_invalid_dma_region) { + dev_err(gc->dev, "failed to create dma region: 0x%x ", + resp.hdr.status); + err = -eproto; + goto out; + } + + gmi->gdma_region = resp.gdma_region; +out: + kfree(req); + return err; +} + +int mana_gd_create_mana_eq(struct gdma_dev *gd, + const struct gdma_queue_spec *spec, + struct gdma_queue **queue_ptr) +{ + struct gdma_context *gc = gd->gdma_context; + struct gdma_mem_info *gmi; + struct gdma_queue *queue; + int err; + + if (spec->type != gdma_eq) + return -einval; + + queue = kzalloc(sizeof(*queue), gfp_kernel); + if (!queue) + return -enomem; + + gmi = 
&queue->mem_info; + err = mana_gd_alloc_memory(gc, spec->queue_size, gmi); + if (err) + goto free_q; + + err = mana_gd_create_dma_region(gd, gmi); + if (err) + goto out; + + queue->head = 0; + queue->tail = 0; + queue->queue_mem_ptr = gmi->virt_addr; + queue->queue_size = spec->queue_size; + queue->monitor_avl_buf = spec->monitor_avl_buf; + queue->type = spec->type; + queue->gdma_dev = gd; + + err = mana_gd_create_eq(gd, spec, true, queue); + if (err) + goto out; + + *queue_ptr = queue; + return 0; +out: + mana_gd_free_memory(gmi); +free_q: + kfree(queue); + return err; +} + +int mana_gd_create_mana_wq_cq(struct gdma_dev *gd, + const struct gdma_queue_spec *spec, + struct gdma_queue **queue_ptr) +{ + struct gdma_context *gc = gd->gdma_context; + struct gdma_mem_info *gmi; + struct gdma_queue *queue; + int err; + + if (spec->type != gdma_cq && spec->type != gdma_sq && + spec->type != gdma_rq) + return -einval; + + queue = kzalloc(sizeof(*queue), gfp_kernel); + if (!queue) + return -enomem; + + gmi = &queue->mem_info; + err = mana_gd_alloc_memory(gc, spec->queue_size, gmi); + if (err) + goto free_q; + + err = mana_gd_create_dma_region(gd, gmi); + if (err) + goto out; + + queue->head = 0; + queue->tail = 0; + queue->queue_mem_ptr = gmi->virt_addr; + queue->queue_size = spec->queue_size; + queue->monitor_avl_buf = spec->monitor_avl_buf; + queue->type = spec->type; + queue->gdma_dev = gd; + + if (spec->type == gdma_cq) + mana_gd_create_cq(spec, queue); + + *queue_ptr = queue; + return 0; +out: + mana_gd_free_memory(gmi); +free_q: + kfree(queue); + return err; +} + +void mana_gd_destroy_queue(struct gdma_context *gc, struct gdma_queue *queue) +{ + struct gdma_mem_info *gmi = &queue->mem_info; + + switch (queue->type) { + case gdma_eq: + mana_gd_destroy_eq(gc, queue->eq.disable_needed, queue); + break; + + case gdma_cq: + mana_gd_destroy_cq(gc, queue); + break; + + case gdma_rq: + break; + + case gdma_sq: + break; + + default: + dev_err(gc->dev, "can't destroy unknown 
queue: type=%d ", + queue->type); + return; + } + + mana_gd_destroy_dma_region(gc, gmi->gdma_region); + mana_gd_free_memory(gmi); + kfree(queue); +} + +int mana_gd_verify_vf_version(struct pci_dev *pdev) +{ + struct gdma_context *gc = pci_get_drvdata(pdev); + struct gdma_verify_ver_resp resp = {}; + struct gdma_verify_ver_req req = {}; + int err; + + mana_gd_init_req_hdr(&req.hdr, gdma_verify_vf_driver_version, + sizeof(req), sizeof(resp)); + + req.protocol_ver_min = gdma_protocol_first; + req.protocol_ver_max = gdma_protocol_last; + + err = mana_gd_send_request(gc, sizeof(req), &req, sizeof(resp), &resp); + if (err || resp.hdr.status) { + dev_err(gc->dev, "vfverifyversionoutput: %d, status=0x%x ", + err, resp.hdr.status); + return err ? err : -eproto; + } + + return 0; +} + +int mana_gd_register_device(struct gdma_dev *gd) +{ + struct gdma_context *gc = gd->gdma_context; + struct gdma_register_device_resp resp = {}; + struct gdma_general_req req = {}; + int err; + + gd->pdid = invalid_pdid; + gd->doorbell = invalid_doorbell; + gd->gpa_mkey = invalid_mem_key; + + mana_gd_init_req_hdr(&req.hdr, gdma_register_device, sizeof(req), + sizeof(resp)); + + req.hdr.dev_id = gd->dev_id; + + err = mana_gd_send_request(gc, sizeof(req), &req, sizeof(resp), &resp); + if (err || resp.hdr.status) { + dev_err(gc->dev, "gdma_register_device_resp failed: %d, 0x%x ", + err, resp.hdr.status); + return err ? 
err : -eproto; + } + + gd->pdid = resp.pdid; + gd->gpa_mkey = resp.gpa_mkey; + gd->doorbell = resp.db_id; + + return 0; +} + +int mana_gd_deregister_device(struct gdma_dev *gd) +{ + struct gdma_context *gc = gd->gdma_context; + struct gdma_general_resp resp = {}; + struct gdma_general_req req = {}; + int err; + + if (gd->pdid == invalid_pdid) + return -einval; + + mana_gd_init_req_hdr(&req.hdr, gdma_deregister_device, sizeof(req), + sizeof(resp)); + + req.hdr.dev_id = gd->dev_id; + + err = mana_gd_send_request(gc, sizeof(req), &req, sizeof(resp), &resp); + if (err || resp.hdr.status) { + dev_err(gc->dev, "failed to deregister device: %d, 0x%x ", + err, resp.hdr.status); + if (!err) + err = -eproto; + } + + gd->pdid = invalid_pdid; + gd->doorbell = invalid_doorbell; + gd->gpa_mkey = invalid_mem_key; + + return err; +} + +u32 mana_gd_wq_avail_space(struct gdma_queue *wq) +{ + u32 used_space = (wq->head - wq->tail) * gdma_wqe_bu_size; + u32 wq_size = wq->queue_size; + + warn_on_once(used_space > wq_size); + + return wq_size - used_space; +} + +u8 *mana_gd_get_wqe_ptr(const struct gdma_queue *wq, u32 wqe_offset) +{ + u32 offset = (wqe_offset * gdma_wqe_bu_size) & (wq->queue_size - 1); + + warn_on_once((offset + gdma_wqe_bu_size) > wq->queue_size); + + return wq->queue_mem_ptr + offset; +} + +static u32 mana_gd_write_client_oob(const struct gdma_wqe_request *wqe_req, + enum gdma_queue_type q_type, + u32 client_oob_size, u32 sgl_data_size, + u8 *wqe_ptr) +{ + bool oob_in_sgl = !!(wqe_req->flags & gdma_wr_oob_in_sgl); + bool pad_data = !!(wqe_req->flags & gdma_wr_pad_by_sge0); + struct gdma_wqe *header = (struct gdma_wqe *)wqe_ptr; + u8 *ptr; + + memset(header, 0, sizeof(struct gdma_wqe)); + header->num_sge = wqe_req->num_sge; + header->inline_oob_size_div4 = client_oob_size / sizeof(u32); + + if (oob_in_sgl) { + warn_on_once(!pad_data || wqe_req->num_sge < 2); + + header->client_oob_in_sgl = 1; + + if (pad_data) + header->last_vbytes = wqe_req->sgl[0].size; + } + + if 
(q_type == gdma_sq) + header->client_data_unit = wqe_req->client_data_unit; + + /* the size of gdma_wqe + client_oob_size must be less than or equal + * to one basic unit (i.e. 32 bytes), so the pointer can't go beyond + * the queue memory buffer boundary. + */ + ptr = wqe_ptr + sizeof(header); + + if (wqe_req->inline_oob_data && wqe_req->inline_oob_size > 0) { + memcpy(ptr, wqe_req->inline_oob_data, wqe_req->inline_oob_size); + + if (client_oob_size > wqe_req->inline_oob_size) + memset(ptr + wqe_req->inline_oob_size, 0, + client_oob_size - wqe_req->inline_oob_size); + } + + return sizeof(header) + client_oob_size; +} + +static void mana_gd_write_sgl(struct gdma_queue *wq, u8 *wqe_ptr, + const struct gdma_wqe_request *wqe_req) +{ + u32 sgl_size = sizeof(struct gdma_sge) * wqe_req->num_sge; + const u8 *address = (u8 *)wqe_req->sgl; + u8 *base_ptr, *end_ptr; + u32 size_to_end; + + base_ptr = wq->queue_mem_ptr; + end_ptr = base_ptr + wq->queue_size; + size_to_end = (u32)(end_ptr - wqe_ptr); + + if (size_to_end < sgl_size) { + memcpy(wqe_ptr, address, size_to_end); + + wqe_ptr = base_ptr; + address += size_to_end; + sgl_size -= size_to_end; + } + + memcpy(wqe_ptr, address, sgl_size); +} + +int mana_gd_post_work_request(struct gdma_queue *wq, + const struct gdma_wqe_request *wqe_req, + struct gdma_posted_wqe_info *wqe_info) +{ + u32 client_oob_size = wqe_req->inline_oob_size; + struct gdma_context *gc; + u32 sgl_data_size; + u32 max_wqe_size; + u32 wqe_size; + u8 *wqe_ptr; + + if (wqe_req->num_sge == 0) + return -einval; + + if (wq->type == gdma_rq) { + if (client_oob_size != 0) + return -einval; + + client_oob_size = inline_oob_small_size; + + max_wqe_size = gdma_max_rqe_size; + } else { + if (client_oob_size != inline_oob_small_size && + client_oob_size != inline_oob_large_size) + return -einval; + + max_wqe_size = gdma_max_sqe_size; + } + + sgl_data_size = sizeof(struct gdma_sge) * wqe_req->num_sge; + wqe_size = align(sizeof(struct gdma_wqe) + client_oob_size + + 
sgl_data_size, gdma_wqe_bu_size); + if (wqe_size > max_wqe_size) + return -einval; + + if (wq->monitor_avl_buf && wqe_size > mana_gd_wq_avail_space(wq)) { + gc = wq->gdma_dev->gdma_context; + dev_err(gc->dev, "unsuccessful flow control! "); + return -enospc; + } + + if (wqe_info) + wqe_info->wqe_size_in_bu = wqe_size / gdma_wqe_bu_size; + + wqe_ptr = mana_gd_get_wqe_ptr(wq, wq->head); + wqe_ptr += mana_gd_write_client_oob(wqe_req, wq->type, client_oob_size, + sgl_data_size, wqe_ptr); + if (wqe_ptr >= (u8 *)wq->queue_mem_ptr + wq->queue_size) + wqe_ptr -= wq->queue_size; + + mana_gd_write_sgl(wq, wqe_ptr, wqe_req); + + wq->head += wqe_size / gdma_wqe_bu_size; + + return 0; +} + +int mana_gd_post_and_ring(struct gdma_queue *queue, + const struct gdma_wqe_request *wqe_req, + struct gdma_posted_wqe_info *wqe_info) +{ + struct gdma_context *gc = queue->gdma_dev->gdma_context; + int err; + + err = mana_gd_post_work_request(queue, wqe_req, wqe_info); + if (err) + return err; + + mana_gd_wq_ring_doorbell(gc, queue); + + return 0; +} + +static int mana_gd_read_cqe(struct gdma_queue *cq, struct gdma_comp *comp) +{ + unsigned int num_cqe = cq->queue_size / sizeof(struct gdma_cqe); + struct gdma_cqe *cq_cqe = cq->queue_mem_ptr; + u32 owner_bits, new_bits, old_bits; + struct gdma_cqe *cqe; + + cqe = &cq_cqe[cq->head % num_cqe]; + owner_bits = cqe->cqe_info.owner_bits; + + old_bits = (cq->head / num_cqe - 1) & gdma_cqe_owner_mask; + /* return 0 if no more entries. */ + if (owner_bits == old_bits) + return 0; + + new_bits = (cq->head / num_cqe) & gdma_cqe_owner_mask; + /* return -1 if overflow detected. 
*/ + if (owner_bits != new_bits) + return -1; + + comp->wq_num = cqe->cqe_info.wq_num; + comp->is_sq = cqe->cqe_info.is_sq; + memcpy(comp->cqe_data, cqe->cqe_data, gdma_comp_data_size); + + return 1; +} + +int mana_gd_poll_cq(struct gdma_queue *cq, struct gdma_comp *comp, int num_cqe) +{ + int cqe_idx; + int ret; + + for (cqe_idx = 0; cqe_idx < num_cqe; cqe_idx++) { + ret = mana_gd_read_cqe(cq, &comp[cqe_idx]); + + if (ret < 0) { + cq->head -= cqe_idx; + return ret; + } + + if (ret == 0) + break; + + cq->head++; + } + + return cqe_idx; +} + +static irqreturn_t mana_gd_intr(int irq, void *arg) +{ + struct gdma_irq_context *gic = arg; + + if (gic->handler) + gic->handler(gic->arg); + + return irq_handled; +} + +int mana_gd_alloc_res_map(u32 res_avail, struct gdma_resource *r) +{ + r->map = bitmap_zalloc(res_avail, gfp_kernel); + if (!r->map) + return -enomem; + + r->size = res_avail; + spin_lock_init(&r->lock); + + return 0; +} + +void mana_gd_free_res_map(struct gdma_resource *r) +{ + bitmap_free(r->map); + r->map = null; + r->size = 0; +} + +static int mana_gd_setup_irqs(struct pci_dev *pdev) +{ + unsigned int max_queues_per_port = num_online_cpus(); + struct gdma_context *gc = pci_get_drvdata(pdev); + struct gdma_irq_context *gic; + unsigned int max_irqs; + int nvec, irq; + int err, i, j; + + if (max_queues_per_port > mana_max_num_queues) + max_queues_per_port = mana_max_num_queues; + + max_irqs = max_queues_per_port * max_ports_in_mana_dev; + + /* need 1 interrupt for the hardware communication channel (hwc) */ + max_irqs++; + + nvec = pci_alloc_irq_vectors(pdev, 2, max_irqs, pci_irq_msix); + if (nvec < 0) + return nvec; + + gc->irq_contexts = kcalloc(nvec, sizeof(struct gdma_irq_context), + gfp_kernel); + if (!gc->irq_contexts) { + err = -enomem; + goto free_irq_vector; + } + + for (i = 0; i < nvec; i++) { + gic = &gc->irq_contexts[i]; + gic->handler = null; + gic->arg = null; + + irq = pci_irq_vector(pdev, i); + if (irq < 0) { + err = irq; + goto free_irq; + } 
+ + err = request_irq(irq, mana_gd_intr, 0, "mana_intr", gic); + if (err) + goto free_irq; + } + + err = mana_gd_alloc_res_map(nvec, &gc->msix_resource); + if (err) + goto free_irq; + + gc->max_num_msix = nvec; + gc->num_msix_usable = nvec; + + return 0; + +free_irq: + for (j = i - 1; j >= 0; j--) { + irq = pci_irq_vector(pdev, j); + gic = &gc->irq_contexts[j]; + free_irq(irq, gic); + } + + kfree(gc->irq_contexts); + gc->irq_contexts = null; +free_irq_vector: + pci_free_irq_vectors(pdev); + return err; +} + +static void mana_gd_remove_irqs(struct pci_dev *pdev) +{ + struct gdma_context *gc = pci_get_drvdata(pdev); + struct gdma_irq_context *gic; + int irq, i; + + if (gc->max_num_msix < 1) + return; + + mana_gd_free_res_map(&gc->msix_resource); + + for (i = 0; i < gc->max_num_msix; i++) { + irq = pci_irq_vector(pdev, i); + if (irq < 0) + continue; + + gic = &gc->irq_contexts[i]; + free_irq(irq, gic); + } + + pci_free_irq_vectors(pdev); + + gc->max_num_msix = 0; + gc->num_msix_usable = 0; + kfree(gc->irq_contexts); + gc->irq_contexts = null; +} + +static int mana_gd_probe(struct pci_dev *pdev, const struct pci_device_id *ent) +{ + struct gdma_context *gc; + void __iomem *bar0_va; + int bar = 0; + int err; + + err = pci_enable_device(pdev); + if (err) + return -enxio; + + pci_set_master(pdev); + + err = pci_request_regions(pdev, "mana"); + if (err) + goto disable_dev; + + err = dma_set_mask_and_coherent(&pdev->dev, dma_bit_mask(64)); + if (err) + goto release_region; + + err = -enomem; + gc = vzalloc(sizeof(*gc)); + if (!gc) + goto release_region; + + bar0_va = pci_iomap(pdev, bar, 0); + if (!bar0_va) + goto free_gc; + + gc->bar0_va = bar0_va; + gc->dev = &pdev->dev; + + pci_set_drvdata(pdev, gc); + + mana_gd_init_registers(pdev); + + mana_smc_init(&gc->shm_channel, gc->dev, gc->shm_base); + + err = mana_gd_setup_irqs(pdev); + if (err) + goto unmap_bar; + + mutex_init(&gc->eq_test_event_mutex); + + err = mana_hwc_create_channel(gc); + if (err) + goto remove_irq; + + 
err = mana_gd_verify_vf_version(pdev); + if (err) + goto remove_irq; + + err = mana_gd_query_max_resources(pdev); + if (err) + goto remove_irq; + + err = mana_gd_detect_devices(pdev); + if (err) + goto remove_irq; + + err = mana_probe(&gc->mana); + if (err) + goto clean_up_gdma; + + return 0; + +clean_up_gdma: + mana_hwc_destroy_channel(gc); + vfree(gc->cq_table); + gc->cq_table = null; +remove_irq: + mana_gd_remove_irqs(pdev); +unmap_bar: + pci_iounmap(pdev, bar0_va); +free_gc: + vfree(gc); +release_region: + pci_release_regions(pdev); +disable_dev: + pci_clear_master(pdev); + pci_disable_device(pdev); + dev_err(&pdev->dev, "gdma probe failed: err = %d ", err); + return err; +} + +static void mana_gd_remove(struct pci_dev *pdev) +{ + struct gdma_context *gc = pci_get_drvdata(pdev); + + mana_remove(&gc->mana); + + mana_hwc_destroy_channel(gc); + vfree(gc->cq_table); + gc->cq_table = null; + + mana_gd_remove_irqs(pdev); + + pci_iounmap(pdev, gc->bar0_va); + + vfree(gc); + + pci_release_regions(pdev); + pci_clear_master(pdev); + pci_disable_device(pdev); +} + +#ifndef pci_vendor_id_microsoft +#define pci_vendor_id_microsoft 0x1414 +#endif + +static const struct pci_device_id mana_id_table[] = { + { pci_device(pci_vendor_id_microsoft, 0x00ba) }, + { } +}; + +static struct pci_driver mana_driver = { + .name = "mana", + .id_table = mana_id_table, + .probe = mana_gd_probe, + .remove = mana_gd_remove, +}; + +module_pci_driver(mana_driver); + +module_device_table(pci, mana_id_table); + +module_license("dual bsd/gpl"); +module_description("microsoft azure network adapter driver"); diff --git a/drivers/net/ethernet/microsoft/mana/hw_channel.c b/drivers/net/ethernet/microsoft/mana/hw_channel.c --- /dev/null +++ b/drivers/net/ethernet/microsoft/mana/hw_channel.c +// spdx-license-identifier: gpl-2.0 or bsd-3-clause +/* copyright (c) 2021, microsoft corporation. 
*/ + +#include "gdma.h" +#include "hw_channel.h" + +static int mana_hwc_get_msg_index(struct hw_channel_context *hwc, u16 *msg_id) +{ + struct gdma_resource *r = &hwc->inflight_msg_res; + unsigned long flags; + u32 index; + + down(&hwc->sema); + + spin_lock_irqsave(&r->lock, flags); + + index = find_first_zero_bit(hwc->inflight_msg_res.map, + hwc->inflight_msg_res.size); + + bitmap_set(hwc->inflight_msg_res.map, index, 1); + + spin_unlock_irqrestore(&r->lock, flags); + + *msg_id = index; + + return 0; +} + +static void mana_hwc_put_msg_index(struct hw_channel_context *hwc, u16 msg_id) +{ + struct gdma_resource *r = &hwc->inflight_msg_res; + unsigned long flags; + + spin_lock_irqsave(&r->lock, flags); + bitmap_clear(hwc->inflight_msg_res.map, msg_id, 1); + spin_unlock_irqrestore(&r->lock, flags); + + up(&hwc->sema); +} + +static int mana_hwc_verify_resp_msg(const struct hwc_caller_ctx *caller_ctx, + const struct gdma_resp_hdr *resp_msg, + u32 resp_len) +{ + if (resp_len < sizeof(*resp_msg)) + return -eproto; + + if (resp_len > caller_ctx->output_buflen) + return -eproto; + + return 0; +} + +static void mana_hwc_handle_resp(struct hw_channel_context *hwc, u32 resp_len, + const struct gdma_resp_hdr *resp_msg) +{ + struct hwc_caller_ctx *ctx; + int err = -eproto; + + if (!test_bit(resp_msg->response.hwc_msg_id, + hwc->inflight_msg_res.map)) { + dev_err(hwc->dev, "hwc_rx: invalid msg_id = %u ", + resp_msg->response.hwc_msg_id); + return; + } + + ctx = hwc->caller_ctx + resp_msg->response.hwc_msg_id; + err = mana_hwc_verify_resp_msg(ctx, resp_msg, resp_len); + if (err) + goto out; + + ctx->status_code = resp_msg->status; + + memcpy(ctx->output_buf, resp_msg, resp_len); +out: + ctx->error = err; + complete(&ctx->comp_event); +} + +static int mana_hwc_post_rx_wqe(const struct hwc_wq *hwc_rxq, + struct hwc_work_request *req) +{ + struct device *dev = hwc_rxq->hwc->dev; + struct gdma_sge *sge; + int err; + + sge = &req->sge; + sge->address = (u64)req->buf_sge_addr; + 
sge->mem_key = hwc_rxq->msg_buf->gpa_mkey; + sge->size = req->buf_len; + + memset(&req->wqe_req, 0, sizeof(struct gdma_wqe_request)); + req->wqe_req.sgl = sge; + req->wqe_req.num_sge = 1; + req->wqe_req.client_data_unit = 0; + + err = mana_gd_post_and_ring(hwc_rxq->gdma_wq, &req->wqe_req, null); + if (err) + dev_err(dev, "failed to post wqe on hwc rq: %d ", err); + return err; +} + +static void mana_hwc_init_event_handler(void *ctx, struct gdma_queue *q_self, + struct gdma_event *event) +{ + struct hw_channel_context *hwc = ctx; + struct gdma_dev *gd = hwc->gdma_dev; + union hwc_init_type_data type_data; + union hwc_init_eq_id_db eq_db; + u32 type, val; + + switch (event->type) { + case gdma_eqe_hwc_init_eq_id_db: + eq_db.as_uint32 = event->details[0]; + hwc->cq->gdma_eq->id = eq_db.eq_id; + gd->doorbell = eq_db.doorbell; + break; + + case gdma_eqe_hwc_init_data: + type_data.as_uint32 = event->details[0]; + type = type_data.type; + val = type_data.value; + + switch (type) { + case hwc_init_data_cqid: + hwc->cq->gdma_cq->id = val; + break; + + case hwc_init_data_rqid: + hwc->rxq->gdma_wq->id = val; + break; + + case hwc_init_data_sqid: + hwc->txq->gdma_wq->id = val; + break; + + case hwc_init_data_queue_depth: + hwc->hwc_init_q_depth_max = (u16)val; + break; + + case hwc_init_data_max_request: + hwc->hwc_init_max_req_msg_size = val; + break; + + case hwc_init_data_max_response: + hwc->hwc_init_max_resp_msg_size = val; + break; + + case hwc_init_data_max_num_cqs: + gd->gdma_context->max_num_cqs = val; + break; + + case hwc_init_data_pdid: + hwc->gdma_dev->pdid = val; + break; + + case hwc_init_data_gpa_mkey: + hwc->rxq->msg_buf->gpa_mkey = val; + hwc->txq->msg_buf->gpa_mkey = val; + break; + } + + break; + + case gdma_eqe_hwc_init_done: + complete(&hwc->hwc_init_eqe_comp); + break; + + default: + /* ignore unknown events, which should never happen. 
*/
+		break;
+	}
+}
+
+static void mana_hwc_rx_event_handler(void *ctx, u32 gdma_rxq_id,
+				      const struct hwc_rx_oob *rx_oob)
+{
+	struct hw_channel_context *hwc = ctx;
+	struct hwc_wq *hwc_rxq = hwc->rxq;
+	struct hwc_work_request *rx_req;
+	struct gdma_resp_hdr *resp;
+	struct gdma_wqe *dma_oob;
+	struct gdma_queue *rq;
+	struct gdma_sge *sge;
+	u64 rq_base_addr;
+	u64 rx_req_idx;
+	u8 *wqe;
+
+	if (WARN_ON_ONCE(hwc_rxq->gdma_wq->id != gdma_rxq_id))
+		return;
+
+	rq = hwc_rxq->gdma_wq;
+	wqe = mana_gd_get_wqe_ptr(rq, rx_oob->wqe_offset / GDMA_WQE_BU_SIZE);
+	dma_oob = (struct gdma_wqe *)wqe;
+
+	sge = (struct gdma_sge *)(wqe + 8 + dma_oob->inline_oob_size_div4 * 4);
+
+	/* Select the RX work request for virtual address and for reposting. */
+	rq_base_addr = hwc_rxq->msg_buf->mem_info.dma_handle;
+	rx_req_idx = (sge->address - rq_base_addr) / hwc->max_req_msg_size;
+
+	rx_req = &hwc_rxq->msg_buf->reqs[rx_req_idx];
+	resp = (struct gdma_resp_hdr *)rx_req->buf_va;
+
+	if (resp->response.hwc_msg_id >= hwc->num_inflight_msg) {
+		dev_err(hwc->dev, "HWC RX: wrong msg_id=%u\n",
+			resp->response.hwc_msg_id);
+		return;
+	}
+
+	mana_hwc_handle_resp(hwc, rx_oob->tx_oob_data_size, resp);
+
+	/* Stop using 'resp' here, because the buffer is posted back to
+	 * the HW in mana_hwc_post_rx_wqe() below.
+ */ + resp = null; + + mana_hwc_post_rx_wqe(hwc_rxq, rx_req); +} + +static void mana_hwc_tx_event_handler(void *ctx, u32 gdma_txq_id, + const struct hwc_rx_oob *rx_oob) +{ + struct hw_channel_context *hwc = ctx; + struct hwc_wq *hwc_txq = hwc->txq; + + warn_on_once(!hwc_txq || hwc_txq->gdma_wq->id != gdma_txq_id); +} + +static int mana_hwc_create_gdma_wq(struct hw_channel_context *hwc, + enum gdma_queue_type type, u64 queue_size, + struct gdma_queue **queue) +{ + struct gdma_queue_spec spec = {}; + + if (type != gdma_sq && type != gdma_rq) + return -einval; + + spec.type = type; + spec.monitor_avl_buf = false; + spec.queue_size = queue_size; + + return mana_gd_create_hwc_queue(hwc->gdma_dev, &spec, queue); +} + +static int mana_hwc_create_gdma_cq(struct hw_channel_context *hwc, + u64 queue_size, + void *ctx, gdma_cq_callback *cb, + struct gdma_queue *parent_eq, + struct gdma_queue **queue) +{ + struct gdma_queue_spec spec = {}; + + spec.type = gdma_cq; + spec.monitor_avl_buf = false; + spec.queue_size = queue_size; + spec.cq.context = ctx; + spec.cq.callback = cb; + spec.cq.parent_eq = parent_eq; + + return mana_gd_create_hwc_queue(hwc->gdma_dev, &spec, queue); +} + +static int mana_hwc_create_gdma_eq(struct hw_channel_context *hwc, + u64 queue_size, + void *ctx, gdma_eq_callback *cb, + struct gdma_queue **queue) +{ + struct gdma_queue_spec spec = {}; + + spec.type = gdma_eq; + spec.monitor_avl_buf = false; + spec.queue_size = queue_size; + spec.eq.context = ctx; + spec.eq.callback = cb; + spec.eq.log2_throttle_limit = default_log2_throttling_for_error_eq; + + return mana_gd_create_hwc_queue(hwc->gdma_dev, &spec, queue); +} + +static void mana_hwc_comp_event(void *ctx, struct gdma_queue *q_self) +{ + struct hwc_rx_oob comp_data = {}; + struct gdma_comp *completions; + struct hwc_cq *hwc_cq = ctx; + u32 comp_read, i; + + warn_on_once(hwc_cq->gdma_cq != q_self); + + completions = hwc_cq->comp_buf; + comp_read = mana_gd_poll_cq(q_self, completions, 
hwc_cq->queue_depth); + warn_on_once(comp_read <= 0 || comp_read > hwc_cq->queue_depth); + + for (i = 0; i < comp_read; ++i) { + comp_data = *(struct hwc_rx_oob *)completions[i].cqe_data; + + if (completions[i].is_sq) + hwc_cq->tx_event_handler(hwc_cq->tx_event_ctx, + completions[i].wq_num, + &comp_data); + else + hwc_cq->rx_event_handler(hwc_cq->rx_event_ctx, + completions[i].wq_num, + &comp_data); + } + + mana_gd_arm_cq(q_self); +} + +static void mana_hwc_destroy_cq(struct gdma_context *gc, struct hwc_cq *hwc_cq) +{ + if (!hwc_cq) + return; + + kfree(hwc_cq->comp_buf); + + if (hwc_cq->gdma_cq) + mana_gd_destroy_queue(gc, hwc_cq->gdma_cq); + + if (hwc_cq->gdma_eq) + mana_gd_destroy_queue(gc, hwc_cq->gdma_eq); + + kfree(hwc_cq); +} + +static int mana_hwc_create_cq(struct hw_channel_context *hwc, u16 q_depth, + gdma_eq_callback *callback, void *ctx, + hwc_rx_event_handler_t *rx_ev_hdlr, + void *rx_ev_ctx, + hwc_tx_event_handler_t *tx_ev_hdlr, + void *tx_ev_ctx, struct hwc_cq **hwc_cq_ptr) +{ + struct gdma_queue *eq, *cq; + struct gdma_comp *comp_buf; + struct hwc_cq *hwc_cq; + u32 eq_size, cq_size; + int err; + + eq_size = roundup_pow_of_two(gdma_eqe_size * q_depth); + if (eq_size < minimum_supported_page_size) + eq_size = minimum_supported_page_size; + + cq_size = roundup_pow_of_two(gdma_cqe_size * q_depth); + if (cq_size < minimum_supported_page_size) + cq_size = minimum_supported_page_size; + + hwc_cq = kzalloc(sizeof(*hwc_cq), gfp_kernel); + if (!hwc_cq) + return -enomem; + + err = mana_hwc_create_gdma_eq(hwc, eq_size, ctx, callback, &eq); + if (err) { + dev_err(hwc->dev, "failed to create hwc eq for rq: %d ", err); + goto out; + } + hwc_cq->gdma_eq = eq; + + err = mana_hwc_create_gdma_cq(hwc, cq_size, hwc_cq, mana_hwc_comp_event, + eq, &cq); + if (err) { + dev_err(hwc->dev, "failed to create hwc cq for rq: %d ", err); + goto out; + } + hwc_cq->gdma_cq = cq; + + comp_buf = kcalloc(q_depth, sizeof(struct gdma_comp), gfp_kernel); + if (!comp_buf) { + err = 
-enomem; + goto out; + } + + hwc_cq->hwc = hwc; + hwc_cq->comp_buf = comp_buf; + hwc_cq->queue_depth = q_depth; + hwc_cq->rx_event_handler = rx_ev_hdlr; + hwc_cq->rx_event_ctx = rx_ev_ctx; + hwc_cq->tx_event_handler = tx_ev_hdlr; + hwc_cq->tx_event_ctx = tx_ev_ctx; + + *hwc_cq_ptr = hwc_cq; + return 0; +out: + mana_hwc_destroy_cq(hwc->gdma_dev->gdma_context, hwc_cq); + return err; +} + +static int mana_hwc_alloc_dma_buf(struct hw_channel_context *hwc, u16 q_depth, + u32 max_msg_size, + struct hwc_dma_buf **dma_buf_ptr) +{ + struct gdma_context *gc = hwc->gdma_dev->gdma_context; + struct hwc_work_request *hwc_wr; + struct hwc_dma_buf *dma_buf; + struct gdma_mem_info *gmi; + void *virt_addr; + u32 buf_size; + u8 *base_pa; + int err; + u16 i; + + dma_buf = kzalloc(sizeof(*dma_buf) + + q_depth * sizeof(struct hwc_work_request), + gfp_kernel); + if (!dma_buf) + return -enomem; + + dma_buf->num_reqs = q_depth; + + buf_size = page_align(q_depth * max_msg_size); + + gmi = &dma_buf->mem_info; + err = mana_gd_alloc_memory(gc, buf_size, gmi); + if (err) { + dev_err(hwc->dev, "failed to allocate dma buffer: %d ", err); + goto out; + } + + virt_addr = dma_buf->mem_info.virt_addr; + base_pa = (u8 *)dma_buf->mem_info.dma_handle; + + for (i = 0; i < q_depth; i++) { + hwc_wr = &dma_buf->reqs[i]; + + hwc_wr->buf_va = virt_addr + i * max_msg_size; + hwc_wr->buf_sge_addr = base_pa + i * max_msg_size; + + hwc_wr->buf_len = max_msg_size; + } + + *dma_buf_ptr = dma_buf; + return 0; +out: + kfree(dma_buf); + return err; +} + +static void mana_hwc_dealloc_dma_buf(struct hw_channel_context *hwc, + struct hwc_dma_buf *dma_buf) +{ + if (!dma_buf) + return; + + mana_gd_free_memory(&dma_buf->mem_info); + + kfree(dma_buf); +} + +static void mana_hwc_destroy_wq(struct hw_channel_context *hwc, + struct hwc_wq *hwc_wq) +{ + if (!hwc_wq) + return; + + mana_hwc_dealloc_dma_buf(hwc, hwc_wq->msg_buf); + + if (hwc_wq->gdma_wq) + mana_gd_destroy_queue(hwc->gdma_dev->gdma_context, + hwc_wq->gdma_wq); + + 
kfree(hwc_wq); +} + +static int mana_hwc_create_wq(struct hw_channel_context *hwc, + enum gdma_queue_type q_type, u16 q_depth, + u32 max_msg_size, struct hwc_cq *hwc_cq, + struct hwc_wq **hwc_wq_ptr) +{ + struct gdma_queue *queue; + struct hwc_wq *hwc_wq; + u32 queue_size; + int err; + + warn_on(q_type != gdma_sq && q_type != gdma_rq); + + if (q_type == gdma_rq) + queue_size = roundup_pow_of_two(gdma_max_rqe_size * q_depth); + else + queue_size = roundup_pow_of_two(gdma_max_sqe_size * q_depth); + + if (queue_size < minimum_supported_page_size) + queue_size = minimum_supported_page_size; + + hwc_wq = kzalloc(sizeof(*hwc_wq), gfp_kernel); + if (!hwc_wq) + return -enomem; + + err = mana_hwc_create_gdma_wq(hwc, q_type, queue_size, &queue); + if (err) + goto out; + + err = mana_hwc_alloc_dma_buf(hwc, q_depth, max_msg_size, + &hwc_wq->msg_buf); + if (err) + goto out; + + hwc_wq->hwc = hwc; + hwc_wq->gdma_wq = queue; + hwc_wq->queue_depth = q_depth; + hwc_wq->hwc_cq = hwc_cq; + + *hwc_wq_ptr = hwc_wq; + return 0; +out: + if (err) + mana_hwc_destroy_wq(hwc, hwc_wq); + return err; +} + +static int mana_hwc_post_tx_wqe(const struct hwc_wq *hwc_txq, + struct hwc_work_request *req, + u32 dest_virt_rq_id, u32 dest_virt_rcq_id, + bool dest_pf) +{ + struct device *dev = hwc_txq->hwc->dev; + struct hwc_tx_oob *tx_oob; + struct gdma_sge *sge; + int err; + + if (req->msg_size == 0 || req->msg_size > req->buf_len) { + dev_err(dev, "wrong msg_size: %u, buf_len: %u ", + req->msg_size, req->buf_len); + return -einval; + } + + tx_oob = &req->tx_oob; + + tx_oob->vrq_id = dest_virt_rq_id; + tx_oob->dest_vfid = 0; + tx_oob->vrcq_id = dest_virt_rcq_id; + tx_oob->vscq_id = hwc_txq->hwc_cq->gdma_cq->id; + tx_oob->loopback = false; + tx_oob->lso_override = false; + tx_oob->dest_pf = dest_pf; + tx_oob->vsq_id = hwc_txq->gdma_wq->id; + + sge = &req->sge; + sge->address = (u64)req->buf_sge_addr; + sge->mem_key = hwc_txq->msg_buf->gpa_mkey; + sge->size = req->msg_size; + + memset(&req->wqe_req, 0, 
sizeof(struct gdma_wqe_request)); + req->wqe_req.sgl = sge; + req->wqe_req.num_sge = 1; + req->wqe_req.inline_oob_size = sizeof(struct hwc_tx_oob); + req->wqe_req.inline_oob_data = tx_oob; + req->wqe_req.client_data_unit = 0; + + err = mana_gd_post_and_ring(hwc_txq->gdma_wq, &req->wqe_req, null); + if (err) + dev_err(dev, "failed to post wqe on hwc sq: %d ", err); + return err; +} + +static int mana_hwc_init_inflight_msg(struct hw_channel_context *hwc, + u16 num_msg) +{ + int err; + + sema_init(&hwc->sema, num_msg); + + err = mana_gd_alloc_res_map(num_msg, &hwc->inflight_msg_res); + if (err) + dev_err(hwc->dev, "failed to init inflight_msg_res: %d ", err); + return err; +} + +static int mana_hwc_test_channel(struct hw_channel_context *hwc, u16 q_depth, + u32 max_req_msg_size, u32 max_resp_msg_size) +{ + struct gdma_context *gc = hwc->gdma_dev->gdma_context; + struct hwc_wq *hwc_rxq = hwc->rxq; + struct hwc_work_request *req; + struct hwc_caller_ctx *ctx; + int err; + int i; + + /* post all wqes on the rq */ + for (i = 0; i < q_depth; i++) { + req = &hwc_rxq->msg_buf->reqs[i]; + err = mana_hwc_post_rx_wqe(hwc_rxq, req); + if (err) + return err; + } + + ctx = kzalloc(q_depth * sizeof(struct hwc_caller_ctx), gfp_kernel); + if (!ctx) + return -enomem; + + for (i = 0; i < q_depth; ++i) + init_completion(&ctx[i].comp_event); + + hwc->caller_ctx = ctx; + + return mana_gd_test_eq(gc, hwc->cq->gdma_eq); +} + +static int mana_hwc_establish_channel(struct gdma_context *gc, u16 *q_depth, + u32 *max_req_msg_size, + u32 *max_resp_msg_size) +{ + struct hw_channel_context *hwc = gc->hwc.driver_data; + struct gdma_queue *rq = hwc->rxq->gdma_wq; + struct gdma_queue *sq = hwc->txq->gdma_wq; + struct gdma_queue *eq = hwc->cq->gdma_eq; + struct gdma_queue *cq = hwc->cq->gdma_cq; + int err; + + init_completion(&hwc->hwc_init_eqe_comp); + + err = mana_smc_setup_hwc(&gc->shm_channel, false, + eq->mem_info.dma_handle, + cq->mem_info.dma_handle, + rq->mem_info.dma_handle, + 
sq->mem_info.dma_handle, + eq->eq.msix_index); + if (err) + return err; + + if (!wait_for_completion_timeout(&hwc->hwc_init_eqe_comp, 60 * hz)) + return -etimedout; + + *q_depth = hwc->hwc_init_q_depth_max; + *max_req_msg_size = hwc->hwc_init_max_req_msg_size; + *max_resp_msg_size = hwc->hwc_init_max_resp_msg_size; + + if (warn_on(cq->id >= gc->max_num_cqs)) + return -eproto; + + gc->cq_table = vzalloc(gc->max_num_cqs * sizeof(struct gdma_queue *)); + if (!gc->cq_table) + return -enomem; + + gc->cq_table[cq->id] = cq; + + return 0; +} + +static int mana_hwc_init_queues(struct hw_channel_context *hwc, u16 q_depth, + u32 max_req_msg_size, u32 max_resp_msg_size) +{ + struct hwc_wq *hwc_rxq = null; + struct hwc_wq *hwc_txq = null; + struct hwc_cq *hwc_cq = null; + int err; + + err = mana_hwc_init_inflight_msg(hwc, q_depth); + if (err) + return err; + + /* cq is shared by sq and rq, so cq's queue depth is the sum of sq + * queue depth and rq queue depth. + */ + err = mana_hwc_create_cq(hwc, q_depth * 2, + mana_hwc_init_event_handler, hwc, + mana_hwc_rx_event_handler, hwc, + mana_hwc_tx_event_handler, hwc, &hwc_cq); + if (err) { + dev_err(hwc->dev, "failed to create hwc cq: %d ", err); + goto out; + } + hwc->cq = hwc_cq; + + err = mana_hwc_create_wq(hwc, gdma_rq, q_depth, max_req_msg_size, + hwc_cq, &hwc_rxq); + if (err) { + dev_err(hwc->dev, "failed to create hwc rq: %d ", err); + goto out; + } + hwc->rxq = hwc_rxq; + + err = mana_hwc_create_wq(hwc, gdma_sq, q_depth, max_resp_msg_size, + hwc_cq, &hwc_txq); + if (err) { + dev_err(hwc->dev, "failed to create hwc sq: %d ", err); + goto out; + } + hwc->txq = hwc_txq; + + hwc->num_inflight_msg = q_depth; + hwc->max_req_msg_size = max_req_msg_size; + + return 0; +out: + if (hwc_txq) + mana_hwc_destroy_wq(hwc, hwc_txq); + + if (hwc_rxq) + mana_hwc_destroy_wq(hwc, hwc_rxq); + + if (hwc_cq) + mana_hwc_destroy_cq(hwc->gdma_dev->gdma_context, hwc_cq); + + mana_gd_free_res_map(&hwc->inflight_msg_res); + return err; +} + +int 
mana_hwc_create_channel(struct gdma_context *gc) +{ + u32 max_req_msg_size, max_resp_msg_size; + struct gdma_dev *gd = &gc->hwc; + struct hw_channel_context *hwc; + u16 q_depth_max; + int err; + + hwc = kzalloc(sizeof(*hwc), gfp_kernel); + if (!hwc) + return -enomem; + + gd->gdma_context = gc; + gd->driver_data = hwc; + hwc->gdma_dev = gd; + hwc->dev = gc->dev; + + /* hwc's instance number is always 0. */ + gd->dev_id.as_uint32 = 0; + gd->dev_id.type = gdma_device_hwc; + + gd->pdid = invalid_pdid; + gd->doorbell = invalid_doorbell; + + err = mana_hwc_init_queues(hwc, hw_channel_vf_bootstrap_queue_depth, + hw_channel_max_request_size, + hw_channel_max_response_size); + if (err) { + dev_err(hwc->dev, "failed to initialize hwc: %d ", err); + goto out; + } + + err = mana_hwc_establish_channel(gc, &q_depth_max, &max_req_msg_size, + &max_resp_msg_size); + if (err) { + dev_err(hwc->dev, "failed to establish hwc: %d ", err); + goto out; + } + + err = mana_hwc_test_channel(gc->hwc.driver_data, + hw_channel_vf_bootstrap_queue_depth, + max_req_msg_size, max_resp_msg_size); + if (err) { + dev_err(hwc->dev, "failed to test hwc: %d ", err); + goto out; + } + + return 0; +out: + kfree(hwc); + return err; +} + +void mana_hwc_destroy_channel(struct gdma_context *gc) +{ + struct hw_channel_context *hwc = gc->hwc.driver_data; + struct hwc_caller_ctx *ctx; + + mana_smc_teardown_hwc(&gc->shm_channel, false); + + ctx = hwc->caller_ctx; + kfree(ctx); + hwc->caller_ctx = null; + + mana_hwc_destroy_wq(hwc, hwc->txq); + hwc->txq = null; + + mana_hwc_destroy_wq(hwc, hwc->rxq); + hwc->rxq = null; + + mana_hwc_destroy_cq(hwc->gdma_dev->gdma_context, hwc->cq); + hwc->cq = null; + + mana_gd_free_res_map(&hwc->inflight_msg_res); + + hwc->num_inflight_msg = 0; + + if (hwc->gdma_dev->pdid != invalid_pdid) { + hwc->gdma_dev->doorbell = invalid_doorbell; + hwc->gdma_dev->pdid = invalid_pdid; + } + + kfree(hwc); + gc->hwc.driver_data = null; + gc->hwc.gdma_context = null; +} + +int 
mana_hwc_send_request(struct hw_channel_context *hwc, u32 req_len, + const void *req, u32 resp_len, void *resp) +{ + struct hwc_work_request *tx_wr; + struct hwc_wq *txq = hwc->txq; + struct gdma_req_hdr *req_msg; + struct hwc_caller_ctx *ctx; + u16 msg_id; + int err; + + mana_hwc_get_msg_index(hwc, &msg_id); + + tx_wr = &txq->msg_buf->reqs[msg_id]; + + if (req_len > tx_wr->buf_len) { + dev_err(hwc->dev, "hwc: req msg size: %d > %d ", req_len, + tx_wr->buf_len); + err = -einval; + goto out; + } + + ctx = hwc->caller_ctx + msg_id; + ctx->output_buf = resp; + ctx->output_buflen = resp_len; + + req_msg = (struct gdma_req_hdr *)tx_wr->buf_va; + if (req) + memcpy(req_msg, req, req_len); + + req_msg->req.hwc_msg_id = msg_id; + + tx_wr->msg_size = req_len; + + err = mana_hwc_post_tx_wqe(txq, tx_wr, 0, 0, false); + if (err) { + dev_err(hwc->dev, "hwc: failed to post send wqe: %d ", err); + goto out; + } + + if (!wait_for_completion_timeout(&ctx->comp_event, 30 * hz)) { + dev_err(hwc->dev, "hwc: request timed out! "); + err = -etimedout; + goto out; + } + + if (ctx->error) { + err = ctx->error; + goto out; + } + + if (ctx->status_code) { + dev_err(hwc->dev, "hwc: failed hw_channel req: 0x%x ", + ctx->status_code); + err = -eproto; + goto out; + } +out: + mana_hwc_put_msg_index(hwc, msg_id); + return err; +} diff --git a/drivers/net/ethernet/microsoft/mana/hw_channel.h b/drivers/net/ethernet/microsoft/mana/hw_channel.h --- /dev/null +++ b/drivers/net/ethernet/microsoft/mana/hw_channel.h +/* spdx-license-identifier: gpl-2.0 or bsd-3-clause */ +/* copyright (c) 2021, microsoft corporation. 
*/ + +#ifndef _hw_channel_h +#define _hw_channel_h + +#define default_log2_throttling_for_error_eq 4 + +#define hw_channel_max_request_size 0x1000 +#define hw_channel_max_response_size 0x1000 + +#define hw_channel_vf_bootstrap_queue_depth 1 + +#define hwc_init_data_cqid 1 +#define hwc_init_data_rqid 2 +#define hwc_init_data_sqid 3 +#define hwc_init_data_queue_depth 4 +#define hwc_init_data_max_request 5 +#define hwc_init_data_max_response 6 +#define hwc_init_data_max_num_cqs 7 +#define hwc_init_data_pdid 8 +#define hwc_init_data_gpa_mkey 9 + +/* structures labeled with "hw data" are exchanged with the hardware. all of + * them are naturally aligned and hence don't need __packed. + */ + +union hwc_init_eq_id_db { + u32 as_uint32; + + struct { + u32 eq_id : 16; + u32 doorbell : 16; + }; +}; /* hw data */ + +union hwc_init_type_data { + u32 as_uint32; + + struct { + u32 value : 24; + u32 type : 8; + }; +}; /* hw data */ + +struct hwc_rx_oob { + u32 type : 6; + u32 eom : 1; + u32 som : 1; + u32 vendor_err : 8; + u32 reserved1 : 16; + + u32 src_virt_wq : 24; + u32 src_vfid : 8; + + u32 reserved2; + + union { + u32 wqe_addr_low; + u32 wqe_offset; + }; + + u32 wqe_addr_high; + + u32 client_data_unit : 14; + u32 reserved3 : 18; + + u32 tx_oob_data_size; + + u32 chunk_offset : 21; + u32 reserved4 : 11; +}; /* hw data */ + +struct hwc_tx_oob { + u32 reserved1; + + u32 reserved2; + + u32 vrq_id : 24; + u32 dest_vfid : 8; + + u32 vrcq_id : 24; + u32 reserved3 : 8; + + u32 vscq_id : 24; + u32 loopback : 1; + u32 lso_override: 1; + u32 dest_pf : 1; + u32 reserved4 : 5; + + u32 vsq_id : 24; + u32 reserved5 : 8; +}; /* hw data */ + +struct hwc_work_request { + void *buf_va; + void *buf_sge_addr; + u32 buf_len; + u32 msg_size; + + struct gdma_wqe_request wqe_req; + struct hwc_tx_oob tx_oob; + + struct gdma_sge sge; +}; + +/* hwc_dma_buf represents the array of in-flight wqes. + * mem_info as know as the gdma mapped memory is partitioned and used by + * in-flight wqes. 
+ * the number of wqes is determined by the number of in-flight messages. + */ +struct hwc_dma_buf { + struct gdma_mem_info mem_info; + + u32 gpa_mkey; + + u32 num_reqs; + struct hwc_work_request reqs[]; +}; + +typedef void hwc_rx_event_handler_t(void *ctx, u32 gdma_rxq_id, + const struct hwc_rx_oob *rx_oob); + +typedef void hwc_tx_event_handler_t(void *ctx, u32 gdma_txq_id, + const struct hwc_rx_oob *rx_oob); + +struct hwc_cq { + struct hw_channel_context *hwc; + + struct gdma_queue *gdma_cq; + struct gdma_queue *gdma_eq; + struct gdma_comp *comp_buf; + u16 queue_depth; + + hwc_rx_event_handler_t *rx_event_handler; + void *rx_event_ctx; + + hwc_tx_event_handler_t *tx_event_handler; + void *tx_event_ctx; +}; + +struct hwc_wq { + struct hw_channel_context *hwc; + + struct gdma_queue *gdma_wq; + struct hwc_dma_buf *msg_buf; + u16 queue_depth; + + struct hwc_cq *hwc_cq; +}; + +struct hwc_caller_ctx { + struct completion comp_event; + void *output_buf; + u32 output_buflen; + + u32 error; /* linux error code */ + u32 status_code; +}; + +struct hw_channel_context { + struct gdma_dev *gdma_dev; + struct device *dev; + + u16 num_inflight_msg; + u32 max_req_msg_size; + + u16 hwc_init_q_depth_max; + u32 hwc_init_max_req_msg_size; + u32 hwc_init_max_resp_msg_size; + + struct completion hwc_init_eqe_comp; + + struct hwc_wq *rxq; + struct hwc_wq *txq; + struct hwc_cq *cq; + + struct semaphore sema; + struct gdma_resource inflight_msg_res; + + struct hwc_caller_ctx *caller_ctx; +}; + +int mana_hwc_create_channel(struct gdma_context *gc); +void mana_hwc_destroy_channel(struct gdma_context *gc); + +int mana_hwc_send_request(struct hw_channel_context *hwc, u32 req_len, + const void *req, u32 resp_len, void *resp); + +#endif /* _hw_channel_h */ diff --git a/drivers/net/ethernet/microsoft/mana/mana.h b/drivers/net/ethernet/microsoft/mana/mana.h --- /dev/null +++ b/drivers/net/ethernet/microsoft/mana/mana.h +/* spdx-license-identifier: gpl-2.0 or bsd-3-clause */ +/* copyright (c) 
2021, microsoft corporation. */ + +#ifndef _mana_h +#define _mana_h + +#include "gdma.h" +#include "hw_channel.h" + +/* microsoft azure network adapter (mana)'s definitions + * + * structures labeled with "hw data" are exchanged with the hardware. all of + * them are naturally aligned and hence don't need __packed. + */ + +/* mana protocol version */ +#define mana_major_version 0 +#define mana_minor_version 1 +#define mana_micro_version 1 + +typedef u64 mana_handle_t; +#define invalid_mana_handle ((mana_handle_t)-1) + +enum tri_state { + tri_state_unknown = -1, + tri_state_false = 0, + tri_state_true = 1 +}; + +/* number of entries for hardware indirection table must be in power of 2 */ +#define mana_indirect_table_size 64 +#define mana_indirect_table_mask (mana_indirect_table_size - 1) + +/* the toeplitz hash key's length in bytes: should be multiple of 8 */ +#define mana_hash_key_size 40 + +#define comp_entry_size 64 + +#define adapter_mtu_size 1500 +#define max_frame_size (adapter_mtu_size + 14) + +#define rx_buffers_per_queue 512 + +#define max_send_buffers_per_queue 256 + +#define eq_size (8 * page_size) +#define log2_eq_throttle 3 + +#define max_ports_in_mana_dev 16 + +struct mana_stats { + u64 packets; + u64 bytes; + struct u64_stats_sync syncp; +}; + +struct mana_txq { + struct gdma_queue *gdma_sq; + + union { + u32 gdma_txq_id; + struct { + u32 reserved1 : 10; + u32 vsq_frame : 14; + u32 reserved2 : 8; + }; + }; + + u16 vp_offset; + + struct net_device *ndev; + + /* the skbs are sent to the hw and we are waiting for the cqes. 
*/ + struct sk_buff_head pending_skbs; + struct netdev_queue *net_txq; + + atomic_t pending_sends; + + struct mana_stats stats; +}; + +/* skb data and frags dma mappings */ +struct mana_skb_head { + dma_addr_t dma_handle[max_skb_frags + 1]; + + u32 size[max_skb_frags + 1]; +}; + +#define mana_headroom sizeof(struct mana_skb_head) + +enum mana_tx_pkt_format { + mana_short_pkt_fmt = 0, + mana_long_pkt_fmt = 1, +}; + +struct mana_tx_short_oob { + u32 pkt_fmt : 2; + u32 is_outer_ipv4 : 1; + u32 is_outer_ipv6 : 1; + u32 comp_iphdr_csum : 1; + u32 comp_tcp_csum : 1; + u32 comp_udp_csum : 1; + u32 supress_txcqe_gen : 1; + u32 vcq_num : 24; + + u32 trans_off : 10; /* transport header offset */ + u32 vsq_frame : 14; + u32 short_vp_offset : 8; +}; /* hw data */ + +struct mana_tx_long_oob { + u32 is_encap : 1; + u32 inner_is_ipv6 : 1; + u32 inner_tcp_opt : 1; + u32 inject_vlan_pri_tag : 1; + u32 reserved1 : 12; + u32 pcp : 3; /* 802.1q */ + u32 dei : 1; /* 802.1q */ + u32 vlan_id : 12; /* 802.1q */ + + u32 inner_frame_offset : 10; + u32 inner_ip_rel_offset : 6; + u32 long_vp_offset : 12; + u32 reserved2 : 4; + + u32 reserved3; + u32 reserved4; +}; /* hw data */ + +struct mana_tx_oob { + struct mana_tx_short_oob s_oob; + struct mana_tx_long_oob l_oob; +}; /* hw data */ + +enum mana_cq_type { + mana_cq_type_rx, + mana_cq_type_tx, +}; + +enum mana_cqe_type { + cqe_invalid = 0, + cqe_rx_okay = 1, + cqe_rx_coalesced_4 = 2, + cqe_rx_object_fence = 3, + cqe_rx_truncated = 4, + + cqe_tx_okay = 32, + cqe_tx_sa_drop = 33, + cqe_tx_mtu_drop = 34, + cqe_tx_invalid_oob = 35, + cqe_tx_invalid_eth_type = 36, + cqe_tx_hdr_processing_error = 37, + cqe_tx_vf_disabled = 38, + cqe_tx_vport_idx_out_of_range = 39, + cqe_tx_vport_disabled = 40, + cqe_tx_vlan_tagging_violation = 41, +}; + +#define mana_cqe_completion 1 + +struct mana_cqe_header { + u32 cqe_type : 6; + u32 client_type : 2; + u32 vendor_err : 24; +}; /* hw data */ + +/* ndis hash types */ +#define ndis_hash_ipv4 bit(0) +#define 
ndis_hash_tcp_ipv4 bit(1) +#define ndis_hash_udp_ipv4 bit(2) +#define ndis_hash_ipv6 bit(3) +#define ndis_hash_tcp_ipv6 bit(4) +#define ndis_hash_udp_ipv6 bit(5) +#define ndis_hash_ipv6_ex bit(6) +#define ndis_hash_tcp_ipv6_ex bit(7) +#define ndis_hash_udp_ipv6_ex bit(8) + +#define mana_hash_l3 (ndis_hash_ipv4 | ndis_hash_ipv6 | ndis_hash_ipv6_ex) +#define mana_hash_l4 \ + (ndis_hash_tcp_ipv4 | ndis_hash_udp_ipv4 | ndis_hash_tcp_ipv6 | \ + ndis_hash_udp_ipv6 | ndis_hash_tcp_ipv6_ex | ndis_hash_udp_ipv6_ex) + +struct mana_rxcomp_perpkt_info { + u32 pkt_len : 16; + u32 reserved1 : 16; + u32 reserved2; + u32 pkt_hash; +}; /* hw data */ + +#define mana_rxcomp_oob_num_ppi 4 + +/* receive completion oob */ +struct mana_rxcomp_oob { + struct mana_cqe_header cqe_hdr; + + u32 rx_vlan_id : 12; + u32 rx_vlantag_present : 1; + u32 rx_outer_iphdr_csum_succeed : 1; + u32 rx_outer_iphdr_csum_fail : 1; + u32 reserved1 : 1; + u32 rx_hashtype : 9; + u32 rx_iphdr_csum_succeed : 1; + u32 rx_iphdr_csum_fail : 1; + u32 rx_tcp_csum_succeed : 1; + u32 rx_tcp_csum_fail : 1; + u32 rx_udp_csum_succeed : 1; + u32 rx_udp_csum_fail : 1; + u32 reserved2 : 1; + + struct mana_rxcomp_perpkt_info ppi[mana_rxcomp_oob_num_ppi]; + + u32 rx_wqe_offset; +}; /* hw data */ + +struct mana_tx_comp_oob { + struct mana_cqe_header cqe_hdr; + + u32 tx_data_offset; + + u32 tx_sgl_offset : 5; + u32 tx_wqe_offset : 27; + + u32 reserved[12]; +}; /* hw data */ + +struct mana_rxq; + +struct mana_cq { + struct gdma_queue *gdma_cq; + + /* cache the cq id (used to verify if each cqe comes to the right cq. */ + u32 gdma_id; + + /* type of the cq: tx or rx */ + enum mana_cq_type type; + + /* pointer to the mana_rxq that is pushing rx cqes to the queue. + * only and must be non-null if type is mana_cq_type_rx. + */ + struct mana_rxq *rxq; + + /* pointer to the mana_txq that is pushing tx cqes to the queue. + * only and must be non-null if type is mana_cq_type_tx. 
+ */ + struct mana_txq *txq; + + /* pointer to a buffer which the cq handler can copy the cqe's into. */ + struct gdma_comp *gdma_comp_buf; +}; + +#define gdma_max_rqe_sges 15 + +struct mana_recv_buf_oob { + /* a valid gdma work request representing the data buffer. */ + struct gdma_wqe_request wqe_req; + + void *buf_va; + dma_addr_t buf_dma_addr; + + /* sgl of the buffer going to be sent has part of the work request. */ + u32 num_sge; + struct gdma_sge sgl[gdma_max_rqe_sges]; + + /* required to store the result of mana_gd_post_work_request. + * gdma_posted_wqe_info.wqe_size_in_bu is required for progressing the + * work queue when the wqe is consumed. + */ + struct gdma_posted_wqe_info wqe_inf; +}; + +struct mana_rxq { + struct gdma_queue *gdma_rq; + /* cache the gdma receive queue id */ + u32 gdma_id; + + /* index of rq in the vport, not gdma receive queue id */ + u32 rxq_idx; + + u32 datasize; + + mana_handle_t rxobj; + + struct mana_cq rx_cq; + + struct net_device *ndev; + + /* total number of receive buffers to be allocated */ + u32 num_rx_buf; + + u32 buf_index; + + struct mana_stats stats; + + /* must be the last member: + * each receive buffer has an associated mana_recv_buf_oob. + */ + struct mana_recv_buf_oob rx_oobs[]; +}; + +struct mana_tx_qp { + struct mana_txq txq; + + struct mana_cq tx_cq; + + mana_handle_t tx_object; +}; + +struct mana_ethtool_stats { + u64 stop_queue; + u64 wake_queue; +}; + +struct mana_context { + struct gdma_dev *gdma_dev; + + u16 num_ports; + + struct net_device *ports[max_ports_in_mana_dev]; +}; + +struct mana_port_context { + struct mana_context *ac; + struct net_device *ndev; + + u8 mac_addr[eth_alen]; + + struct mana_eq *eqs; + + enum tri_state rss_state; + + mana_handle_t default_rxobj; + bool tx_shortform_allowed; + u16 tx_vp_offset; + + struct mana_tx_qp *tx_qp; + + /* indirection table for rx & tx. 
the values are queue indexes */ + u32 indir_table[mana_indirect_table_size]; + + /* indirection table containing rxobject handles */ + mana_handle_t rxobj_table[mana_indirect_table_size]; + + /* hash key used by the nic */ + u8 hashkey[mana_hash_key_size]; + + /* this points to an array of num_queues of rq pointers. */ + struct mana_rxq **rxqs; + + /* create num_queues eqs, sqs, sq-cqs, rqs and rq-cqs, respectively. */ + unsigned int max_queues; + unsigned int num_queues; + + mana_handle_t port_handle; + + u16 port_idx; + + bool port_is_up; + bool port_st_save; /* saved port state */ + + struct mana_ethtool_stats eth_stats; +}; + +int mana_config_rss(struct mana_port_context *ac, enum tri_state rx, + bool update_hash, bool update_tab); + +int mana_alloc_queues(struct net_device *ndev); +int mana_attach(struct net_device *ndev); +int mana_detach(struct net_device *ndev, bool from_close); + +int mana_probe(struct gdma_dev *gd); +void mana_remove(struct gdma_dev *gd); + +extern const struct ethtool_ops mana_ethtool_ops; + +struct mana_obj_spec { + u32 queue_index; + u64 gdma_region; + u32 queue_size; + u32 attached_eq; + u32 modr_ctx_id; +}; + +enum mana_command_code { + mana_query_dev_config = 0x20001, + mana_query_gf_stat = 0x20002, + mana_config_vport_tx = 0x20003, + mana_create_wq_obj = 0x20004, + mana_destroy_wq_obj = 0x20005, + mana_fence_rq = 0x20006, + mana_config_vport_rx = 0x20007, + mana_query_vport_config = 0x20008, +}; + +/* query device configuration */ +struct mana_query_device_cfg_req { + struct gdma_req_hdr hdr; + + /* driver capability flags */ + u64 drv_cap_flags1; + u64 drv_cap_flags2; + u64 drv_cap_flags3; + u64 drv_cap_flags4; + + u32 proto_major_ver; + u32 proto_minor_ver; + u32 proto_micro_ver; + + u32 reserved; +}; /* hw data */ + +struct mana_query_device_cfg_resp { + struct gdma_resp_hdr hdr; + + u64 pf_cap_flags1; + u64 pf_cap_flags2; + u64 pf_cap_flags3; + u64 pf_cap_flags4; + + u16 max_num_vports; + u16 reserved; + u32 max_num_eqs; +}; /* 
hw data */ + +/* query vport configuration */ +struct mana_query_vport_cfg_req { + struct gdma_req_hdr hdr; + u32 vport_index; +}; /* hw data */ + +struct mana_query_vport_cfg_resp { + struct gdma_resp_hdr hdr; + u32 max_num_sq; + u32 max_num_rq; + u32 num_indirection_ent; + u32 reserved1; + u8 mac_addr[6]; + u8 reserved2[2]; + mana_handle_t vport; +}; /* hw data */ + +/* configure vport */ +struct mana_config_vport_req { + struct gdma_req_hdr hdr; + mana_handle_t vport; + u32 pdid; + u32 doorbell_pageid; +}; /* hw data */ + +struct mana_config_vport_resp { + struct gdma_resp_hdr hdr; + u16 tx_vport_offset; + u8 short_form_allowed; + u8 reserved; +}; /* hw data */ + +/* create wq object */ +struct mana_create_wqobj_req { + struct gdma_req_hdr hdr; + mana_handle_t vport; + u32 wq_type; + u32 reserved; + u64 wq_gdma_region; + u64 cq_gdma_region; + u32 wq_size; + u32 cq_size; + u32 cq_moderation_ctx_id; + u32 cq_parent_qid; +}; /* hw data */ + +struct mana_create_wqobj_resp { + struct gdma_resp_hdr hdr; + u32 wq_id; + u32 cq_id; + mana_handle_t wq_obj; +}; /* hw data */ + +/* destroy wq object */ +struct mana_destroy_wqobj_req { + struct gdma_req_hdr hdr; + u32 wq_type; + u32 reserved; + mana_handle_t wq_obj_handle; +}; /* hw data */ + +struct mana_destroy_wqobj_resp { + struct gdma_resp_hdr hdr; +}; /* hw data */ + +/* fence rq */ +struct mana_fence_rq_req { + struct gdma_req_hdr hdr; + mana_handle_t wq_obj_handle; +}; /* hw data */ + +struct mana_fence_rq_resp { + struct gdma_resp_hdr hdr; +}; /* hw data */ + +/* configure vport rx steering */ +struct mana_cfg_rx_steer_req { + struct gdma_req_hdr hdr; + mana_handle_t vport; + u16 num_indir_entries; + u16 indir_tab_offset; + u32 rx_enable; + u32 rss_enable; + u8 update_default_rxobj; + u8 update_hashkey; + u8 update_indir_tab; + u8 reserved; + mana_handle_t default_rxobj; + u8 hashkey[mana_hash_key_size]; +}; /* hw data */ + +struct mana_cfg_rx_steer_resp { + struct gdma_resp_hdr hdr; +}; /* hw data */ + +#define 
mana_max_num_queues 16 + +#define mana_short_vport_offset_max ((1u << 8) - 1) + +struct mana_tx_package { + struct gdma_wqe_request wqe_req; + struct gdma_sge sgl_array[5]; + struct gdma_sge *sgl_ptr; + + struct mana_tx_oob tx_oob; + + struct gdma_posted_wqe_info wqe_info; +}; + +#endif /* _mana_h */ diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c --- /dev/null +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c +// spdx-license-identifier: gpl-2.0 or bsd-3-clause +/* copyright (c) 2021, microsoft corporation. */ + +#include <linux/inetdevice.h> +#include <linux/etherdevice.h> +#include <linux/ethtool.h> +#include <linux/mm.h> + +#include <net/checksum.h> +#include <net/ip6_checksum.h> + +#include "mana.h" + +/* microsoft azure network adapter (mana) functions */ + +static int mana_open(struct net_device *ndev) +{ + struct mana_port_context *apc = netdev_priv(ndev); + int err; + + err = mana_alloc_queues(ndev); + if (err) + return err; + + apc->port_is_up = true; + + /* ensure port state updated before txq state */ + smp_wmb(); + + netif_carrier_on(ndev); + netif_tx_wake_all_queues(ndev); + + return 0; +} + +static int mana_close(struct net_device *ndev) +{ + struct mana_port_context *apc = netdev_priv(ndev); + + if (!apc->port_is_up) + return 0; + + return mana_detach(ndev, true); +} + +static bool mana_can_tx(struct gdma_queue *wq) +{ + return mana_gd_wq_avail_space(wq) >= max_tx_wqe_size; +} + +static unsigned int mana_checksum_info(struct sk_buff *skb) +{ + if (skb->protocol == htons(eth_p_ip)) { + struct iphdr *ip = ip_hdr(skb); + + if (ip->protocol == ipproto_tcp) + return ipproto_tcp; + + if (ip->protocol == ipproto_udp) + return ipproto_udp; + } else if (skb->protocol == htons(eth_p_ipv6)) { + struct ipv6hdr *ip6 = ipv6_hdr(skb); + + if (ip6->nexthdr == ipproto_tcp) + return ipproto_tcp; + + if (ip6->nexthdr == ipproto_udp) + return ipproto_udp; + } + + /* no csum offloading */ + return 0; +} + 
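The mana_checksum_info() helper in the hunk above returns the IP protocol number (TCP or UDP) when checksum offload applies, and 0 otherwise. A minimal userspace sketch of that decision logic — `checksum_info`, `PROTO_TCP`, `PROTO_UDP`, and the ethertype constants are illustrative stand-ins, not the driver's symbols:

```c
#include <assert.h>
#include <stdint.h>

/* Ethertypes and IP protocol numbers; same values the driver checks
 * via eth_p_ip / eth_p_ipv6 and ipproto_tcp / ipproto_udp. */
#define ETHERTYPE_IP   0x0800
#define ETHERTYPE_IPV6 0x86DD
#define PROTO_TCP      6
#define PROTO_UDP      17

/* Return the transport protocol eligible for checksum offload,
 * or 0 meaning "no csum offloading" (mirrors the helper's contract). */
static unsigned int checksum_info(uint16_t ethertype, uint8_t l4_proto)
{
    if (ethertype != ETHERTYPE_IP && ethertype != ETHERTYPE_IPV6)
        return 0;               /* non-IP frame: no offload */

    if (l4_proto == PROTO_TCP || l4_proto == PROTO_UDP)
        return l4_proto;        /* caller branches on TCP vs. UDP */

    return 0;                   /* other L4 protocols: no offload */
}
```

The driver reads the protocol field out of the parsed IPv4/IPv6 header; this sketch just takes the two numbers directly to keep the branch structure visible.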
+static int mana_map_skb(struct sk_buff *skb, struct mana_port_context *apc, + struct mana_tx_package *tp) +{ + struct mana_skb_head *ash = (struct mana_skb_head *)skb->head; + struct gdma_dev *gd = apc->ac->gdma_dev; + struct gdma_context *gc; + struct device *dev; + skb_frag_t *frag; + dma_addr_t da; + int i; + + gc = gd->gdma_context; + dev = gc->dev; + da = dma_map_single(dev, skb->data, skb_headlen(skb), DMA_TO_DEVICE); + + if (dma_mapping_error(dev, da)) + return -ENOMEM; + + ash->dma_handle[0] = da; + ash->size[0] = skb_headlen(skb); + + tp->wqe_req.sgl[0].address = ash->dma_handle[0]; + tp->wqe_req.sgl[0].mem_key = gd->gpa_mkey; + tp->wqe_req.sgl[0].size = ash->size[0]; + + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { + frag = &skb_shinfo(skb)->frags[i]; + da = skb_frag_dma_map(dev, frag, 0, skb_frag_size(frag), + DMA_TO_DEVICE); + + if (dma_mapping_error(dev, da)) + goto frag_err; + + ash->dma_handle[i + 1] = da; + ash->size[i + 1] = skb_frag_size(frag); + + tp->wqe_req.sgl[i + 1].address = ash->dma_handle[i + 1]; + tp->wqe_req.sgl[i + 1].mem_key = gd->gpa_mkey; + tp->wqe_req.sgl[i + 1].size = ash->size[i + 1]; + } + + return 0; + +frag_err: + for (i = i - 1; i >= 0; i--) + dma_unmap_page(dev, ash->dma_handle[i + 1], ash->size[i + 1], + DMA_TO_DEVICE); + + dma_unmap_single(dev, ash->dma_handle[0], ash->size[0], DMA_TO_DEVICE); + + return -ENOMEM; +} + +static int mana_start_xmit(struct sk_buff *skb, struct net_device *ndev) +{ + enum mana_tx_pkt_format pkt_fmt = MANA_SHORT_PKT_FMT; + struct mana_port_context *apc = netdev_priv(ndev); + u16 txq_idx = skb_get_queue_mapping(skb); + struct gdma_dev *gd = apc->ac->gdma_dev; + bool ipv4 = false, ipv6 = false; + struct mana_tx_package pkg = {}; + struct netdev_queue *net_txq; + struct mana_stats *tx_stats; + struct gdma_queue *gdma_sq; + unsigned int csum_type; + struct mana_txq *txq; + struct mana_cq *cq; + int err, len; + + if (unlikely(!apc->port_is_up)) + goto tx_drop; + + if (skb_cow_head(skb, 
MANA_HEADROOM)) + goto tx_drop_count; + + txq = &apc->tx_qp[txq_idx].txq; + gdma_sq = txq->gdma_sq; + cq = &apc->tx_qp[txq_idx].tx_cq; + + pkg.tx_oob.s_oob.vcq_num = cq->gdma_id; + pkg.tx_oob.s_oob.vsq_frame = txq->vsq_frame; + + if (txq->vp_offset > MANA_SHORT_VPORT_OFFSET_MAX) { + pkg.tx_oob.l_oob.long_vp_offset = txq->vp_offset; + pkt_fmt = MANA_LONG_PKT_FMT; + } else { + pkg.tx_oob.s_oob.short_vp_offset = txq->vp_offset; + } + + pkg.tx_oob.s_oob.pkt_fmt = pkt_fmt; + + if (pkt_fmt == MANA_SHORT_PKT_FMT) + pkg.wqe_req.inline_oob_size = sizeof(struct mana_tx_short_oob); + else + pkg.wqe_req.inline_oob_size = sizeof(struct mana_tx_oob); + + pkg.wqe_req.inline_oob_data = &pkg.tx_oob; + pkg.wqe_req.flags = 0; + pkg.wqe_req.client_data_unit = 0; + + pkg.wqe_req.num_sge = 1 + skb_shinfo(skb)->nr_frags; + WARN_ON_ONCE(pkg.wqe_req.num_sge > 30); + + if (pkg.wqe_req.num_sge <= ARRAY_SIZE(pkg.sgl_array)) { + pkg.wqe_req.sgl = pkg.sgl_array; + } else { + pkg.sgl_ptr = kmalloc_array(pkg.wqe_req.num_sge, + sizeof(struct gdma_sge), + GFP_ATOMIC); + if (!pkg.sgl_ptr) + goto tx_drop_count; + + pkg.wqe_req.sgl = pkg.sgl_ptr; + } + + if (skb->protocol == htons(ETH_P_IP)) + ipv4 = true; + else if (skb->protocol == htons(ETH_P_IPV6)) + ipv6 = true; + + if (skb_is_gso(skb)) { + pkg.tx_oob.s_oob.is_outer_ipv4 = ipv4; + pkg.tx_oob.s_oob.is_outer_ipv6 = ipv6; + + pkg.tx_oob.s_oob.comp_iphdr_csum = 1; + pkg.tx_oob.s_oob.comp_tcp_csum = 1; + pkg.tx_oob.s_oob.trans_off = skb_transport_offset(skb); + + pkg.wqe_req.client_data_unit = skb_shinfo(skb)->gso_size; + pkg.wqe_req.flags = GDMA_WR_OOB_IN_SGL | GDMA_WR_PAD_BY_SGE0; + if (ipv4) { + ip_hdr(skb)->tot_len = 0; + ip_hdr(skb)->check = 0; + tcp_hdr(skb)->check = + ~csum_tcpudp_magic(ip_hdr(skb)->saddr, + ip_hdr(skb)->daddr, 0, + IPPROTO_TCP, 0); + } else { + ipv6_hdr(skb)->payload_len = 0; + tcp_hdr(skb)->check = + ~csum_ipv6_magic(&ipv6_hdr(skb)->saddr, + &ipv6_hdr(skb)->daddr, 0, + IPPROTO_TCP, 0); + } + } else if (skb->ip_summed == CHECKSUM_PARTIAL) { + csum_type = mana_checksum_info(skb); + + if (csum_type == IPPROTO_TCP) { + pkg.tx_oob.s_oob.is_outer_ipv4 = ipv4; + pkg.tx_oob.s_oob.is_outer_ipv6 = ipv6; + + pkg.tx_oob.s_oob.comp_tcp_csum = 1; + pkg.tx_oob.s_oob.trans_off = skb_transport_offset(skb); + + } else if (csum_type == IPPROTO_UDP) { + pkg.tx_oob.s_oob.is_outer_ipv4 = ipv4; + pkg.tx_oob.s_oob.is_outer_ipv6 = ipv6; + + pkg.tx_oob.s_oob.comp_udp_csum = 1; + } else { + /* Can't do offload of this type of checksum */ + if (skb_checksum_help(skb)) + goto free_sgl_ptr; + } + } + + if (mana_map_skb(skb, apc, &pkg)) + goto free_sgl_ptr; + + skb_queue_tail(&txq->pending_skbs, skb); + + len = skb->len; + net_txq = netdev_get_tx_queue(ndev, txq_idx); + + err = mana_gd_post_work_request(gdma_sq, &pkg.wqe_req, + (struct gdma_posted_wqe_info *)skb->cb); + if (!mana_can_tx(gdma_sq)) { + netif_tx_stop_queue(net_txq); + apc->eth_stats.stop_queue++; + } + + if (err) { + (void)skb_dequeue_tail(&txq->pending_skbs); + netdev_warn(ndev, "Failed to post TX OOB: %d\n", err); + err = NETDEV_TX_BUSY; + goto tx_busy; + } + + err = NETDEV_TX_OK; + atomic_inc(&txq->pending_sends); + + mana_gd_wq_ring_doorbell(gd->gdma_context, gdma_sq); + + /* skb may be freed after mana_gd_post_work_request. Do not use it. 
*/ + skb = null; + + tx_stats = &txq->stats; + u64_stats_update_begin(&tx_stats->syncp); + tx_stats->packets++; + tx_stats->bytes += len; + u64_stats_update_end(&tx_stats->syncp); + +tx_busy: + if (netif_tx_queue_stopped(net_txq) && mana_can_tx(gdma_sq)) { + netif_tx_wake_queue(net_txq); + apc->eth_stats.wake_queue++; + } + + kfree(pkg.sgl_ptr); + return err; + +free_sgl_ptr: + kfree(pkg.sgl_ptr); +tx_drop_count: + ndev->stats.tx_dropped++; +tx_drop: + dev_kfree_skb_any(skb); + return netdev_tx_ok; +} + +static void mana_get_stats64(struct net_device *ndev, + struct rtnl_link_stats64 *st) +{ + struct mana_port_context *apc = netdev_priv(ndev); + unsigned int num_queues = apc->num_queues; + struct mana_stats *stats; + unsigned int start; + u64 packets, bytes; + int q; + + if (!apc->port_is_up) + return; + + netdev_stats_to_stats64(st, &ndev->stats); + + for (q = 0; q < num_queues; q++) { + stats = &apc->rxqs[q]->stats; + + do { + start = u64_stats_fetch_begin_irq(&stats->syncp); + packets = stats->packets; + bytes = stats->bytes; + } while (u64_stats_fetch_retry_irq(&stats->syncp, start)); + + st->rx_packets += packets; + st->rx_bytes += bytes; + } + + for (q = 0; q < num_queues; q++) { + stats = &apc->tx_qp[q].txq.stats; + + do { + start = u64_stats_fetch_begin_irq(&stats->syncp); + packets = stats->packets; + bytes = stats->bytes; + } while (u64_stats_fetch_retry_irq(&stats->syncp, start)); + + st->tx_packets += packets; + st->tx_bytes += bytes; + } +} + +static int mana_get_tx_queue(struct net_device *ndev, struct sk_buff *skb, + int old_q) +{ + struct mana_port_context *apc = netdev_priv(ndev); + u32 hash = skb_get_hash(skb); + struct sock *sk = skb->sk; + int txq; + + txq = apc->indir_table[hash & mana_indirect_table_mask]; + + if (txq != old_q && sk && sk_fullsock(sk) && + rcu_access_pointer(sk->sk_dst_cache)) + sk_tx_queue_set(sk, txq); + + return txq; +} + +static u16 mana_select_queue(struct net_device *ndev, struct sk_buff *skb, + struct net_device 
*sb_dev) +{ + int txq; + + if (ndev->real_num_tx_queues == 1) + return 0; + + txq = sk_tx_queue_get(skb->sk); + + if (txq < 0 || skb->ooo_okay || txq >= ndev->real_num_tx_queues) { + if (skb_rx_queue_recorded(skb)) + txq = skb_get_rx_queue(skb); + else + txq = mana_get_tx_queue(ndev, skb, txq); + } + + return txq; +} + +static const struct net_device_ops mana_devops = { + .ndo_open = mana_open, + .ndo_stop = mana_close, + .ndo_select_queue = mana_select_queue, + .ndo_start_xmit = mana_start_xmit, + .ndo_validate_addr = eth_validate_addr, + .ndo_get_stats64 = mana_get_stats64, +}; + +static void mana_cleanup_port_context(struct mana_port_context *apc) +{ + kfree(apc->rxqs); + apc->rxqs = NULL; +} + +static int mana_init_port_context(struct mana_port_context *apc) +{ + apc->rxqs = kcalloc(apc->num_queues, sizeof(struct mana_rxq *), + GFP_KERNEL); + + return !apc->rxqs ? -ENOMEM : 0; +} + +static int mana_send_request(struct mana_context *ac, void *in_buf, + u32 in_len, void *out_buf, u32 out_len) +{ + struct gdma_context *gc = ac->gdma_dev->gdma_context; + struct gdma_resp_hdr *resp = out_buf; + struct gdma_req_hdr *req = in_buf; + struct device *dev = gc->dev; + static atomic_t activity_id; + int err; + + req->dev_id = gc->mana.dev_id; + req->activity_id = atomic_inc_return(&activity_id); + + err = mana_gd_send_request(gc, in_len, in_buf, out_len, + out_buf); + if (err || resp->status) { + dev_err(dev, "Failed to send mana message: %d, 0x%x\n", + err, resp->status); + return err ? err : -EPROTO; + } + + if (req->dev_id.as_uint32 != resp->dev_id.as_uint32 || + req->activity_id != resp->activity_id) { + dev_err(dev, "Unexpected mana message response: %x,%x,%x,%x\n", + req->dev_id.as_uint32, resp->dev_id.as_uint32, + req->activity_id, resp->activity_id); + return -EPROTO; + } + + return 0; +} + +static int mana_verify_resp_hdr(const struct gdma_resp_hdr *resp_hdr, + const enum mana_command_code expected_code, + const u32 min_size) +{ + if (resp_hdr->response.msg_type != expected_code) + return -EPROTO; + + if (resp_hdr->response.msg_version < GDMA_MESSAGE_V1) + return -EPROTO; + + if (resp_hdr->response.msg_size < min_size) + return -EPROTO; + + return 0; +} + +static int mana_query_device_cfg(struct mana_context *ac, u32 proto_major_ver, + u32 proto_minor_ver, u32 proto_micro_ver, + u16 *max_num_vports) +{ + struct gdma_context *gc = ac->gdma_dev->gdma_context; + struct mana_query_device_cfg_resp resp = {}; + struct mana_query_device_cfg_req req = {}; + struct device *dev = gc->dev; + int err = 0; + + mana_gd_init_req_hdr(&req.hdr, MANA_QUERY_DEV_CONFIG, + sizeof(req), sizeof(resp)); + req.proto_major_ver = proto_major_ver; + req.proto_minor_ver = proto_minor_ver; + req.proto_micro_ver = proto_micro_ver; + + err = mana_send_request(ac, &req, sizeof(req), &resp, sizeof(resp)); + if (err) { + dev_err(dev, "Failed to query config: %d", err); + return err; + } + + err = mana_verify_resp_hdr(&resp.hdr, MANA_QUERY_DEV_CONFIG, + sizeof(resp)); + if (err || resp.hdr.status) { + dev_err(dev, "Invalid query result: %d, 0x%x\n", err, + resp.hdr.status); + if (!err) + err = -EPROTO; + return err; + } + + *max_num_vports = resp.max_num_vports; + + return 0; +} + +static int mana_query_vport_cfg(struct mana_port_context *apc, u32 vport_index, + u32 *max_sq, u32 *max_rq, u32 *num_indir_entry) +{ + struct mana_query_vport_cfg_resp resp = {}; + struct mana_query_vport_cfg_req req = {}; + int err; + + mana_gd_init_req_hdr(&req.hdr, MANA_QUERY_VPORT_CONFIG, + 
sizeof(req), sizeof(resp)); + + req.vport_index = vport_index; + + err = mana_send_request(apc->ac, &req, sizeof(req), &resp, + sizeof(resp)); + if (err) + return err; + + err = mana_verify_resp_hdr(&resp.hdr, mana_query_vport_config, + sizeof(resp)); + if (err) + return err; + + if (resp.hdr.status) + return -eproto; + + *max_sq = resp.max_num_sq; + *max_rq = resp.max_num_rq; + *num_indir_entry = resp.num_indirection_ent; + + apc->port_handle = resp.vport; + ether_addr_copy(apc->mac_addr, resp.mac_addr); + + return 0; +} + +static int mana_cfg_vport(struct mana_port_context *apc, u32 protection_dom_id, + u32 doorbell_pg_id) +{ + struct mana_config_vport_resp resp = {}; + struct mana_config_vport_req req = {}; + int err; + + mana_gd_init_req_hdr(&req.hdr, mana_config_vport_tx, + sizeof(req), sizeof(resp)); + req.vport = apc->port_handle; + req.pdid = protection_dom_id; + req.doorbell_pageid = doorbell_pg_id; + + err = mana_send_request(apc->ac, &req, sizeof(req), &resp, + sizeof(resp)); + if (err) { + netdev_err(apc->ndev, "failed to configure vport: %d ", err); + goto out; + } + + err = mana_verify_resp_hdr(&resp.hdr, mana_config_vport_tx, + sizeof(resp)); + if (err || resp.hdr.status) { + netdev_err(apc->ndev, "failed to configure vport: %d, 0x%x ", + err, resp.hdr.status); + if (!err) + err = -eproto; + + goto out; + } + + apc->tx_shortform_allowed = resp.short_form_allowed; + apc->tx_vp_offset = resp.tx_vport_offset; +out: + return err; +} + +static int mana_cfg_vport_steering(struct mana_port_context *apc, + enum tri_state rx, + bool update_default_rxobj, bool update_key, + bool update_tab) +{ + u16 num_entries = mana_indirect_table_size; + struct mana_cfg_rx_steer_req *req = null; + struct mana_cfg_rx_steer_resp resp = {}; + struct net_device *ndev = apc->ndev; + mana_handle_t *req_indir_tab; + u32 req_buf_size; + int err; + + req_buf_size = sizeof(*req) + sizeof(mana_handle_t) * num_entries; + req = kzalloc(req_buf_size, gfp_kernel); + if (!req) + return 
-enomem; + + mana_gd_init_req_hdr(&req->hdr, mana_config_vport_rx, req_buf_size, + sizeof(resp)); + + req->vport = apc->port_handle; + req->num_indir_entries = num_entries; + req->indir_tab_offset = sizeof(*req); + req->rx_enable = rx; + req->rss_enable = apc->rss_state; + req->update_default_rxobj = update_default_rxobj; + req->update_hashkey = update_key; + req->update_indir_tab = update_tab; + req->default_rxobj = apc->default_rxobj; + + if (update_key) + memcpy(&req->hashkey, apc->hashkey, mana_hash_key_size); + + if (update_tab) { + req_indir_tab = (mana_handle_t *)(req + 1); + memcpy(req_indir_tab, apc->rxobj_table, + req->num_indir_entries * sizeof(mana_handle_t)); + } + + err = mana_send_request(apc->ac, req, req_buf_size, &resp, + sizeof(resp)); + if (err) { + netdev_err(ndev, "failed to configure vport rx: %d ", err); + goto out; + } + + err = mana_verify_resp_hdr(&resp.hdr, mana_config_vport_rx, + sizeof(resp)); + if (err) { + netdev_err(ndev, "vport rx configuration failed: %d ", err); + goto out; + } + + if (resp.hdr.status) { + netdev_err(ndev, "vport rx configuration failed: 0x%x ", + resp.hdr.status); + err = -eproto; + } +out: + kfree(req); + return err; +} + +static int mana_create_wq_obj(struct mana_port_context *apc, + mana_handle_t vport, + u32 wq_type, struct mana_obj_spec *wq_spec, + struct mana_obj_spec *cq_spec, + mana_handle_t *wq_obj) +{ + struct mana_create_wqobj_resp resp = {}; + struct mana_create_wqobj_req req = {}; + struct net_device *ndev = apc->ndev; + int err; + + mana_gd_init_req_hdr(&req.hdr, mana_create_wq_obj, + sizeof(req), sizeof(resp)); + req.vport = vport; + req.wq_type = wq_type; + req.wq_gdma_region = wq_spec->gdma_region; + req.cq_gdma_region = cq_spec->gdma_region; + req.wq_size = wq_spec->queue_size; + req.cq_size = cq_spec->queue_size; + req.cq_moderation_ctx_id = cq_spec->modr_ctx_id; + req.cq_parent_qid = cq_spec->attached_eq; + + err = mana_send_request(apc->ac, &req, sizeof(req), &resp, + sizeof(resp)); + if 
(err) { + netdev_err(ndev, "failed to create wq object: %d ", err); + goto out; + } + + err = mana_verify_resp_hdr(&resp.hdr, mana_create_wq_obj, + sizeof(resp)); + if (err || resp.hdr.status) { + netdev_err(ndev, "failed to create wq object: %d, 0x%x ", err, + resp.hdr.status); + if (!err) + err = -eproto; + goto out; + } + + if (resp.wq_obj == invalid_mana_handle) { + netdev_err(ndev, "got an invalid wq object handle "); + err = -eproto; + goto out; + } + + *wq_obj = resp.wq_obj; + wq_spec->queue_index = resp.wq_id; + cq_spec->queue_index = resp.cq_id; + + return 0; +out: + return err; +} + +static void mana_destroy_wq_obj(struct mana_port_context *apc, u32 wq_type, + mana_handle_t wq_obj) +{ + struct mana_destroy_wqobj_resp resp = {}; + struct mana_destroy_wqobj_req req = {}; + struct net_device *ndev = apc->ndev; + int err; + + mana_gd_init_req_hdr(&req.hdr, mana_destroy_wq_obj, + sizeof(req), sizeof(resp)); + req.wq_type = wq_type; + req.wq_obj_handle = wq_obj; + + err = mana_send_request(apc->ac, &req, sizeof(req), &resp, + sizeof(resp)); + if (err) { + netdev_err(ndev, "failed to destroy wq object: %d ", err); + return; + } + + err = mana_verify_resp_hdr(&resp.hdr, mana_destroy_wq_obj, + sizeof(resp)); + if (err || resp.hdr.status) + netdev_err(ndev, "failed to destroy wq object: %d, 0x%x ", err, + resp.hdr.status); +} + +static void mana_init_cqe_poll_buf(struct gdma_comp *cqe_poll_buf) +{ + int i; + + for (i = 0; i < cqe_polling_buffer; i++) + memset(&cqe_poll_buf[i], 0, sizeof(struct gdma_comp)); +} + +static void mana_destroy_eq(struct gdma_context *gc, + struct mana_port_context *apc) +{ + struct gdma_queue *eq; + int i; + + if (!apc->eqs) + return; + + for (i = 0; i < apc->num_queues; i++) { + eq = apc->eqs[i].eq; + if (!eq) + continue; + + mana_gd_destroy_queue(gc, eq); + } + + kfree(apc->eqs); + apc->eqs = null; +} + +static int mana_create_eq(struct mana_port_context *apc) +{ + struct gdma_dev *gd = apc->ac->gdma_dev; + struct gdma_queue_spec spec = 
{}; + int err; + int i; + + apc->eqs = kcalloc(apc->num_queues, sizeof(struct mana_eq), + gfp_kernel); + if (!apc->eqs) + return -enomem; + + spec.type = gdma_eq; + spec.monitor_avl_buf = false; + spec.queue_size = eq_size; + spec.eq.callback = null; + spec.eq.context = apc->eqs; + spec.eq.log2_throttle_limit = log2_eq_throttle; + spec.eq.ndev = apc->ndev; + + for (i = 0; i < apc->num_queues; i++) { + mana_init_cqe_poll_buf(apc->eqs[i].cqe_poll); + + err = mana_gd_create_mana_eq(gd, &spec, &apc->eqs[i].eq); + if (err) + goto out; + } + + return 0; +out: + mana_destroy_eq(gd->gdma_context, apc); + return err; +} + +static int mana_move_wq_tail(struct gdma_queue *wq, u32 num_units) +{ + u32 used_space_old; + u32 used_space_new; + + used_space_old = wq->head - wq->tail; + used_space_new = wq->head - (wq->tail + num_units); + + if (warn_on_once(used_space_new > used_space_old)) + return -erange; + + wq->tail += num_units; + return 0; +} + +static void mana_unmap_skb(struct sk_buff *skb, struct mana_port_context *apc) +{ + struct mana_skb_head *ash = (struct mana_skb_head *)skb->head; + struct gdma_context *gc = apc->ac->gdma_dev->gdma_context; + struct device *dev = gc->dev; + int i; + + dma_unmap_single(dev, ash->dma_handle[0], ash->size[0], dma_to_device); + + for (i = 1; i < skb_shinfo(skb)->nr_frags + 1; i++) + dma_unmap_page(dev, ash->dma_handle[i], ash->size[i], + dma_to_device); +} + +static void mana_poll_tx_cq(struct mana_cq *cq) +{ + struct gdma_queue *gdma_eq = cq->gdma_cq->cq.parent; + struct gdma_comp *completions = cq->gdma_comp_buf; + struct gdma_posted_wqe_info *wqe_info; + unsigned int pkt_transmitted = 0; + unsigned int wqe_unit_cnt = 0; + struct mana_txq *txq = cq->txq; + struct mana_port_context *apc; + struct netdev_queue *net_txq; + struct gdma_queue *gdma_wq; + unsigned int avail_space; + struct net_device *ndev; + struct sk_buff *skb; + bool txq_stopped; + int comp_read; + int i; + + ndev = txq->ndev; + apc = netdev_priv(ndev); + + comp_read = 
mana_gd_poll_cq(cq->gdma_cq, completions, + cqe_polling_buffer); + + for (i = 0; i < comp_read; i++) { + struct mana_tx_comp_oob *cqe_oob; + + if (warn_on_once(!completions[i].is_sq)) + return; + + cqe_oob = (struct mana_tx_comp_oob *)completions[i].cqe_data; + if (warn_on_once(cqe_oob->cqe_hdr.client_type != + mana_cqe_completion)) + return; + + switch (cqe_oob->cqe_hdr.cqe_type) { + case cqe_tx_okay: + break; + + case cqe_tx_sa_drop: + case cqe_tx_mtu_drop: + case cqe_tx_invalid_oob: + case cqe_tx_invalid_eth_type: + case cqe_tx_hdr_processing_error: + case cqe_tx_vf_disabled: + case cqe_tx_vport_idx_out_of_range: + case cqe_tx_vport_disabled: + case cqe_tx_vlan_tagging_violation: + warn_once(1, "tx: cqe error %d: ignored. ", + cqe_oob->cqe_hdr.cqe_type); + break; + + default: + /* if the cqe type is unexpected, log an error, assert, + * and go through the error path. + */ + warn_once(1, "tx: unexpected cqe type %d: hw bug? ", + cqe_oob->cqe_hdr.cqe_type); + return; + } + + if (warn_on_once(txq->gdma_txq_id != completions[i].wq_num)) + return; + + skb = skb_dequeue(&txq->pending_skbs); + if (warn_on_once(!skb)) + return; + + wqe_info = (struct gdma_posted_wqe_info *)skb->cb; + wqe_unit_cnt += wqe_info->wqe_size_in_bu; + + mana_unmap_skb(skb, apc); + + napi_consume_skb(skb, gdma_eq->eq.budget); + + pkt_transmitted++; + } + + if (warn_on_once(wqe_unit_cnt == 0)) + return; + + mana_move_wq_tail(txq->gdma_sq, wqe_unit_cnt); + + gdma_wq = txq->gdma_sq; + avail_space = mana_gd_wq_avail_space(gdma_wq); + + /* ensure tail updated before checking q stop */ + smp_mb(); + + net_txq = txq->net_txq; + txq_stopped = netif_tx_queue_stopped(net_txq); + + /* ensure checking txq_stopped before apc->port_is_up. 
*/ + smp_rmb(); + + if (txq_stopped && apc->port_is_up && avail_space >= max_tx_wqe_size) { + netif_tx_wake_queue(net_txq); + apc->eth_stats.wake_queue++; + } + + if (atomic_sub_return(pkt_transmitted, &txq->pending_sends) < 0) + warn_on_once(1); +} + +static void mana_post_pkt_rxq(struct mana_rxq *rxq) +{ + struct mana_recv_buf_oob *recv_buf_oob; + u32 curr_index; + int err; + + curr_index = rxq->buf_index++; + if (rxq->buf_index == rxq->num_rx_buf) + rxq->buf_index = 0; + + recv_buf_oob = &rxq->rx_oobs[curr_index]; + + err = mana_gd_post_and_ring(rxq->gdma_rq, &recv_buf_oob->wqe_req, + &recv_buf_oob->wqe_inf); + if (warn_on_once(err)) + return; + + warn_on_once(recv_buf_oob->wqe_inf.wqe_size_in_bu != 1); +} + +static void mana_rx_skb(void *buf_va, struct mana_rxcomp_oob *cqe, + struct mana_rxq *rxq) +{ + struct mana_stats *rx_stats = &rxq->stats; + struct net_device *ndev = rxq->ndev; + uint pkt_len = cqe->ppi[0].pkt_len; + struct mana_port_context *apc; + u16 rxq_idx = rxq->rxq_idx; + struct napi_struct *napi; + struct gdma_queue *eq; + struct sk_buff *skb; + u32 hash_value; + + apc = netdev_priv(ndev); + eq = apc->eqs[rxq_idx].eq; + eq->eq.work_done++; + napi = &eq->eq.napi; + + if (!buf_va) { + ++ndev->stats.rx_dropped; + return; + } + + skb = build_skb(buf_va, page_size); + + if (!skb) { + free_page((unsigned long)buf_va); + ++ndev->stats.rx_dropped; + return; + } + + skb_put(skb, pkt_len); + skb->dev = napi->dev; + + skb->protocol = eth_type_trans(skb, ndev); + skb_checksum_none_assert(skb); + skb_record_rx_queue(skb, rxq_idx); + + if ((ndev->features & netif_f_rxcsum) && cqe->rx_iphdr_csum_succeed) { + if (cqe->rx_tcp_csum_succeed || cqe->rx_udp_csum_succeed) + skb->ip_summed = checksum_unnecessary; + } + + if (cqe->rx_hashtype != 0 && (ndev->features & netif_f_rxhash)) { + hash_value = cqe->ppi[0].pkt_hash; + + if (cqe->rx_hashtype & mana_hash_l4) + skb_set_hash(skb, hash_value, pkt_hash_type_l4); + else + skb_set_hash(skb, hash_value, pkt_hash_type_l3); + 
} + + napi_gro_receive(napi, skb); + + u64_stats_update_begin(&rx_stats->syncp); + rx_stats->packets++; + rx_stats->bytes += pkt_len; + u64_stats_update_end(&rx_stats->syncp); +} + +static void mana_process_rx_cqe(struct mana_rxq *rxq, struct mana_cq *cq, + struct gdma_comp *cqe) +{ + struct mana_rxcomp_oob *oob = (struct mana_rxcomp_oob *)cqe->cqe_data; + struct gdma_context *gc = rxq->gdma_rq->gdma_dev->gdma_context; + struct net_device *ndev = rxq->ndev; + struct mana_recv_buf_oob *rxbuf_oob; + struct device *dev = gc->dev; + void *new_buf, *old_buf; + struct page *new_page; + u32 curr, pktlen; + dma_addr_t da; + + switch (oob->cqe_hdr.cqe_type) { + case cqe_rx_okay: + break; + + case cqe_rx_truncated: + netdev_err(ndev, "dropped a truncated packet "); + return; + + case cqe_rx_coalesced_4: + netdev_err(ndev, "rx coalescing is unsupported "); + return; + + case cqe_rx_object_fence: + netdev_err(ndev, "rx fencing is unsupported "); + return; + + default: + netdev_err(ndev, "unknown rx cqe type = %d ", + oob->cqe_hdr.cqe_type); + return; + } + + if (oob->cqe_hdr.cqe_type != cqe_rx_okay) + return; + + pktlen = oob->ppi[0].pkt_len; + + if (pktlen == 0) { + /* data packets should never have packetlength of zero */ + netdev_err(ndev, "rx pkt len=0, rq=%u, cq=%u, rxobj=0x%llx ", + rxq->gdma_id, cq->gdma_id, rxq->rxobj); + return; + } + + curr = rxq->buf_index; + rxbuf_oob = &rxq->rx_oobs[curr]; + warn_on_once(rxbuf_oob->wqe_inf.wqe_size_in_bu != 1); + + new_page = alloc_page(gfp_atomic); + + if (new_page) { + da = dma_map_page(dev, new_page, 0, rxq->datasize, + dma_from_device); + + if (dma_mapping_error(dev, da)) { + __free_page(new_page); + new_page = null; + } + } + + new_buf = new_page ? 
page_to_virt(new_page) : null; + + if (new_buf) { + dma_unmap_page(dev, rxbuf_oob->buf_dma_addr, rxq->datasize, + dma_from_device); + + old_buf = rxbuf_oob->buf_va; + + /* refresh the rxbuf_oob with the new page */ + rxbuf_oob->buf_va = new_buf; + rxbuf_oob->buf_dma_addr = da; + rxbuf_oob->sgl[0].address = rxbuf_oob->buf_dma_addr; + } else { + old_buf = null; /* drop the packet if no memory */ + } + + mana_rx_skb(old_buf, oob, rxq); + + mana_move_wq_tail(rxq->gdma_rq, rxbuf_oob->wqe_inf.wqe_size_in_bu); + + mana_post_pkt_rxq(rxq); +} + +static void mana_poll_rx_cq(struct mana_cq *cq) +{ + struct gdma_comp *comp = cq->gdma_comp_buf; + u32 comp_read, i; + + comp_read = mana_gd_poll_cq(cq->gdma_cq, comp, cqe_polling_buffer); + warn_on_once(comp_read > cqe_polling_buffer); + + for (i = 0; i < comp_read; i++) { + if (warn_on_once(comp[i].is_sq)) + return; + + /* verify recv cqe references the right rxq */ + if (warn_on_once(comp[i].wq_num != cq->rxq->gdma_id)) + return; + + mana_process_rx_cqe(cq->rxq, cq, &comp[i]); + } +} + +static void mana_cq_handler(void *context, struct gdma_queue *gdma_queue) +{ + struct mana_cq *cq = context; + + warn_on_once(cq->gdma_cq != gdma_queue); + + if (cq->type == mana_cq_type_rx) + mana_poll_rx_cq(cq); + else + mana_poll_tx_cq(cq); + + mana_gd_arm_cq(gdma_queue); +} + +static void mana_deinit_cq(struct mana_port_context *apc, struct mana_cq *cq) +{ + struct gdma_dev *gd = apc->ac->gdma_dev; + + if (!cq->gdma_cq) + return; + + mana_gd_destroy_queue(gd->gdma_context, cq->gdma_cq); +} + +static void mana_deinit_txq(struct mana_port_context *apc, struct mana_txq *txq) +{ + struct gdma_dev *gd = apc->ac->gdma_dev; + + if (!txq->gdma_sq) + return; + + mana_gd_destroy_queue(gd->gdma_context, txq->gdma_sq); +} + +static void mana_destroy_txq(struct mana_port_context *apc) +{ + int i; + + if (!apc->tx_qp) + return; + + for (i = 0; i < apc->num_queues; i++) { + mana_destroy_wq_obj(apc, gdma_sq, apc->tx_qp[i].tx_object); + + mana_deinit_cq(apc, 
&apc->tx_qp[i].tx_cq); + + mana_deinit_txq(apc, &apc->tx_qp[i].txq); + } + + kfree(apc->tx_qp); + apc->tx_qp = null; +} + +static int mana_create_txq(struct mana_port_context *apc, + struct net_device *net) +{ + struct gdma_dev *gd = apc->ac->gdma_dev; + struct mana_obj_spec wq_spec; + struct mana_obj_spec cq_spec; + struct gdma_queue_spec spec; + struct gdma_context *gc; + struct mana_txq *txq; + struct mana_cq *cq; + u32 txq_size; + u32 cq_size; + int err; + int i; + + apc->tx_qp = kcalloc(apc->num_queues, sizeof(struct mana_tx_qp), + gfp_kernel); + if (!apc->tx_qp) + return -enomem; + + /* the minimum size of the wqe is 32 bytes, hence + * max_send_buffers_per_queue represents the maximum number of wqes + * the sq can store. this value is then used to size other queues + * to prevent overflow. + */ + txq_size = max_send_buffers_per_queue * 32; + build_bug_on(!page_aligned(txq_size)); + + cq_size = max_send_buffers_per_queue * comp_entry_size; + cq_size = page_align(cq_size); + + gc = gd->gdma_context; + + for (i = 0; i < apc->num_queues; i++) { + apc->tx_qp[i].tx_object = invalid_mana_handle; + + /* create sq */ + txq = &apc->tx_qp[i].txq; + + u64_stats_init(&txq->stats.syncp); + txq->ndev = net; + txq->net_txq = netdev_get_tx_queue(net, i); + txq->vp_offset = apc->tx_vp_offset; + skb_queue_head_init(&txq->pending_skbs); + + memset(&spec, 0, sizeof(spec)); + spec.type = gdma_sq; + spec.monitor_avl_buf = true; + spec.queue_size = txq_size; + err = mana_gd_create_mana_wq_cq(gd, &spec, &txq->gdma_sq); + if (err) + goto out; + + /* create sq's cq */ + cq = &apc->tx_qp[i].tx_cq; + cq->gdma_comp_buf = apc->eqs[i].cqe_poll; + cq->type = mana_cq_type_tx; + + cq->txq = txq; + + memset(&spec, 0, sizeof(spec)); + spec.type = gdma_cq; + spec.monitor_avl_buf = false; + spec.queue_size = cq_size; + spec.cq.callback = mana_cq_handler; + spec.cq.parent_eq = apc->eqs[i].eq; + spec.cq.context = cq; + err = mana_gd_create_mana_wq_cq(gd, &spec, &cq->gdma_cq); + if (err) + goto out; 
+ + memset(&wq_spec, 0, sizeof(wq_spec)); + memset(&cq_spec, 0, sizeof(cq_spec)); + + wq_spec.gdma_region = txq->gdma_sq->mem_info.gdma_region; + wq_spec.queue_size = txq->gdma_sq->queue_size; + + cq_spec.gdma_region = cq->gdma_cq->mem_info.gdma_region; + cq_spec.queue_size = cq->gdma_cq->queue_size; + cq_spec.modr_ctx_id = 0; + cq_spec.attached_eq = cq->gdma_cq->cq.parent->id; + + err = mana_create_wq_obj(apc, apc->port_handle, gdma_sq, + &wq_spec, &cq_spec, + &apc->tx_qp[i].tx_object); + + if (err) + goto out; + + txq->gdma_sq->id = wq_spec.queue_index; + cq->gdma_cq->id = cq_spec.queue_index; + + txq->gdma_sq->mem_info.gdma_region = gdma_invalid_dma_region; + cq->gdma_cq->mem_info.gdma_region = gdma_invalid_dma_region; + + txq->gdma_txq_id = txq->gdma_sq->id; + + cq->gdma_id = cq->gdma_cq->id; + + if (warn_on(cq->gdma_id >= gc->max_num_cqs)) + return -einval; + + gc->cq_table[cq->gdma_id] = cq->gdma_cq; + + mana_gd_arm_cq(cq->gdma_cq); + } + + return 0; +out: + mana_destroy_txq(apc); + return err; +} + +static void mana_napi_sync_for_rx(struct mana_rxq *rxq) +{ + struct net_device *ndev = rxq->ndev; + struct mana_port_context *apc; + u16 rxq_idx = rxq->rxq_idx; + struct napi_struct *napi; + struct gdma_queue *eq; + + apc = netdev_priv(ndev); + eq = apc->eqs[rxq_idx].eq; + napi = &eq->eq.napi; + + napi_synchronize(napi); +} + +static void mana_destroy_rxq(struct mana_port_context *apc, + struct mana_rxq *rxq, bool validate_state) + +{ + struct gdma_context *gc = apc->ac->gdma_dev->gdma_context; + struct mana_recv_buf_oob *rx_oob; + struct device *dev = gc->dev; + int i; + + if (!rxq) + return; + + if (validate_state) + mana_napi_sync_for_rx(rxq); + + mana_destroy_wq_obj(apc, gdma_rq, rxq->rxobj); + + mana_deinit_cq(apc, &rxq->rx_cq); + + for (i = 0; i < rxq->num_rx_buf; i++) { + rx_oob = &rxq->rx_oobs[i]; + + if (!rx_oob->buf_va) + continue; + + dma_unmap_page(dev, rx_oob->buf_dma_addr, rxq->datasize, + dma_from_device); + + free_page((unsigned 
long)rx_oob->buf_va); + rx_oob->buf_va = null; + } + + if (rxq->gdma_rq) + mana_gd_destroy_queue(gc, rxq->gdma_rq); + + kfree(rxq); +} + +#define mana_wqe_header_size 16 +#define mana_wqe_sge_size 16 + +static int mana_alloc_rx_wqe(struct mana_port_context *apc, + struct mana_rxq *rxq, u32 *rxq_size, u32 *cq_size) +{ + struct gdma_context *gc = apc->ac->gdma_dev->gdma_context; + struct mana_recv_buf_oob *rx_oob; + struct device *dev = gc->dev; + struct page *page; + dma_addr_t da; + u32 buf_idx; + + warn_on(rxq->datasize == 0 || rxq->datasize > page_size); + + *rxq_size = 0; + *cq_size = 0; + + for (buf_idx = 0; buf_idx < rxq->num_rx_buf; buf_idx++) { + rx_oob = &rxq->rx_oobs[buf_idx]; + memset(rx_oob, 0, sizeof(*rx_oob)); + + page = alloc_page(gfp_kernel); + if (!page) + return -enomem; + + da = dma_map_page(dev, page, 0, rxq->datasize, dma_from_device); + + if (dma_mapping_error(dev, da)) { + __free_page(page); + return -enomem; + } + + rx_oob->buf_va = page_to_virt(page); + rx_oob->buf_dma_addr = da; + + rx_oob->num_sge = 1; + rx_oob->sgl[0].address = rx_oob->buf_dma_addr; + rx_oob->sgl[0].size = rxq->datasize; + rx_oob->sgl[0].mem_key = apc->ac->gdma_dev->gpa_mkey; + + rx_oob->wqe_req.sgl = rx_oob->sgl; + rx_oob->wqe_req.num_sge = rx_oob->num_sge; + rx_oob->wqe_req.inline_oob_size = 0; + rx_oob->wqe_req.inline_oob_data = null; + rx_oob->wqe_req.flags = 0; + rx_oob->wqe_req.client_data_unit = 0; + + *rxq_size += align(mana_wqe_header_size + + mana_wqe_sge_size * rx_oob->num_sge, 32); + *cq_size += comp_entry_size; + } + + return 0; +} + +static int mana_push_wqe(struct mana_rxq *rxq) +{ + struct mana_recv_buf_oob *rx_oob; + u32 buf_idx; + int err; + + for (buf_idx = 0; buf_idx < rxq->num_rx_buf; buf_idx++) { + rx_oob = &rxq->rx_oobs[buf_idx]; + + err = mana_gd_post_and_ring(rxq->gdma_rq, &rx_oob->wqe_req, + &rx_oob->wqe_inf); + if (err) + return -enospc; + } + + return 0; +} + +static struct mana_rxq *mana_create_rxq(struct mana_port_context *apc, + u32 rxq_idx, 
struct mana_eq *eq, + struct net_device *ndev) +{ + struct gdma_dev *gd = apc->ac->gdma_dev; + struct mana_obj_spec wq_spec; + struct mana_obj_spec cq_spec; + struct gdma_queue_spec spec; + struct mana_cq *cq = null; + struct gdma_context *gc; + u32 cq_size, rq_size; + struct mana_rxq *rxq; + int err; + + gc = gd->gdma_context; + + rxq = kzalloc(sizeof(*rxq) + + rx_buffers_per_queue * sizeof(struct mana_recv_buf_oob), + gfp_kernel); + if (!rxq) + return null; + + rxq->ndev = ndev; + rxq->num_rx_buf = rx_buffers_per_queue; + rxq->rxq_idx = rxq_idx; + rxq->datasize = align(max_frame_size, 64); + rxq->rxobj = invalid_mana_handle; + + err = mana_alloc_rx_wqe(apc, rxq, &rq_size, &cq_size); + if (err) + goto out; + + rq_size = page_align(rq_size); + cq_size = page_align(cq_size); + + /* create rq */ + memset(&spec, 0, sizeof(spec)); + spec.type = gdma_rq; + spec.monitor_avl_buf = true; + spec.queue_size = rq_size; + err = mana_gd_create_mana_wq_cq(gd, &spec, &rxq->gdma_rq); + if (err) + goto out; + + /* create rq's cq */ + cq = &rxq->rx_cq; + cq->gdma_comp_buf = eq->cqe_poll; + cq->type = mana_cq_type_rx; + cq->rxq = rxq; + + memset(&spec, 0, sizeof(spec)); + spec.type = gdma_cq; + spec.monitor_avl_buf = false; + spec.queue_size = cq_size; + spec.cq.callback = mana_cq_handler; + spec.cq.parent_eq = eq->eq; + spec.cq.context = cq; + err = mana_gd_create_mana_wq_cq(gd, &spec, &cq->gdma_cq); + if (err) + goto out; + + memset(&wq_spec, 0, sizeof(wq_spec)); + memset(&cq_spec, 0, sizeof(cq_spec)); + wq_spec.gdma_region = rxq->gdma_rq->mem_info.gdma_region; + wq_spec.queue_size = rxq->gdma_rq->queue_size; + + cq_spec.gdma_region = cq->gdma_cq->mem_info.gdma_region; + cq_spec.queue_size = cq->gdma_cq->queue_size; + cq_spec.modr_ctx_id = 0; + cq_spec.attached_eq = cq->gdma_cq->cq.parent->id; + + err = mana_create_wq_obj(apc, apc->port_handle, gdma_rq, + &wq_spec, &cq_spec, &rxq->rxobj); + if (err) + goto out; + + rxq->gdma_rq->id = wq_spec.queue_index; + cq->gdma_cq->id = 
cq_spec.queue_index; + + rxq->gdma_rq->mem_info.gdma_region = gdma_invalid_dma_region; + cq->gdma_cq->mem_info.gdma_region = gdma_invalid_dma_region; + + rxq->gdma_id = rxq->gdma_rq->id; + cq->gdma_id = cq->gdma_cq->id; + + err = mana_push_wqe(rxq); + if (err) + goto out; + + if (cq->gdma_id >= gc->max_num_cqs) + goto out; + + gc->cq_table[cq->gdma_id] = cq->gdma_cq; + + mana_gd_arm_cq(cq->gdma_cq); +out: + if (!err) + return rxq; + + netdev_err(ndev, "failed to create rxq: err = %d ", err); + + mana_destroy_rxq(apc, rxq, false); + + if (cq) + mana_deinit_cq(apc, cq); + + return null; +} + +static int mana_add_rx_queues(struct mana_port_context *apc, + struct net_device *ndev) +{ + struct mana_rxq *rxq; + int err = 0; + int i; + + for (i = 0; i < apc->num_queues; i++) { + rxq = mana_create_rxq(apc, i, &apc->eqs[i], ndev); + if (!rxq) { + err = -enomem; + goto out; + } + + u64_stats_init(&rxq->stats.syncp); + + apc->rxqs[i] = rxq; + } + + apc->default_rxobj = apc->rxqs[0]->rxobj; +out: + return err; +} + +static void mana_destroy_vport(struct mana_port_context *apc) +{ + struct mana_rxq *rxq; + u32 rxq_idx; + + for (rxq_idx = 0; rxq_idx < apc->num_queues; rxq_idx++) { + rxq = apc->rxqs[rxq_idx]; + if (!rxq) + continue; + + mana_destroy_rxq(apc, rxq, true); + apc->rxqs[rxq_idx] = null; + } + + mana_destroy_txq(apc); +} + +static int mana_create_vport(struct mana_port_context *apc, + struct net_device *net) +{ + struct gdma_dev *gd = apc->ac->gdma_dev; + int err; + + apc->default_rxobj = invalid_mana_handle; + + err = mana_cfg_vport(apc, gd->pdid, gd->doorbell); + if (err) + return err; + + return mana_create_txq(apc, net); +} + +static void mana_rss_table_init(struct mana_port_context *apc) +{ + int i; + + for (i = 0; i < mana_indirect_table_size; i++) + apc->indir_table[i] = + ethtool_rxfh_indir_default(i, apc->num_queues); +} + +int mana_config_rss(struct mana_port_context *apc, enum tri_state rx, + bool update_hash, bool update_tab) +{ + u32 queue_idx; + int i; + 
+ if (update_tab) { + for (i = 0; i < mana_indirect_table_size; i++) { + queue_idx = apc->indir_table[i]; + apc->rxobj_table[i] = apc->rxqs[queue_idx]->rxobj; + } + } + + return mana_cfg_vport_steering(apc, rx, true, update_hash, update_tab); +} + +static int mana_init_port(struct net_device *ndev) +{ + struct mana_port_context *apc = netdev_priv(ndev); + u32 max_txq, max_rxq, max_queues; + int port_idx = apc->port_idx; + u32 num_indirect_entries; + int err; + + err = mana_init_port_context(apc); + if (err) + return err; + + err = mana_query_vport_cfg(apc, port_idx, &max_txq, &max_rxq, + &num_indirect_entries); + if (err) { + netdev_err(ndev, "failed to query info for vport 0 "); + goto reset_apc; + } + + max_queues = min_t(u32, max_txq, max_rxq); + if (apc->max_queues > max_queues) + apc->max_queues = max_queues; + + if (apc->num_queues > apc->max_queues) + apc->num_queues = apc->max_queues; + + ether_addr_copy(ndev->dev_addr, apc->mac_addr); + + return 0; + +reset_apc: + kfree(apc->rxqs); + apc->rxqs = null; + return err; +} + +int mana_alloc_queues(struct net_device *ndev) +{ + struct mana_port_context *apc = netdev_priv(ndev); + struct gdma_dev *gd = apc->ac->gdma_dev; + int err; + + err = mana_create_eq(apc); + if (err) + return err; + + err = mana_create_vport(apc, ndev); + if (err) + goto destroy_eq; + + err = netif_set_real_num_tx_queues(ndev, apc->num_queues); + if (err) + goto destroy_vport; + + err = mana_add_rx_queues(apc, ndev); + if (err) + goto destroy_vport; + + apc->rss_state = apc->num_queues > 1 ? 
tri_state_true : tri_state_false; + + err = netif_set_real_num_rx_queues(ndev, apc->num_queues); + if (err) + goto destroy_vport; + + mana_rss_table_init(apc); + + err = mana_config_rss(apc, tri_state_true, true, true); + if (err) + goto destroy_vport; + + return 0; + +destroy_vport: + mana_destroy_vport(apc); +destroy_eq: + mana_destroy_eq(gd->gdma_context, apc); + return err; +} + +int mana_attach(struct net_device *ndev) +{ + struct mana_port_context *apc = netdev_priv(ndev); + int err; + + assert_rtnl(); + + err = mana_init_port(ndev); + if (err) + return err; + + err = mana_alloc_queues(ndev); + if (err) { + kfree(apc->rxqs); + apc->rxqs = null; + return err; + } + + netif_device_attach(ndev); + + apc->port_is_up = apc->port_st_save; + + /* ensure port state updated before txq state */ + smp_wmb(); + + if (apc->port_is_up) { + netif_carrier_on(ndev); + netif_tx_wake_all_queues(ndev); + } + + return 0; +} + +static int mana_dealloc_queues(struct net_device *ndev) +{ + struct mana_port_context *apc = netdev_priv(ndev); + struct mana_txq *txq; + int i, err; + + if (apc->port_is_up) + return -einval; + + /* no packet can be transmitted now since apc->port_is_up is false. + * there is still a tiny chance that mana_poll_tx_cq() can re-enable + * a txq because it may not timely see apc->port_is_up being cleared + * to false, but it doesn't matter since mana_start_xmit() drops any + * new packets due to apc->port_is_up being false. + * + * drain all the in-flight tx packets + */ + for (i = 0; i < apc->num_queues; i++) { + txq = &apc->tx_qp[i].txq; + + while (atomic_read(&txq->pending_sends) > 0) + usleep_range(1000, 2000); + } + + /* we're 100% sure the queues can no longer be woken up, because + * we're sure now mana_poll_tx_cq() can't be running. 
+ */ + + apc->rss_state = tri_state_false; + err = mana_config_rss(apc, tri_state_false, false, false); + if (err) { + netdev_err(ndev, "failed to disable vport: %d ", err); + return err; + } + + /* todo: implement rx fencing */ + ssleep(1); + + mana_destroy_vport(apc); + + mana_destroy_eq(apc->ac->gdma_dev->gdma_context, apc); + + return 0; +} + +int mana_detach(struct net_device *ndev, bool from_close) +{ + struct mana_port_context *apc = netdev_priv(ndev); + int err; + + assert_rtnl(); + + apc->port_st_save = apc->port_is_up; + apc->port_is_up = false; + + /* ensure port state updated before txq state */ + smp_wmb(); + + netif_tx_disable(ndev); + netif_carrier_off(ndev); + + if (apc->port_st_save) { + err = mana_dealloc_queues(ndev); + if (err) + return err; + } + + if (!from_close) { + netif_device_detach(ndev); + mana_cleanup_port_context(apc); + } + + return 0; +} + +static int mana_probe_port(struct mana_context *ac, int port_idx, + struct net_device **ndev_storage) +{ + struct gdma_context *gc = ac->gdma_dev->gdma_context; + struct mana_port_context *apc; + struct net_device *ndev; + int err; + + ndev = alloc_etherdev_mq(sizeof(struct mana_port_context), + gc->max_num_queues); + if (!ndev) + return -enomem; + + *ndev_storage = ndev; + + apc = netdev_priv(ndev); + apc->ac = ac; + apc->ndev = ndev; + apc->max_queues = gc->max_num_queues; + apc->num_queues = min_t(uint, gc->max_num_queues, mana_max_num_queues); + apc->port_handle = invalid_mana_handle; + apc->port_idx = port_idx; + + ndev->netdev_ops = &mana_devops; + ndev->ethtool_ops = &mana_ethtool_ops; + ndev->mtu = eth_data_len; + ndev->max_mtu = ndev->mtu; + ndev->min_mtu = ndev->mtu; + ndev->needed_headroom = mana_headroom; + set_netdev_dev(ndev, gc->dev); + + netif_carrier_off(ndev); + + netdev_rss_key_fill(apc->hashkey, mana_hash_key_size); + + err = mana_init_port(ndev); + if (err) + goto free_net; + + netdev_lockdep_set_classes(ndev); + + ndev->hw_features = netif_f_sg | netif_f_ip_csum | 
netif_f_ipv6_csum; + ndev->hw_features |= netif_f_rxcsum; + ndev->hw_features |= netif_f_tso | netif_f_tso6; + ndev->hw_features |= netif_f_rxhash; + ndev->features = ndev->hw_features; + ndev->vlan_features = 0; + + err = register_netdev(ndev); + if (err) { + netdev_err(ndev, "unable to register netdev. "); + goto reset_apc; + } + + return 0; + +reset_apc: + kfree(apc->rxqs); + apc->rxqs = null; +free_net: + *ndev_storage = null; + netdev_err(ndev, "failed to probe vport %d: %d ", port_idx, err); + free_netdev(ndev); + return err; +} + +int mana_probe(struct gdma_dev *gd) +{ + struct gdma_context *gc = gd->gdma_context; + struct device *dev = gc->dev; + struct mana_context *ac; + int err; + int i; + + dev_info(dev, + "microsoft azure network adapter protocol version: %d.%d.%d ", + mana_major_version, mana_minor_version, mana_micro_version); + + err = mana_gd_register_device(gd); + if (err) + return err; + + ac = kzalloc(sizeof(*ac), gfp_kernel); + if (!ac) + return -enomem; + + ac->gdma_dev = gd; + ac->num_ports = 1; + gd->driver_data = ac; + + err = mana_query_device_cfg(ac, mana_major_version, mana_minor_version, + mana_micro_version, &ac->num_ports); + if (err) + goto out; + + if (ac->num_ports > max_ports_in_mana_dev) + ac->num_ports = max_ports_in_mana_dev; + + for (i = 0; i < ac->num_ports; i++) { + err = mana_probe_port(ac, i, &ac->ports[i]); + if (err) + break; + } +out: + if (err) + mana_remove(gd); + + return err; +} + +void mana_remove(struct gdma_dev *gd) +{ + struct gdma_context *gc = gd->gdma_context; + struct mana_context *ac = gd->driver_data; + struct device *dev = gc->dev; + struct net_device *ndev; + int i; + + for (i = 0; i < ac->num_ports; i++) { + ndev = ac->ports[i]; + if (!ndev) { + if (i == 0) + dev_err(dev, "no net device to remove "); + goto out; + } + + /* all cleanup actions should stay after rtnl_lock(), otherwise + * other functions may access partially cleaned up data. 
+ */ + rtnl_lock(); + + mana_detach(ndev, false); + + unregister_netdevice(ndev); + + rtnl_unlock(); + + free_netdev(ndev); + } +out: + mana_gd_deregister_device(gd); + gd->driver_data = null; + gd->gdma_context = null; + kfree(ac); +} diff --git a/drivers/net/ethernet/microsoft/mana/mana_ethtool.c b/drivers/net/ethernet/microsoft/mana/mana_ethtool.c --- /dev/null +++ b/drivers/net/ethernet/microsoft/mana/mana_ethtool.c +// spdx-license-identifier: gpl-2.0 or bsd-3-clause +/* copyright (c) 2021, microsoft corporation. */ + +#include <linux/inetdevice.h> +#include <linux/etherdevice.h> +#include <linux/ethtool.h> + +#include "mana.h" + +static const struct { + char name[eth_gstring_len]; + u16 offset; +} mana_eth_stats[] = { + {"stop_queue", offsetof(struct mana_ethtool_stats, stop_queue)}, + {"wake_queue", offsetof(struct mana_ethtool_stats, wake_queue)}, +}; + +static int mana_get_sset_count(struct net_device *ndev, int stringset) +{ + struct mana_port_context *apc = netdev_priv(ndev); + unsigned int num_queues = apc->num_queues; + + if (stringset != eth_ss_stats) + return -einval; + + return array_size(mana_eth_stats) + num_queues * 4; +} + +static void mana_get_strings(struct net_device *ndev, u32 stringset, u8 *data) +{ + struct mana_port_context *apc = netdev_priv(ndev); + unsigned int num_queues = apc->num_queues; + u8 *p = data; + int i; + + if (stringset != eth_ss_stats) + return; + + for (i = 0; i < array_size(mana_eth_stats); i++) { + memcpy(p, mana_eth_stats[i].name, eth_gstring_len); + p += eth_gstring_len; + } + + for (i = 0; i < num_queues; i++) { + sprintf(p, "rx_%d_packets", i); + p += eth_gstring_len; + sprintf(p, "rx_%d_bytes", i); + p += eth_gstring_len; + } + + for (i = 0; i < num_queues; i++) { + sprintf(p, "tx_%d_packets", i); + p += eth_gstring_len; + sprintf(p, "tx_%d_bytes", i); + p += eth_gstring_len; + } +} + +static void mana_get_ethtool_stats(struct net_device *ndev, + struct ethtool_stats *e_stats, u64 *data) +{ + struct 
mana_port_context *apc = netdev_priv(ndev); + unsigned int num_queues = apc->num_queues; + void *eth_stats = &apc->eth_stats; + struct mana_stats *stats; + unsigned int start; + u64 packets, bytes; + int q, i = 0; + + if (!apc->port_is_up) + return; + + for (q = 0; q < array_size(mana_eth_stats); q++) + data[i++] = *(u64 *)(eth_stats + mana_eth_stats[q].offset); + + for (q = 0; q < num_queues; q++) { + stats = &apc->rxqs[q]->stats; + + do { + start = u64_stats_fetch_begin_irq(&stats->syncp); + packets = stats->packets; + bytes = stats->bytes; + } while (u64_stats_fetch_retry_irq(&stats->syncp, start)); + + data[i++] = packets; + data[i++] = bytes; + } + + for (q = 0; q < num_queues; q++) { + stats = &apc->tx_qp[q].txq.stats; + + do { + start = u64_stats_fetch_begin_irq(&stats->syncp); + packets = stats->packets; + bytes = stats->bytes; + } while (u64_stats_fetch_retry_irq(&stats->syncp, start)); + + data[i++] = packets; + data[i++] = bytes; + } +} + +static int mana_get_rxnfc(struct net_device *ndev, struct ethtool_rxnfc *cmd, + u32 *rules) +{ + struct mana_port_context *apc = netdev_priv(ndev); + + switch (cmd->cmd) { + case ethtool_grxrings: + cmd->data = apc->num_queues; + return 0; + } + + return -eopnotsupp; +} + +static u32 mana_get_rxfh_key_size(struct net_device *ndev) +{ + return mana_hash_key_size; +} + +static u32 mana_rss_indir_size(struct net_device *ndev) +{ + return mana_indirect_table_size; +} + +static int mana_get_rxfh(struct net_device *ndev, u32 *indir, u8 *key, + u8 *hfunc) +{ + struct mana_port_context *apc = netdev_priv(ndev); + int i; + + if (hfunc) + *hfunc = eth_rss_hash_top; /* toeplitz */ + + if (indir) { + for (i = 0; i < mana_indirect_table_size; i++) + indir[i] = apc->indir_table[i]; + } + + if (key) + memcpy(key, apc->hashkey, mana_hash_key_size); + + return 0; +} + +static int mana_set_rxfh(struct net_device *ndev, const u32 *indir, + const u8 *key, const u8 hfunc) +{ + struct mana_port_context *apc = netdev_priv(ndev); + bool 
update_hash = false, update_table = false; + u32 save_table[mana_indirect_table_size]; + u8 save_key[mana_hash_key_size]; + int i, err; + + if (!apc->port_is_up) + return -eopnotsupp; + + if (hfunc != eth_rss_hash_no_change && hfunc != eth_rss_hash_top) + return -eopnotsupp; + + if (indir) { + for (i = 0; i < mana_indirect_table_size; i++) + if (indir[i] >= apc->num_queues) + return -einval; + + update_table = true; + for (i = 0; i < mana_indirect_table_size; i++) { + save_table[i] = apc->indir_table[i]; + apc->indir_table[i] = indir[i]; + } + } + + if (key) { + update_hash = true; + memcpy(save_key, apc->hashkey, mana_hash_key_size); + memcpy(apc->hashkey, key, mana_hash_key_size); + } + + err = mana_config_rss(apc, tri_state_true, update_hash, update_table); + + if (err) { /* recover to original values */ + if (update_table) { + for (i = 0; i < mana_indirect_table_size; i++) + apc->indir_table[i] = save_table[i]; + } + + if (update_hash) + memcpy(apc->hashkey, save_key, mana_hash_key_size); + + mana_config_rss(apc, tri_state_true, update_hash, update_table); + } + + return err; +} + +static void mana_get_channels(struct net_device *ndev, + struct ethtool_channels *channel) +{ + struct mana_port_context *apc = netdev_priv(ndev); + + channel->max_combined = apc->max_queues; + channel->combined_count = apc->num_queues; +} + +static int mana_set_channels(struct net_device *ndev, + struct ethtool_channels *channels) +{ + struct mana_port_context *apc = netdev_priv(ndev); + unsigned int new_count = channels->combined_count; + unsigned int old_count = apc->num_queues; + int err, err2; + + if (!apc->port_is_up) + return -eopnotsupp; + + err = mana_detach(ndev, false); + if (err) { + netdev_err(ndev, "mana_detach failed: %d ", err); + return err; + } + + apc->num_queues = new_count; + err = mana_attach(ndev); + if (!err) + return 0; + + netdev_err(ndev, "mana_attach failed: %d ", err); + + /* try to roll it back to the old configuration. 
*/ + apc->num_queues = old_count; + err2 = mana_attach(ndev); + if (err2) + netdev_err(ndev, "mana re-attach failed: %d ", err2); + + return err; +} + +const struct ethtool_ops mana_ethtool_ops = { + .get_ethtool_stats = mana_get_ethtool_stats, + .get_sset_count = mana_get_sset_count, + .get_strings = mana_get_strings, + .get_rxnfc = mana_get_rxnfc, + .get_rxfh_key_size = mana_get_rxfh_key_size, + .get_rxfh_indir_size = mana_rss_indir_size, + .get_rxfh = mana_get_rxfh, + .set_rxfh = mana_set_rxfh, + .get_channels = mana_get_channels, + .set_channels = mana_set_channels, +}; diff --git a/drivers/net/ethernet/microsoft/mana/shm_channel.c b/drivers/net/ethernet/microsoft/mana/shm_channel.c --- /dev/null +++ b/drivers/net/ethernet/microsoft/mana/shm_channel.c +// spdx-license-identifier: gpl-2.0 or bsd-3-clause +/* copyright (c) 2021, microsoft corporation. */ + +#include <linux/delay.h> +#include <linux/device.h> +#include <linux/io.h> +#include <linux/mm.h> + +#include "shm_channel.h" + +#define page_frame_l48_width_bytes 6 +#define page_frame_l48_width_bits (page_frame_l48_width_bytes * 8) +#define page_frame_l48_mask 0x0000ffffffffffff +#define page_frame_h4_width_bits 4 +#define vector_mask 0xffff +#define shmem_vf_reset_state ((u32)-1) + +#define smc_msg_type_establish_hwc 1 +#define smc_msg_type_establish_hwc_version 0 + +#define smc_msg_type_destroy_hwc 2 +#define smc_msg_type_destroy_hwc_version 0 + +#define smc_msg_direction_request 0 +#define smc_msg_direction_response 1 + +/* structures labeled with "hw data" are exchanged with the hardware. all of + * them are naturally aligned and hence don't need __packed. + */ + +/* shared memory channel protocol header + * + * msg_type: set on request and response; response matches request. + * msg_version: newer pf writes back older response (matching request) + * older pf acts on latest version known and sets that version in result + * (less than request). + * direction: 0 for request, vf->pf; 1 for response, pf->vf. 
+ * status: 0 on request, + * operation result on response (success = 0, failure = 1 or greater). + * reset_vf: if set on either establish or destroy request, indicates perform + * flr before/after the operation. + * owner_is_pf: 1 indicates pf owned, 0 indicates vf owned. + */ +union smc_proto_hdr { + u32 as_uint32; + + struct { + u8 msg_type : 3; + u8 msg_version : 3; + u8 reserved_1 : 1; + u8 direction : 1; + + u8 status; + + u8 reserved_2; + + u8 reset_vf : 1; + u8 reserved_3 : 6; + u8 owner_is_pf : 1; + }; +}; /* hw data */ + +#define smc_aperture_bits 256 +#define smc_basic_unit (sizeof(u32)) +#define smc_aperture_dwords (smc_aperture_bits / (smc_basic_unit * 8)) +#define smc_last_dword (smc_aperture_dwords - 1) + +static int mana_smc_poll_register(void __iomem *base, bool reset) +{ + void __iomem *ptr = base + smc_last_dword * smc_basic_unit; + u32 last_dword; + int i; + + /* poll the hardware for the ownership bit. this should be pretty fast, + * but let's do it in a loop just in case the hardware or the pf + * driver are temporarily busy. + */ + for (i = 0; i < 20 * 1000; i++) { + last_dword = readl(ptr); + + /* shmem reads as 0xffffffff in the reset case */ + if (reset && last_dword == shmem_vf_reset_state) + return 0; + + /* if bit_31 is set, the pf currently owns the smc. */ + if (!(last_dword & bit(31))) + return 0; + + usleep_range(1000, 2000); + } + + return -etimedout; +} + +static int mana_smc_read_response(struct shm_channel *sc, u32 msg_type, + u32 msg_version, bool reset_vf) +{ + void __iomem *base = sc->base; + union smc_proto_hdr hdr; + int err; + + /* wait for pf to respond. 
*/ + err = mana_smc_poll_register(base, reset_vf); + if (err) + return err; + + hdr.as_uint32 = readl(base + smc_last_dword * smc_basic_unit); + + if (reset_vf && hdr.as_uint32 == shmem_vf_reset_state) + return 0; + + /* validate protocol fields from the pf driver */ + if (hdr.msg_type != msg_type || hdr.msg_version > msg_version || + hdr.direction != smc_msg_direction_response) { + dev_err(sc->dev, "wrong smc response 0x%x, type=%d, ver=%d ", + hdr.as_uint32, msg_type, msg_version); + return -eproto; + } + + /* validate the operation result */ + if (hdr.status != 0) { + dev_err(sc->dev, "smc operation failed: 0x%x ", hdr.status); + return -eproto; + } + + return 0; +} + +void mana_smc_init(struct shm_channel *sc, struct device *dev, + void __iomem *base) +{ + sc->dev = dev; + sc->base = base; +} + +int mana_smc_setup_hwc(struct shm_channel *sc, bool reset_vf, u64 eq_addr, + u64 cq_addr, u64 rq_addr, u64 sq_addr, + u32 eq_msix_index) +{ + union smc_proto_hdr *hdr; + u16 all_addr_h4bits = 0; + u16 frame_addr_seq = 0; + u64 frame_addr = 0; + u8 shm_buf[32]; + u64 *shmem; + u32 *dword; + u8 *ptr; + int err; + int i; + + /* ensure vf already has possession of shared memory */ + err = mana_smc_poll_register(sc->base, false); + if (err) { + dev_err(sc->dev, "timeout when setting up hwc: %d ", err); + return err; + } + + if (!page_aligned(eq_addr) || !page_aligned(cq_addr) || + !page_aligned(rq_addr) || !page_aligned(sq_addr)) + return -einval; + + if ((eq_msix_index & vector_mask) != eq_msix_index) + return -einval; + + /* scheme for packing four addresses and extra info into 256 bits. + * + * addresses must be page frame aligned, so only frame address bits + * are transferred. + * + * 52-bit frame addresses are split into the lower 48 bits and upper + * 4 bits. lower 48 bits of 4 address are written sequentially from + * the start of the 256-bit shared memory region followed by 16 bits + * containing the upper 4 bits of the 4 addresses in sequence. 
+ * + * a 16 bit eq vector number fills out the next-to-last 32-bit dword. + * + * the final 32-bit dword is used for protocol control information as + * defined in smc_proto_hdr. + */ + + memset(shm_buf, 0, sizeof(shm_buf)); + ptr = shm_buf; + + /* eq addr: low 48 bits of frame address */ + shmem = (u64 *)ptr; + frame_addr = phys_pfn(eq_addr); + *shmem = frame_addr & page_frame_l48_mask; + all_addr_h4bits |= (frame_addr >> page_frame_l48_width_bits) << + (frame_addr_seq++ * page_frame_h4_width_bits); + ptr += page_frame_l48_width_bytes; + + /* cq addr: low 48 bits of frame address */ + shmem = (u64 *)ptr; + frame_addr = phys_pfn(cq_addr); + *shmem = frame_addr & page_frame_l48_mask; + all_addr_h4bits |= (frame_addr >> page_frame_l48_width_bits) << + (frame_addr_seq++ * page_frame_h4_width_bits); + ptr += page_frame_l48_width_bytes; + + /* rq addr: low 48 bits of frame address */ + shmem = (u64 *)ptr; + frame_addr = phys_pfn(rq_addr); + *shmem = frame_addr & page_frame_l48_mask; + all_addr_h4bits |= (frame_addr >> page_frame_l48_width_bits) << + (frame_addr_seq++ * page_frame_h4_width_bits); + ptr += page_frame_l48_width_bytes; + + /* sq addr: low 48 bits of frame address */ + shmem = (u64 *)ptr; + frame_addr = phys_pfn(sq_addr); + *shmem = frame_addr & page_frame_l48_mask; + all_addr_h4bits |= (frame_addr >> page_frame_l48_width_bits) << + (frame_addr_seq++ * page_frame_h4_width_bits); + ptr += page_frame_l48_width_bytes; + + /* high 4 bits of the four frame addresses */ + *((u16 *)ptr) = all_addr_h4bits; + ptr += sizeof(u16); + + /* eq msix vector number */ + *((u16 *)ptr) = (u16)eq_msix_index; + ptr += sizeof(u16); + + /* 32-bit protocol header in final dword */ + *((u32 *)ptr) = 0; + + hdr = (union smc_proto_hdr *)ptr; + hdr->msg_type = smc_msg_type_establish_hwc; + hdr->msg_version = smc_msg_type_establish_hwc_version; + hdr->direction = smc_msg_direction_request; + hdr->reset_vf = reset_vf; + + /* write 256-message buffer to shared memory (final 32-bit write 
+ * triggers hw to set possession bit to pf). + */ + dword = (u32 *)shm_buf; + for (i = 0; i < smc_aperture_dwords; i++) + writel(*dword++, sc->base + i * smc_basic_unit); + + /* read shmem response (polling for vf possession) and validate. + * for setup, waiting for response on shared memory is not strictly + * necessary, since wait occurs later for results to appear in eqe's. + */ + err = mana_smc_read_response(sc, smc_msg_type_establish_hwc, + smc_msg_type_establish_hwc_version, + reset_vf); + if (err) { + dev_err(sc->dev, "error when setting up hwc: %d ", err); + return err; + } + + return 0; +} + +int mana_smc_teardown_hwc(struct shm_channel *sc, bool reset_vf) +{ + union smc_proto_hdr hdr = {}; + int err; + + /* ensure already has possession of shared memory */ + err = mana_smc_poll_register(sc->base, false); + if (err) { + dev_err(sc->dev, "timeout when tearing down hwc "); + return err; + } + + /* set up protocol header for hwc destroy message */ + hdr.msg_type = smc_msg_type_destroy_hwc; + hdr.msg_version = smc_msg_type_destroy_hwc_version; + hdr.direction = smc_msg_direction_request; + hdr.reset_vf = reset_vf; + + /* write message in high 32 bits of 256-bit shared memory, causing hw + * to set possession bit to pf. + */ + writel(hdr.as_uint32, sc->base + smc_last_dword * smc_basic_unit); + + /* read shmem response (polling for vf possession) and validate. + * for teardown, waiting for response is required to ensure hardware + * invalidates mst entries before software frees memory. 
+ */ + err = mana_smc_read_response(sc, smc_msg_type_destroy_hwc, + smc_msg_type_destroy_hwc_version, + reset_vf); + if (err) { + dev_err(sc->dev, "error when tearing down hwc: %d ", err); + return err; + } + + return 0; +} diff --git a/drivers/net/ethernet/microsoft/mana/shm_channel.h b/drivers/net/ethernet/microsoft/mana/shm_channel.h --- /dev/null +++ b/drivers/net/ethernet/microsoft/mana/shm_channel.h +/* spdx-license-identifier: gpl-2.0 or bsd-3-clause */ +/* copyright (c) 2021, microsoft corporation. */ + +#ifndef _shm_channel_h +#define _shm_channel_h + +struct shm_channel { + struct device *dev; + void __iomem *base; +}; + +void mana_smc_init(struct shm_channel *sc, struct device *dev, + void __iomem *base); + +int mana_smc_setup_hwc(struct shm_channel *sc, bool reset_vf, u64 eq_addr, + u64 cq_addr, u64 rq_addr, u64 sq_addr, + u32 eq_msix_index); + +int mana_smc_teardown_hwc(struct shm_channel *sc, bool reset_vf); + +#endif /* _shm_channel_h */
|
Networking
|
ca9c54d2d6a5ab2430c4eda364c77125d62e5e0f
|
dexuan cui stephen hemminger stephen networkplumber org
|
drivers
|
net
|
ethernet, mana, microsoft
|
bluetooth: add a new usb id for rtl8822ce
|
some models of the rtl8822ce utilize a different usb id. add this new one to the bluetooth driver.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aborts the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound in virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add a new usb id for rtl8822ce
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['bluetooth']
|
['c']
| 1
| 2
| 0
|
--- diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c --- a/drivers/bluetooth/btusb.c +++ b/drivers/bluetooth/btusb.c + { usb_device(0x0bda, 0xc822), .driver_info = btusb_realtek | + btusb_wideband_speech },
|
Networking
|
4d96d3b0efee6416ef0d61b76aaac6f4a2e15b12
|
larry finger
|
drivers
|
bluetooth
| |
bluetooth: btusb: support 0cb5:c547 realtek 8822ce device
|
some xiaomi redmibook laptop models use the 0cb5:c547 usb identifier for their bluetooth device, so load the appropriate firmware for realtek 8822ce.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aborts the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound in virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
support 0cb5:c547 realtek 8822ce device
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['bluetooth', 'btusb']
|
['c']
| 1
| 2
| 0
|
device 0cb5:c547 as reported in /sys/kernel/debug/usb/devices --- diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c --- a/drivers/bluetooth/btusb.c +++ b/drivers/bluetooth/btusb.c + { usb_device(0x0cb5, 0xc547), .driver_info = btusb_realtek | + btusb_wideband_speech },
|
Networking
|
3edc5782fb64c97946f4f321141cb4f46c9da825
|
rasmus moorats
|
drivers
|
bluetooth
| |
fddi: defxx: implement dynamic csr i/o address space selection
|
recent versions of the pci express specification have deprecated support for i/o transactions, and some pcie host bridges, such as the power systems host bridge 4 (phb4), do not actually implement them. conversely, a defea adapter can have its mmio decoding disabled with the ecu (eisa configuration utility) and therefore not be available for use with the resource allocation infrastructure we implement.
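The selection logic this commit introduces can be sketched in a few lines. This is an illustrative userspace model, not the driver's actual code: the names `dfx_select_mmio`, `bus_is_tc`, and `mmio_window_claimed` are hypothetical, but the decision they encode follows the diff — turbochannel has no concept of port i/o so mmio is always used, while eisa/pci adapters try mmio first and fall back to port i/o when no mmio window could be claimed (for example, because mmio decoding was disabled by the ecu or the host bridge does not decode i/o transactions).

```c
#include <assert.h>
#include <stdbool.h>

/* illustrative model of the dynamic csr address-space selection:
 * returns true when the driver should use mmio, false for port i/o. */
static bool dfx_select_mmio(bool bus_is_tc, bool mmio_window_claimed)
{
	if (bus_is_tc)
		return true;            /* no port i/o on turbochannel */
	return mmio_window_claimed;     /* prefer mmio, else fall back */
}
```

The key change over the old `config_defxx_mmio` build-time switch is that this decision now happens per adapter at probe time, so a single kernel binary can drive both an ecu-crippled defea board and a defpa behind an i/o-less pcie bridge.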
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
implement dynamic csr i/o address space selection
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['fddi', 'defxx']
|
['h', 'kconfig', 'c']
| 3
| 25
| 57
|
--- diff --git a/drivers/net/fddi/kconfig b/drivers/net/fddi/kconfig --- a/drivers/net/fddi/kconfig +++ b/drivers/net/fddi/kconfig -config defxx_mmio - bool - prompt "use mmio instead of iop" if pci || eisa - depends on defxx - default n if eisa - default y - help - this instructs the driver to use eisa or pci memory-mapped i/o - (mmio) as appropriate instead of programmed i/o ports (iop). - enabling this gives an improvement in processing time in parts - of the driver, but it requires a memory window to be configured - for eisa (defea) adapters that may not always be available. - conversely some pcie host bridges do not support iop, so mmio - may be required to access pci (defpa) adapters on downstream pci - buses with some systems. turbochannel does not have the concept - of i/o ports, so mmio is always used for these (defta) adapters. - - if unsure, say n. - diff --git a/drivers/net/fddi/defxx.c b/drivers/net/fddi/defxx.c --- a/drivers/net/fddi/defxx.c +++ b/drivers/net/fddi/defxx.c + * 10 mar 2021 macro dynamic mmio vs port i/o. -#define drv_version "v1.11" -#define drv_reldate "2014/07/01" +#define drv_version "v1.12" +#define drv_reldate "2021/03/10" -#ifdef config_defxx_mmio -#define dfx_mmio 1 +#if defined(config_eisa) || defined(config_pci) +#define dfx_use_mmio bp->mmio -#define dfx_mmio 0 +#define dfx_use_mmio true - int dfx_bus_tc = dfx_bus_tc(bdev); - int dfx_use_mmio = dfx_mmio || dfx_bus_tc; - int dfx_bus_tc = dfx_bus_tc(bdev); - int dfx_use_mmio = dfx_mmio || dfx_bus_tc; - * bdev - pointer to device information + * bp - pointer to board information -static void dfx_get_bars(struct device *bdev, +static void dfx_get_bars(dfx_board_t *bp, + struct device *bdev = bp->bus_dev; - int dfx_use_mmio = dfx_mmio || dfx_bus_tc; -static void dfx_register_res_alloc_err(const char *print_name, bool mmio, - bool eisa) -{ - pr_err("%s: cannot use %s, no address set, aborting ", - print_name, mmio ? 
"mmio" : "i/o"); - pr_err("%s: recompile driver with "config_defxx_mmio=%c" ", - print_name, mmio ? 'n' : 'y'); - if (eisa && mmio) - pr_err("%s: or run ecu and set adapter's mmio location ", - print_name); -} - - int dfx_bus_tc = dfx_bus_tc(bdev); - int dfx_use_mmio = dfx_mmio || dfx_bus_tc; - dfx_get_bars(bdev, bar_start, bar_len); + bp->mmio = true; + + dfx_get_bars(bp, bar_start, bar_len); - dfx_register_res_alloc_err(print_name, dfx_use_mmio, - dfx_bus_eisa); - err = -enxio; - goto err_out_disable; + bp->mmio = false; + dfx_get_bars(bp, bar_start, bar_len); - if (dfx_use_mmio) + if (dfx_use_mmio) { - else + if (!region && (dfx_bus_eisa || dfx_bus_pci)) { + bp->mmio = false; + dfx_get_bars(bp, bar_start, bar_len); + } + } + if (!dfx_use_mmio) - int dfx_use_mmio = dfx_mmio || dfx_bus_tc; - int dfx_use_mmio = dfx_mmio || dfx_bus_tc; - int dfx_bus_tc = dfx_bus_tc(bdev); - int dfx_use_mmio = dfx_mmio || dfx_bus_tc; - dfx_get_bars(bdev, bar_start, bar_len); + dfx_get_bars(bp, bar_start, bar_len); diff --git a/drivers/net/fddi/defxx.h b/drivers/net/fddi/defxx.h --- a/drivers/net/fddi/defxx.h +++ b/drivers/net/fddi/defxx.h + * 10 mar 2021 macro dynamic mmio vs port i/o. + /* whether to use mmio or port i/o. */ + bool mmio;
|
Networking
|
795e272e54746e97fde54454873d384d5012cc9d
|
maciej w rozycki
|
drivers
|
net
|
fddi
|
rdma/hns: add support for xrc on hip09
|
the hip09 supports the xrc transport service, which greatly reduces the number of qps required to connect all processes in a large cluster.
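The saving can be made concrete with a back-of-the-envelope model (not from the driver; the two helper functions below are hypothetical): with rc transport, a full mesh needs one qp for every local-process/remote-process pair, while with xrc each local process needs only one send qp per remote node, because the receive side is shared through a per-node xrc srq.

```c
#include <assert.h>

/* qps needed on one node for an all-to-all mesh with rc transport:
 * every local process opens a qp to every process on every other node. */
static long qps_per_node_rc(long nodes, long procs_per_node)
{
	return procs_per_node * procs_per_node * (nodes - 1);
}

/* with xrc, a local process only needs one send qp per remote node;
 * incoming traffic lands on a shared xrc srq instead of per-pair qps. */
static long qps_per_node_xrc(long nodes, long procs_per_node)
{
	return procs_per_node * (nodes - 1);
}
```

For a modest 16-node cluster with 8 processes per node, that is 960 qps per node with rc versus 120 with xrc, and the gap grows linearly with the process count per node.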
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for xrc on hip09
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['hns']
|
['h', 'c']
| 9
| 258
| 70
|
--- diff --git a/drivers/infiniband/hw/hns/hns_roce_alloc.c b/drivers/infiniband/hw/hns/hns_roce_alloc.c --- a/drivers/infiniband/hw/hns/hns_roce_alloc.c +++ b/drivers/infiniband/hw/hns/hns_roce_alloc.c + if (hr_dev->caps.flags & hns_roce_cap_flag_xrc) + hns_roce_cleanup_xrcd_table(hr_dev); + diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h --- a/drivers/infiniband/hw/hns/hns_roce_device.h +++ b/drivers/infiniband/hw/hns/hns_roce_device.h + serv_type_xrc = 5, + hns_roce_event_type_xrcd_violation = 0x16, + hns_roce_event_type_invalid_xrceth = 0x17, + hns_roce_cap_flag_xrc = bit(6), +struct hns_roce_xrcd { + struct ib_xrcd ibxrcd; + u32 xrcdn; +}; + + u32 xrcdn; + u32 xrcdn; + + u32 num_xrcds; + u32 reserved_xrcds; + struct hns_roce_bitmap xrcd_bitmap; +static inline struct hns_roce_xrcd *to_hr_xrcd(struct ib_xrcd *ibxrcd) +{ + return container_of(ibxrcd, struct hns_roce_xrcd, ibxrcd); +} + +int hns_roce_init_xrcd_table(struct hns_roce_dev *hr_dev); +void hns_roce_cleanup_xrcd_table(struct hns_roce_dev *hr_dev); +int hns_roce_alloc_xrcd(struct ib_xrcd *ib_xrcd, struct ib_udata *udata); +int hns_roce_dealloc_xrcd(struct ib_xrcd *ib_xrcd, struct ib_udata *udata); + diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c + caps->num_xrcds = hns_roce_v2_max_xrcd_num; + caps->reserved_xrcds = hns_roce_v2_rsv_xrcd_num; - hns_roce_cap_flag_qp_flow_ctrl; + hns_roce_cap_flag_qp_flow_ctrl | hns_roce_cap_flag_xrc; + caps->num_xrcds = hns_roce_v2_max_xrcd_num; + caps->reserved_xrcds = hns_roce_v2_rsv_xrcd_num; +static inline int get_cqn(struct ib_cq *ib_cq) +{ + return ib_cq ? to_hr_cq(ib_cq)->cqn : 0; +} + +static inline int get_pdn(struct ib_pd *ib_pd) +{ + return ib_pd ? 
to_hr_pd(ib_pd)->pdn : 0; +} + - v2_qpc_byte_4_tst_s, to_hr_qp_type(hr_qp->ibqp.qp_type)); + v2_qpc_byte_4_tst_s, to_hr_qp_type(ibqp->qp_type)); - v2_qpc_byte_16_pd_s, to_hr_pd(ibqp->pd)->pdn); + v2_qpc_byte_16_pd_s, get_pdn(ibqp->pd)); + if (ibqp->qp_type == ib_qpt_xrc_tgt) + context->qkey_xrcd = cpu_to_le32(hr_qp->xrcdn); + - v2_qpc_byte_80_rx_cqn_s, to_hr_cq(ibqp->recv_cq)->cqn); + v2_qpc_byte_80_rx_cqn_s, get_cqn(ibqp->recv_cq)); + + roce_set_bit(context->byte_76_srqn_op_en, + v2_qpc_byte_76_srq_en_s, 1); - roce_set_bit(context->byte_76_srqn_op_en, - v2_qpc_byte_76_srq_en_s, 1); - v2_qpc_byte_252_tx_cqn_s, to_hr_cq(ibqp->send_cq)->cqn); + v2_qpc_byte_252_tx_cqn_s, get_cqn(ibqp->send_cq)); - v2_qpc_byte_4_tst_s, to_hr_qp_type(hr_qp->ibqp.qp_type)); + v2_qpc_byte_4_tst_s, to_hr_qp_type(ibqp->qp_type)); + if (ibqp->qp_type == ib_qpt_xrc_tgt) { + context->qkey_xrcd = cpu_to_le32(hr_qp->xrcdn); + qpc_mask->qkey_xrcd = 0; + } + - v2_qpc_byte_16_pd_s, to_hr_pd(ibqp->pd)->pdn); + v2_qpc_byte_16_pd_s, get_pdn(ibqp->pd)); + - v2_qpc_byte_80_rx_cqn_s, to_hr_cq(ibqp->recv_cq)->cqn); + v2_qpc_byte_80_rx_cqn_s, get_cqn(ibqp->recv_cq)); - v2_qpc_byte_252_tx_cqn_s, to_hr_cq(ibqp->send_cq)->cqn); + v2_qpc_byte_252_tx_cqn_s, get_cqn(ibqp->send_cq)); - /* rc&uc&ud required attr */ - /* rc&uc required attr */ +static void clear_qp(struct hns_roce_qp *hr_qp) +{ + struct ib_qp *ibqp = &hr_qp->ibqp; + + if (ibqp->send_cq) + hns_roce_v2_cq_clean(to_hr_cq(ibqp->send_cq), + hr_qp->qpn, null); + + if (ibqp->recv_cq && ibqp->recv_cq != ibqp->send_cq) + hns_roce_v2_cq_clean(to_hr_cq(ibqp->recv_cq), + hr_qp->qpn, ibqp->srq ? 
+ to_hr_srq(ibqp->srq) : null); + + if (hr_qp->rq.wqe_cnt) + *hr_qp->rdb.db_record = 0; + + hr_qp->rq.head = 0; + hr_qp->rq.tail = 0; + hr_qp->sq.head = 0; + hr_qp->sq.tail = 0; + hr_qp->next_sge = 0; +} + - spin_lock_irqsave(&hr_qp->sq.lock, sq_flag); - hr_qp->state = ib_qps_err; - roce_set_field(context->byte_160_sq_ci_pi, - v2_qpc_byte_160_sq_producer_idx_m, - v2_qpc_byte_160_sq_producer_idx_s, - hr_qp->sq.head); - roce_set_field(qpc_mask->byte_160_sq_ci_pi, - v2_qpc_byte_160_sq_producer_idx_m, - v2_qpc_byte_160_sq_producer_idx_s, 0); - spin_unlock_irqrestore(&hr_qp->sq.lock, sq_flag); - - if (!ibqp->srq) { + if (ibqp->qp_type != ib_qpt_xrc_tgt) { + spin_lock_irqsave(&hr_qp->sq.lock, sq_flag); + hr_qp->state = ib_qps_err; + roce_set_field(context->byte_160_sq_ci_pi, + v2_qpc_byte_160_sq_producer_idx_m, + v2_qpc_byte_160_sq_producer_idx_s, + hr_qp->sq.head); + roce_set_field(qpc_mask->byte_160_sq_ci_pi, + v2_qpc_byte_160_sq_producer_idx_m, + v2_qpc_byte_160_sq_producer_idx_s, 0); + spin_unlock_irqrestore(&hr_qp->sq.lock, sq_flag); + } + + if (!ibqp->srq && ibqp->qp_type != ib_qpt_xrc_ini && + ibqp->qp_type != ib_qpt_xrc_tgt) { + hr_qp->state = ib_qps_err; - ibqp->srq ? 1 : 0); + ((to_hr_qp_type(hr_qp->ibqp.qp_type) == serv_type_xrc) || + ibqp->srq) ? 1 : 0); - if (new_state == ib_qps_reset && !ibqp->uobject) { - hns_roce_v2_cq_clean(to_hr_cq(ibqp->recv_cq), hr_qp->qpn, - ibqp->srq ? 
to_hr_srq(ibqp->srq) : null); - if (ibqp->send_cq != ibqp->recv_cq) - hns_roce_v2_cq_clean(to_hr_cq(ibqp->send_cq), - hr_qp->qpn, null); - - hr_qp->rq.head = 0; - hr_qp->rq.tail = 0; - hr_qp->sq.head = 0; - hr_qp->sq.tail = 0; - hr_qp->next_sge = 0; - if (hr_qp->rq.wqe_cnt) - *hr_qp->rdb.db_record = 0; - } + if (new_state == ib_qps_reset && !ibqp->uobject) + clear_qp(hr_qp); - hr_qp->ibqp.qp_type == ib_qpt_uc) { + hr_qp->ibqp.qp_type == ib_qpt_uc || + hr_qp->ibqp.qp_type == ib_qpt_xrc_ini || + hr_qp->ibqp.qp_type == ib_qpt_xrc_tgt) { + +static inline int modify_qp_is_ok(struct hns_roce_qp *hr_qp) +{ + return ((hr_qp->ibqp.qp_type == ib_qpt_rc || + hr_qp->ibqp.qp_type == ib_qpt_ud || + hr_qp->ibqp.qp_type == ib_qpt_xrc_ini || + hr_qp->ibqp.qp_type == ib_qpt_xrc_tgt) && + hr_qp->state != ib_qps_reset); +} + - if ((hr_qp->ibqp.qp_type == ib_qpt_rc || - hr_qp->ibqp.qp_type == ib_qpt_ud) && - hr_qp->state != ib_qps_reset) { + if (modify_qp_is_ok(hr_qp)) { - hr_reg_write(ctx, srqc_xrcd, 0); + hr_reg_write(ctx, srqc_xrcd, srq->xrcdn); + case hns_roce_event_type_xrcd_violation: + ibdev_err(ibdev, "xrc domain violation error. "); + break; + case hns_roce_event_type_invalid_xrceth: + ibdev_err(ibdev, "invalid xrceth error. 
"); + break; + case hns_roce_event_type_xrcd_violation: + case hns_roce_event_type_invalid_xrceth: diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h +#define hns_roce_v2_max_xrcd_num 0x1000000 +#define hns_roce_v2_rsv_xrcd_num 0 diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c --- a/drivers/infiniband/hw/hns/hns_roce_main.c +++ b/drivers/infiniband/hw/hns/hns_roce_main.c + if (hr_dev->caps.flags & hns_roce_cap_flag_xrc) + props->device_cap_flags |= ib_device_xrc; + + resp.srq_tab_size = hr_dev->caps.num_srqs; +static const struct ib_device_ops hns_roce_dev_xrcd_ops = { + .alloc_xrcd = hns_roce_alloc_xrcd, + .dealloc_xrcd = hns_roce_dealloc_xrcd, + + init_rdma_obj_size(ib_xrcd, hns_roce_xrcd, ibxrcd), +}; + - /* mw */ - /* frmr */ - /* srq */ + if (hr_dev->caps.flags & hns_roce_cap_flag_xrc) + ib_set_device_ops(ib_dev, &hns_roce_dev_xrcd_ops); + + if (hr_dev->caps.flags & hns_roce_cap_flag_xrc) { + ret = hns_roce_init_xrcd_table(hr_dev); + if (ret) { + dev_err(dev, "failed to init xrcd table, ret = %d. 
", + ret); + goto err_pd_table_free; + } + } + - goto err_pd_table_free; + goto err_xrcd_table_free; +err_xrcd_table_free: + if (hr_dev->caps.flags & hns_roce_cap_flag_xrc) + hns_roce_cleanup_xrcd_table(hr_dev); + diff --git a/drivers/infiniband/hw/hns/hns_roce_pd.c b/drivers/infiniband/hw/hns/hns_roce_pd.c --- a/drivers/infiniband/hw/hns/hns_roce_pd.c +++ b/drivers/infiniband/hw/hns/hns_roce_pd.c + +static int hns_roce_xrcd_alloc(struct hns_roce_dev *hr_dev, u32 *xrcdn) +{ + return hns_roce_bitmap_alloc(&hr_dev->xrcd_bitmap, + (unsigned long *)xrcdn); +} + +static void hns_roce_xrcd_free(struct hns_roce_dev *hr_dev, + u32 xrcdn) +{ + hns_roce_bitmap_free(&hr_dev->xrcd_bitmap, xrcdn, bitmap_no_rr); +} + +int hns_roce_init_xrcd_table(struct hns_roce_dev *hr_dev) +{ + return hns_roce_bitmap_init(&hr_dev->xrcd_bitmap, + hr_dev->caps.num_xrcds, + hr_dev->caps.num_xrcds - 1, + hr_dev->caps.reserved_xrcds, 0); +} + +void hns_roce_cleanup_xrcd_table(struct hns_roce_dev *hr_dev) +{ + hns_roce_bitmap_cleanup(&hr_dev->xrcd_bitmap); +} + +int hns_roce_alloc_xrcd(struct ib_xrcd *ib_xrcd, struct ib_udata *udata) +{ + struct hns_roce_dev *hr_dev = to_hr_dev(ib_xrcd->device); + struct hns_roce_xrcd *xrcd = to_hr_xrcd(ib_xrcd); + int ret; + + if (!(hr_dev->caps.flags & hns_roce_cap_flag_xrc)) + return -einval; + + ret = hns_roce_xrcd_alloc(hr_dev, &xrcd->xrcdn); + if (ret) { + dev_err(hr_dev->dev, "failed to alloc xrcdn, ret = %d. 
", ret); + return ret; + } + + return 0; +} + +int hns_roce_dealloc_xrcd(struct ib_xrcd *ib_xrcd, struct ib_udata *udata) +{ + hns_roce_xrcd_free(to_hr_dev(ib_xrcd->device), + to_hr_xrcd(ib_xrcd)->xrcdn); + + return 0; +} diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c --- a/drivers/infiniband/hw/hns/hns_roce_qp.c +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c - event_type == hns_roce_event_type_local_wq_access_error)) { + event_type == hns_roce_event_type_local_wq_access_error || + event_type == hns_roce_event_type_xrcd_violation || + event_type == hns_roce_event_type_invalid_xrceth)) { + case hns_roce_event_type_xrcd_violation: + case hns_roce_event_type_invalid_xrceth: - list_del(&hr_qp->sq_node); - list_del(&hr_qp->rq_node); + + if (hr_qp->ibqp.qp_type != ib_qpt_xrc_tgt) + list_del(&hr_qp->sq_node); + + if (hr_qp->ibqp.qp_type != ib_qpt_xrc_ini && + hr_qp->ibqp.qp_type != ib_qpt_xrc_tgt) + list_del(&hr_qp->rq_node); + case ib_qpt_xrc_ini: + case ib_qpt_xrc_tgt: + if (!(hr_dev->caps.flags & hns_roce_cap_flag_xrc)) + goto out; + break; - fallthrough; + break; - struct hns_roce_dev *hr_dev = to_hr_dev(pd->device); - struct ib_device *ibdev = &hr_dev->ib_dev; + struct ib_device *ibdev = pd ? 
pd->device : init_attr->xrcd->device; + struct hns_roce_dev *hr_dev = to_hr_dev(ibdev); + if (init_attr->qp_type == ib_qpt_xrc_ini) + init_attr->recv_cq = null; + + if (init_attr->qp_type == ib_qpt_xrc_tgt) { + hr_qp->xrcdn = to_hr_xrcd(init_attr->xrcd)->xrcdn; + init_attr->recv_cq = null; + init_attr->send_cq = null; + } + - int transport_type; - - if (qp_type == ib_qpt_rc) - transport_type = serv_type_rc; - else if (qp_type == ib_qpt_uc) - transport_type = serv_type_uc; - else if (qp_type == ib_qpt_ud) - transport_type = serv_type_ud; - else if (qp_type == ib_qpt_gsi) - transport_type = serv_type_ud; - else - transport_type = -1; - - return transport_type; + switch (qp_type) { + case ib_qpt_rc: + return serv_type_rc; + case ib_qpt_uc: + return serv_type_uc; + case ib_qpt_ud: + case ib_qpt_gsi: + return serv_type_ud; + case ib_qpt_xrc_ini: + case ib_qpt_xrc_tgt: + return serv_type_xrc; + default: + return -1; + } diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c --- a/drivers/infiniband/hw/hns/hns_roce_srq.c +++ b/drivers/infiniband/hw/hns/hns_roce_srq.c + + srq->xrcdn = (init_attr->srq_type == ib_srqt_xrc) ? + to_hr_xrcd(init_attr->ext.xrc.xrcd)->xrcdn : 0; diff --git a/include/uapi/rdma/hns-abi.h b/include/uapi/rdma/hns-abi.h --- a/include/uapi/rdma/hns-abi.h +++ b/include/uapi/rdma/hns-abi.h + __u32 srq_tab_size; + __u32 reserved;
|
Networking
|
32548870d438aba3c4a13f07efb73a8b86de507d
|
wenpeng liang
|
include
|
uapi
|
hns, hw, rdma
|
rdma/hns: simplify function's resource related command
|
use hr_reg_write/read() to simplify the code that configures a function's resources. and because the pf and vf fields share the same layout, they only need to be defined once.
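The technique behind `hr_reg_write/read()` can be sketched in userspace. This is an illustrative re-creation, not the kernel macros (which build on `FIELD_GET`/`GENMASK` over `__le32` words, as the diff shows): a command descriptor is treated as an array of 32-bit words, and each field is named by its absolute high/low bit positions across the whole descriptor, so the pf and vf variants of one layout need only a single set of field definitions. The function names `field_mask`, `reg_read`, and `reg_write` are assumptions for this sketch.

```c
#include <assert.h>
#include <stdint.h>

/* mask covering bits [high..low] of a 32-bit word */
static uint32_t field_mask(int high, int low)
{
	return (uint32_t)((((uint64_t)1 << (high - low + 1)) - 1) << low);
}

/* read field [high..low] out of an array of 32-bit words; absolute bit
 * positions index into the array, so one definition serves the layout */
static uint32_t reg_read(const uint32_t *regs, int high, int low)
{
	assert(high / 32 == low / 32); /* a field may not straddle words */
	return (regs[high / 32] & field_mask(high % 32, low % 32))
	       >> (low % 32);
}

/* write field [high..low], leaving the other bits of the word intact */
static void reg_write(uint32_t *regs, int high, int low, uint32_t val)
{
	uint32_t mask = field_mask(high % 32, low % 32);

	regs[high / 32] = (regs[high / 32] & ~mask) |
			  ((val << (low % 32)) & mask);
}
```

This is why the diff can replace dozens of per-struct `*_s`/`*_m` shift/mask pairs with single `cmq_req_field_loc(high, low)` definitions: the word index and the in-word shift both fall out of the absolute bit positions.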
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
support roce on virtual functions of hip09
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['hns']
|
['h', 'c']
| 3
| 89
| 305
|
--- diff --git a/drivers/infiniband/hw/hns/hns_roce_common.h b/drivers/infiniband/hw/hns/hns_roce_common.h --- a/drivers/infiniband/hw/hns/hns_roce_common.h +++ b/drivers/infiniband/hw/hns/hns_roce_common.h +#define _hr_reg_read(ptr, field_type, field_h, field_l) \ + ({ \ + const field_type *_ptr = ptr; \ + build_bug_on(((field_h) / 32) != ((field_l) / 32)); \ + field_get(genmask((field_h) % 32, (field_l) % 32), \ + le32_to_cpu(*((__le32 *)_ptr + (field_h) / 32))); \ + }) + +#define hr_reg_read(ptr, field) _hr_reg_read(ptr, field) + diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c - struct hns_roce_cfg_global_param *req; + struct hns_roce_cmq_req *req = (struct hns_roce_cmq_req *)desc.data; - req = (struct hns_roce_cfg_global_param *)desc.data; - memset(req, 0, sizeof(*req)); - roce_set_field(req->time_cfg_udp_port, - cfg_global_param_data_0_rocee_time_1us_cfg_m, - cfg_global_param_data_0_rocee_time_1us_cfg_s, 0x3e8); - roce_set_field(req->time_cfg_udp_port, - cfg_global_param_data_0_rocee_udp_port_m, - cfg_global_param_data_0_rocee_udp_port_s, - roce_v2_udp_dport); + hr_reg_write(req, cfg_global_param_1us_cycles, 0x3e8); + hr_reg_write(req, cfg_global_param_udp_port, roce_v2_udp_dport); - struct hns_roce_pf_res_a *req_a; - struct hns_roce_pf_res_b *req_b; + struct hns_roce_cmq_req *r_a = (struct hns_roce_cmq_req *)desc[0].data; + struct hns_roce_cmq_req *r_b = (struct hns_roce_cmq_req *)desc[1].data; + enum hns_roce_opcode_type opcode = hns_roce_opc_query_pf_res; + struct hns_roce_caps *caps = &hr_dev->caps; - hns_roce_cmq_setup_basic_desc(&desc[0], hns_roce_opc_query_pf_res, - true); + hns_roce_cmq_setup_basic_desc(&desc[0], opcode, true); - - hns_roce_cmq_setup_basic_desc(&desc[1], hns_roce_opc_query_pf_res, - true); + hns_roce_cmq_setup_basic_desc(&desc[1], opcode, true); - req_a = (struct hns_roce_pf_res_a 
*)desc[0].data; - req_b = (struct hns_roce_pf_res_b *)desc[1].data; - - hr_dev->caps.qpc_bt_num = roce_get_field(req_a->qpc_bt_idx_num, - pf_res_data_1_pf_qpc_bt_num_m, - pf_res_data_1_pf_qpc_bt_num_s); - hr_dev->caps.srqc_bt_num = roce_get_field(req_a->srqc_bt_idx_num, - pf_res_data_2_pf_srqc_bt_num_m, - pf_res_data_2_pf_srqc_bt_num_s); - hr_dev->caps.cqc_bt_num = roce_get_field(req_a->cqc_bt_idx_num, - pf_res_data_3_pf_cqc_bt_num_m, - pf_res_data_3_pf_cqc_bt_num_s); - hr_dev->caps.mpt_bt_num = roce_get_field(req_a->mpt_bt_idx_num, - pf_res_data_4_pf_mpt_bt_num_m, - pf_res_data_4_pf_mpt_bt_num_s); - - hr_dev->caps.sl_num = roce_get_field(req_b->qid_idx_sl_num, - pf_res_data_3_pf_sl_num_m, - pf_res_data_3_pf_sl_num_s); - hr_dev->caps.sccc_bt_num = roce_get_field(req_b->sccc_bt_idx_num, - pf_res_data_4_pf_sccc_bt_num_m, - pf_res_data_4_pf_sccc_bt_num_s); - - hr_dev->caps.gmv_bt_num = roce_get_field(req_b->gmv_idx_num, - pf_res_data_5_pf_gmv_bt_num_m, - pf_res_data_5_pf_gmv_bt_num_s); + caps->qpc_bt_num = hr_reg_read(r_a, func_res_a_qpc_bt_num); + caps->srqc_bt_num = hr_reg_read(r_a, func_res_a_srqc_bt_num); + caps->cqc_bt_num = hr_reg_read(r_a, func_res_a_cqc_bt_num); + caps->mpt_bt_num = hr_reg_read(r_a, func_res_a_mpt_bt_num); + caps->sccc_bt_num = hr_reg_read(r_b, func_res_b_sccc_bt_num); + caps->sl_num = hr_reg_read(r_b, func_res_b_qid_num); + caps->gmv_bt_num = hr_reg_read(r_b, func_res_b_gmv_bt_num); - struct hns_roce_pf_timer_res_a *req_a; + struct hns_roce_cmq_req *req = (struct hns_roce_cmq_req *)desc.data; + struct hns_roce_caps *caps = &hr_dev->caps; - req_a = (struct hns_roce_pf_timer_res_a *)desc.data; - - hr_dev->caps.qpc_timer_bt_num = - roce_get_field(req_a->qpc_timer_bt_idx_num, - pf_res_data_1_pf_qpc_timer_bt_num_m, - pf_res_data_1_pf_qpc_timer_bt_num_s); - hr_dev->caps.cqc_timer_bt_num = - roce_get_field(req_a->cqc_timer_bt_idx_num, - pf_res_data_2_pf_cqc_timer_bt_num_m, - pf_res_data_2_pf_cqc_timer_bt_num_s); + caps->qpc_timer_bt_num = 
hr_reg_read(req, pf_timer_res_qpc_item_num); + caps->cqc_timer_bt_num = hr_reg_read(req, pf_timer_res_cqc_item_num); - struct hns_roce_vf_res_a *req_a; - struct hns_roce_vf_res_b *req_b; - - req_a = (struct hns_roce_vf_res_a *)desc[0].data; - req_b = (struct hns_roce_vf_res_b *)desc[1].data; + struct hns_roce_cmq_req *r_a = (struct hns_roce_cmq_req *)desc[0].data; + struct hns_roce_cmq_req *r_b = (struct hns_roce_cmq_req *)desc[1].data; + enum hns_roce_opcode_type opcode = hns_roce_opc_alloc_vf_res; - hns_roce_cmq_setup_basic_desc(&desc[0], hns_roce_opc_alloc_vf_res, - false); + hns_roce_cmq_setup_basic_desc(&desc[0], opcode, false); + hns_roce_cmq_setup_basic_desc(&desc[1], opcode, false); - hns_roce_cmq_setup_basic_desc(&desc[1], hns_roce_opc_alloc_vf_res, - false); - - roce_set_field(req_a->vf_qpc_bt_idx_num, - vf_res_a_data_1_vf_qpc_bt_idx_m, - vf_res_a_data_1_vf_qpc_bt_idx_s, 0); - roce_set_field(req_a->vf_qpc_bt_idx_num, - vf_res_a_data_1_vf_qpc_bt_num_m, - vf_res_a_data_1_vf_qpc_bt_num_s, hns_roce_vf_qpc_bt_num); - - roce_set_field(req_a->vf_srqc_bt_idx_num, - vf_res_a_data_2_vf_srqc_bt_idx_m, - vf_res_a_data_2_vf_srqc_bt_idx_s, 0); - roce_set_field(req_a->vf_srqc_bt_idx_num, - vf_res_a_data_2_vf_srqc_bt_num_m, - vf_res_a_data_2_vf_srqc_bt_num_s, - hns_roce_vf_srqc_bt_num); - - roce_set_field(req_a->vf_cqc_bt_idx_num, - vf_res_a_data_3_vf_cqc_bt_idx_m, - vf_res_a_data_3_vf_cqc_bt_idx_s, 0); - roce_set_field(req_a->vf_cqc_bt_idx_num, - vf_res_a_data_3_vf_cqc_bt_num_m, - vf_res_a_data_3_vf_cqc_bt_num_s, hns_roce_vf_cqc_bt_num); - - roce_set_field(req_a->vf_mpt_bt_idx_num, - vf_res_a_data_4_vf_mpt_bt_idx_m, - vf_res_a_data_4_vf_mpt_bt_idx_s, 0); - roce_set_field(req_a->vf_mpt_bt_idx_num, - vf_res_a_data_4_vf_mpt_bt_num_m, - vf_res_a_data_4_vf_mpt_bt_num_s, hns_roce_vf_mpt_bt_num); - - roce_set_field(req_a->vf_eqc_bt_idx_num, vf_res_a_data_5_vf_eqc_idx_m, - vf_res_a_data_5_vf_eqc_idx_s, 0); - roce_set_field(req_a->vf_eqc_bt_idx_num, vf_res_a_data_5_vf_eqc_num_m, 
- vf_res_a_data_5_vf_eqc_num_s, hns_roce_vf_eqc_num); - - roce_set_field(req_b->vf_smac_idx_num, vf_res_b_data_1_vf_smac_idx_m, - vf_res_b_data_1_vf_smac_idx_s, 0); - roce_set_field(req_b->vf_smac_idx_num, vf_res_b_data_1_vf_smac_num_m, - vf_res_b_data_1_vf_smac_num_s, hns_roce_vf_smac_num); - - roce_set_field(req_b->vf_sgid_idx_num, vf_res_b_data_2_vf_sgid_idx_m, - vf_res_b_data_2_vf_sgid_idx_s, 0); - roce_set_field(req_b->vf_sgid_idx_num, vf_res_b_data_2_vf_sgid_num_m, - vf_res_b_data_2_vf_sgid_num_s, hns_roce_vf_sgid_num); - - roce_set_field(req_b->vf_qid_idx_sl_num, vf_res_b_data_3_vf_qid_idx_m, - vf_res_b_data_3_vf_qid_idx_s, 0); - roce_set_field(req_b->vf_qid_idx_sl_num, vf_res_b_data_3_vf_sl_num_m, - vf_res_b_data_3_vf_sl_num_s, hns_roce_vf_sl_num); - - roce_set_field(req_b->vf_sccc_idx_num, vf_res_b_data_4_vf_sccc_bt_idx_m, - vf_res_b_data_4_vf_sccc_bt_idx_s, 0); - roce_set_field(req_b->vf_sccc_idx_num, vf_res_b_data_4_vf_sccc_bt_num_m, - vf_res_b_data_4_vf_sccc_bt_num_s, - hns_roce_vf_sccc_bt_num); + hr_reg_write(r_a, func_res_a_qpc_bt_num, hns_roce_vf_qpc_bt_num); + hr_reg_write(r_a, func_res_a_qpc_bt_idx, 0); + hr_reg_write(r_a, func_res_a_srqc_bt_num, hns_roce_vf_srqc_bt_num); + hr_reg_write(r_a, func_res_a_srqc_bt_idx, 0); + hr_reg_write(r_a, func_res_a_cqc_bt_num, hns_roce_vf_cqc_bt_num); + hr_reg_write(r_a, func_res_a_cqc_bt_idx, 0); + hr_reg_write(r_a, func_res_a_mpt_bt_num, hns_roce_vf_mpt_bt_num); + hr_reg_write(r_a, func_res_a_mpt_bt_idx, 0); + hr_reg_write(r_a, func_res_a_eqc_bt_num, hns_roce_vf_eqc_num); + hr_reg_write(r_a, func_res_a_eqc_bt_idx, 0); + hr_reg_write(r_b, func_res_b_smac_num, hns_roce_vf_smac_num); + hr_reg_write(r_b, func_res_b_smac_idx, 0); + hr_reg_write(r_b, func_res_b_sgid_num, hns_roce_vf_sgid_num); + hr_reg_write(r_b, func_res_b_sgid_idx, 0); + hr_reg_write(r_b, func_res_v_qid_num, hns_roce_vf_sl_num); + hr_reg_write(r_b, func_res_b_qid_idx, 0); + hr_reg_write(r_b, func_res_b_sccc_bt_num, hns_roce_vf_sccc_bt_num); + 
hr_reg_write(r_b, func_res_b_sccc_bt_idx, 0); diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h -struct hns_roce_cfg_global_param { - __le32 time_cfg_udp_port; - __le32 rsv[5]; -}; - -#define cfg_global_param_data_0_rocee_time_1us_cfg_s 0 -#define cfg_global_param_data_0_rocee_time_1us_cfg_m genmask(9, 0) - -#define cfg_global_param_data_0_rocee_udp_port_s 16 -#define cfg_global_param_data_0_rocee_udp_port_m genmask(31, 16) - -struct hns_roce_pf_res_a { - __le32 rsv; - __le32 qpc_bt_idx_num; - __le32 srqc_bt_idx_num; - __le32 cqc_bt_idx_num; - __le32 mpt_bt_idx_num; - __le32 eqc_bt_idx_num; -}; - -#define pf_res_data_1_pf_qpc_bt_idx_s 0 -#define pf_res_data_1_pf_qpc_bt_idx_m genmask(10, 0) - -#define pf_res_data_1_pf_qpc_bt_num_s 16 -#define pf_res_data_1_pf_qpc_bt_num_m genmask(27, 16) - -#define pf_res_data_2_pf_srqc_bt_idx_s 0 -#define pf_res_data_2_pf_srqc_bt_idx_m genmask(8, 0) - -#define pf_res_data_2_pf_srqc_bt_num_s 16 -#define pf_res_data_2_pf_srqc_bt_num_m genmask(25, 16) - -#define pf_res_data_3_pf_cqc_bt_idx_s 0 -#define pf_res_data_3_pf_cqc_bt_idx_m genmask(8, 0) - -#define pf_res_data_3_pf_cqc_bt_num_s 16 -#define pf_res_data_3_pf_cqc_bt_num_m genmask(25, 16) - -#define pf_res_data_4_pf_mpt_bt_idx_s 0 -#define pf_res_data_4_pf_mpt_bt_idx_m genmask(8, 0) - -#define pf_res_data_4_pf_mpt_bt_num_s 16 -#define pf_res_data_4_pf_mpt_bt_num_m genmask(25, 16) - -#define pf_res_data_5_pf_eqc_bt_idx_s 0 -#define pf_res_data_5_pf_eqc_bt_idx_m genmask(8, 0) - -#define pf_res_data_5_pf_eqc_bt_num_s 16 -#define pf_res_data_5_pf_eqc_bt_num_m genmask(25, 16) - -struct hns_roce_pf_res_b { - __le32 rsv0; - __le32 smac_idx_num; - __le32 sgid_idx_num; - __le32 qid_idx_sl_num; - __le32 sccc_bt_idx_num; - __le32 gmv_idx_num; -}; - -#define pf_res_data_1_pf_smac_idx_s 0 -#define pf_res_data_1_pf_smac_idx_m genmask(7, 0) - 
-#define pf_res_data_1_pf_smac_num_s 8 -#define pf_res_data_1_pf_smac_num_m genmask(16, 8) - -#define pf_res_data_2_pf_sgid_idx_s 0 -#define pf_res_data_2_pf_sgid_idx_m genmask(7, 0) - -#define pf_res_data_2_pf_sgid_num_s 8 -#define pf_res_data_2_pf_sgid_num_m genmask(16, 8) - -#define pf_res_data_3_pf_qid_idx_s 0 -#define pf_res_data_3_pf_qid_idx_m genmask(9, 0) - -#define pf_res_data_3_pf_sl_num_s 16 -#define pf_res_data_3_pf_sl_num_m genmask(26, 16) - -#define pf_res_data_4_pf_sccc_bt_idx_s 0 -#define pf_res_data_4_pf_sccc_bt_idx_m genmask(8, 0) - -#define pf_res_data_4_pf_sccc_bt_num_s 9 -#define pf_res_data_4_pf_sccc_bt_num_m genmask(17, 9) - -#define pf_res_data_5_pf_gmv_bt_idx_s 0 -#define pf_res_data_5_pf_gmv_bt_idx_m genmask(7, 0) - -#define pf_res_data_5_pf_gmv_bt_num_s 8 -#define pf_res_data_5_pf_gmv_bt_num_m genmask(16, 8) - -struct hns_roce_pf_timer_res_a { - __le32 rsv0; - __le32 qpc_timer_bt_idx_num; - __le32 cqc_timer_bt_idx_num; - __le32 rsv[3]; -}; +/* fields of hns_roce_opc_cfg_global_param */ +#define cfg_global_param_1us_cycles cmq_req_field_loc(9, 0) +#define cfg_global_param_udp_port cmq_req_field_loc(31, 16) -#define pf_res_data_1_pf_qpc_timer_bt_idx_s 0 -#define pf_res_data_1_pf_qpc_timer_bt_idx_m genmask(11, 0) - -#define pf_res_data_1_pf_qpc_timer_bt_num_s 16 -#define pf_res_data_1_pf_qpc_timer_bt_num_m genmask(28, 16) - -#define pf_res_data_2_pf_cqc_timer_bt_idx_s 0 -#define pf_res_data_2_pf_cqc_timer_bt_idx_m genmask(10, 0) - -#define pf_res_data_2_pf_cqc_timer_bt_num_s 16 -#define pf_res_data_2_pf_cqc_timer_bt_num_m genmask(27, 16) - -struct hns_roce_vf_res_a { - __le32 vf_id; - __le32 vf_qpc_bt_idx_num; - __le32 vf_srqc_bt_idx_num; - __le32 vf_cqc_bt_idx_num; - __le32 vf_mpt_bt_idx_num; - __le32 vf_eqc_bt_idx_num; -}; - -#define vf_res_a_data_1_vf_qpc_bt_idx_s 0 -#define vf_res_a_data_1_vf_qpc_bt_idx_m genmask(10, 0) - -#define vf_res_a_data_1_vf_qpc_bt_num_s 16 -#define vf_res_a_data_1_vf_qpc_bt_num_m genmask(27, 16) - -#define 
vf_res_a_data_2_vf_srqc_bt_idx_s 0 -#define vf_res_a_data_2_vf_srqc_bt_idx_m genmask(8, 0) - -#define vf_res_a_data_2_vf_srqc_bt_num_s 16 -#define vf_res_a_data_2_vf_srqc_bt_num_m genmask(25, 16) - -#define vf_res_a_data_3_vf_cqc_bt_idx_s 0 -#define vf_res_a_data_3_vf_cqc_bt_idx_m genmask(8, 0) - -#define vf_res_a_data_3_vf_cqc_bt_num_s 16 -#define vf_res_a_data_3_vf_cqc_bt_num_m genmask(25, 16) - -#define vf_res_a_data_4_vf_mpt_bt_idx_s 0 -#define vf_res_a_data_4_vf_mpt_bt_idx_m genmask(8, 0) - -#define vf_res_a_data_4_vf_mpt_bt_num_s 16 -#define vf_res_a_data_4_vf_mpt_bt_num_m genmask(25, 16) - -#define vf_res_a_data_5_vf_eqc_idx_s 0 -#define vf_res_a_data_5_vf_eqc_idx_m genmask(8, 0) - -#define vf_res_a_data_5_vf_eqc_num_s 16 -#define vf_res_a_data_5_vf_eqc_num_m genmask(25, 16) - -struct hns_roce_vf_res_b { - __le32 rsv0; - __le32 vf_smac_idx_num; - __le32 vf_sgid_idx_num; - __le32 vf_qid_idx_sl_num; - __le32 vf_sccc_idx_num; - __le32 vf_gmv_idx_num; -}; - -#define vf_res_b_data_0_vf_id_s 0 -#define vf_res_b_data_0_vf_id_m genmask(7, 0) - -#define vf_res_b_data_1_vf_smac_idx_s 0 -#define vf_res_b_data_1_vf_smac_idx_m genmask(7, 0) - -#define vf_res_b_data_1_vf_smac_num_s 8 -#define vf_res_b_data_1_vf_smac_num_m genmask(16, 8) - -#define vf_res_b_data_2_vf_sgid_idx_s 0 -#define vf_res_b_data_2_vf_sgid_idx_m genmask(7, 0) - -#define vf_res_b_data_2_vf_sgid_num_s 8 -#define vf_res_b_data_2_vf_sgid_num_m genmask(16, 8) - -#define vf_res_b_data_3_vf_qid_idx_s 0 -#define vf_res_b_data_3_vf_qid_idx_m genmask(9, 0) - -#define vf_res_b_data_3_vf_sl_num_s 16 -#define vf_res_b_data_3_vf_sl_num_m genmask(19, 16) - -#define vf_res_b_data_4_vf_sccc_bt_idx_s 0 -#define vf_res_b_data_4_vf_sccc_bt_idx_m genmask(8, 0) - -#define vf_res_b_data_4_vf_sccc_bt_num_s 9 -#define vf_res_b_data_4_vf_sccc_bt_num_m genmask(17, 9) - -#define vf_res_b_data_5_vf_gmv_bt_idx_s 0 -#define vf_res_b_data_5_vf_gmv_bt_idx_m genmask(7, 0) - -#define vf_res_b_data_5_vf_gmv_bt_num_s 16 -#define 
vf_res_b_data_5_vf_gmv_bt_num_m genmask(24, 16) +/* + * fields of hns_roce_opc_query_pf_res and hns_roce_opc_alloc_vf_res + */ +#define func_res_a_vf_id cmq_req_field_loc(7, 0) +#define func_res_a_qpc_bt_idx cmq_req_field_loc(42, 32) +#define func_res_a_qpc_bt_num cmq_req_field_loc(59, 48) +#define func_res_a_srqc_bt_idx cmq_req_field_loc(72, 64) +#define func_res_a_srqc_bt_num cmq_req_field_loc(89, 80) +#define func_res_a_cqc_bt_idx cmq_req_field_loc(104, 96) +#define func_res_a_cqc_bt_num cmq_req_field_loc(121, 112) +#define func_res_a_mpt_bt_idx cmq_req_field_loc(136, 128) +#define func_res_a_mpt_bt_num cmq_req_field_loc(153, 144) +#define func_res_a_eqc_bt_idx cmq_req_field_loc(168, 160) +#define func_res_a_eqc_bt_num cmq_req_field_loc(185, 176) +#define func_res_b_smac_idx cmq_req_field_loc(39, 32) +#define func_res_b_smac_num cmq_req_field_loc(48, 40) +#define func_res_b_sgid_idx cmq_req_field_loc(71, 64) +#define func_res_b_sgid_num cmq_req_field_loc(80, 72) +#define func_res_b_qid_idx cmq_req_field_loc(105, 96) +#define func_res_b_qid_num cmq_req_field_loc(122, 112) +#define func_res_v_qid_num cmq_req_field_loc(115, 112) + +#define func_res_b_sccc_bt_idx cmq_req_field_loc(136, 128) +#define func_res_b_sccc_bt_num cmq_req_field_loc(145, 137) +#define func_res_b_gmv_bt_idx cmq_req_field_loc(167, 160) +#define func_res_b_gmv_bt_num cmq_req_field_loc(176, 168) +#define func_res_v_gmv_bt_num cmq_req_field_loc(184, 176) + +/* fields of hns_roce_opc_query_pf_timer_res */ +#define pf_timer_res_qpc_item_idx cmq_req_field_loc(43, 32) +#define pf_timer_res_qpc_item_num cmq_req_field_loc(60, 48) +#define pf_timer_res_cqc_item_idx cmq_req_field_loc(74, 64) +#define pf_timer_res_cqc_item_num cmq_req_field_loc(91, 80)
|
Networking
|
0fb46da051aec3c143e41adc321f3c8a7506d19c
|
xi wang
|
drivers
|
infiniband
|
hns, hw
|
rdma/hns: query the number of functions supported by the pf
|
query the number of functions supported by the pf from the firmware and store it in the hns_roce_dev structure, where it will be used to configure the virtual functions.
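The fallback behaviour described above (default to a single function on pre-HIP09 hardware or when the firmware query fails) can be sketched as below. This is a minimal illustration, not the driver's actual API: `resolve_func_num` is a hypothetical helper, and the `0x30` revision value is assumed from the driver's `PCI_REVISION_ID_HIP09` constant.

```c
#include <assert.h>

/* Assumed value of the driver's PCI_REVISION_ID_HIP09 constant. */
#define PCI_REVISION_ID_HIP09 0x30

/* Hypothetical helper mirroring the fallback in this commit: the
 * firmware-reported function count is only trusted on HIP09 or newer
 * hardware and when the query command itself succeeded; otherwise the
 * driver conservatively assumes one function (the PF alone). */
static unsigned int resolve_func_num(unsigned int pci_revision,
				     int query_ret,
				     unsigned int fw_func_num)
{
	if (pci_revision < PCI_REVISION_ID_HIP09)
		return 1;	/* multi-function query not supported */
	if (query_ret)
		return 1;	/* firmware query failed, fall back to 1 */
	return fw_func_num;	/* value reported by the firmware */
}
```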
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
support roce on virtual functions of hip09
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['hns ']
|
['h', 'c']
| 3
| 10
| 4
|
--- diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h --- a/drivers/infiniband/hw/hns/hns_roce_device.h +++ b/drivers/infiniband/hw/hns/hns_roce_device.h + u32 func_num; diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c - if (hr_dev->pci_dev->revision < pci_revision_id_hip09) + if (hr_dev->pci_dev->revision < pci_revision_id_hip09) { + hr_dev->func_num = 1; + } - if (ret) + if (ret) { + hr_dev->func_num = 1; + } + hr_dev->func_num = le32_to_cpu(desc.func_info.own_func_num); diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h - __le32 rsv1; + __le32 own_func_num; - __le32 rsv2[4]; + __le32 rsv[4];
|
Networking
|
5b03a4226c42cf805c0ea11519c936cd76103ddd
|
wei xu
|
drivers
|
infiniband
|
hns, hw
|
rdma/hns: reserve the resource for the vfs
|
query the resources, including eqc/smac/sgid, from the firmware in the pf and distribute them fairly among all the functions belonging to the pf.
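The even split visible in the diff (each count divided by `func_num`, each function's index window starting at `vf_id * share`) can be sketched as follows. `struct res_share` and `split_resource` are illustrative names, not part of the driver; the arithmetic matches the `hr_reg_read(...) / func_num` and `vf_id * caps->*_bt_num` expressions in the commit.

```c
#include <assert.h>

/* Illustrative share of one firmware-managed resource for one function. */
struct res_share {
	unsigned int num;	/* entries assigned to this function */
	unsigned int idx;	/* first entry index for this function */
};

/* Hypothetical helper mirroring the distribution in this commit: the
 * total count queried from firmware is divided evenly among all
 * functions, and function vf_id's window begins right after the
 * windows of the lower-numbered functions. */
static struct res_share split_resource(unsigned int total,
				       unsigned int func_num,
				       unsigned int vf_id)
{
	struct res_share s;

	s.num = total / func_num;	/* per-function share */
	s.idx = vf_id * s.num;		/* start of this function's window */
	return s;
}
```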
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
support roce on virtual functions of hip09
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['hns ']
|
['h', 'c']
| 3
| 60
| 28
|
--- diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h --- a/drivers/infiniband/hw/hns/hns_roce_device.h +++ b/drivers/infiniband/hw/hns/hns_roce_device.h + u32 eqc_bt_num; + u32 smac_bt_num; + u32 sgid_bt_num; diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c + u32 func_num; + func_num = hr_dev->func_num ? hr_dev->func_num : 1; - caps->qpc_bt_num = hr_reg_read(r_a, func_res_a_qpc_bt_num); - caps->srqc_bt_num = hr_reg_read(r_a, func_res_a_srqc_bt_num); - caps->cqc_bt_num = hr_reg_read(r_a, func_res_a_cqc_bt_num); - caps->mpt_bt_num = hr_reg_read(r_a, func_res_a_mpt_bt_num); - caps->sccc_bt_num = hr_reg_read(r_b, func_res_b_sccc_bt_num); - caps->sl_num = hr_reg_read(r_b, func_res_b_qid_num); - caps->gmv_bt_num = hr_reg_read(r_b, func_res_b_gmv_bt_num); + caps->qpc_bt_num = hr_reg_read(r_a, func_res_a_qpc_bt_num) / func_num; + caps->srqc_bt_num = hr_reg_read(r_a, func_res_a_srqc_bt_num) / func_num; + caps->cqc_bt_num = hr_reg_read(r_a, func_res_a_cqc_bt_num) / func_num; + caps->mpt_bt_num = hr_reg_read(r_a, func_res_a_mpt_bt_num) / func_num; + caps->eqc_bt_num = hr_reg_read(r_a, func_res_a_eqc_bt_num) / func_num; + caps->smac_bt_num = hr_reg_read(r_b, func_res_b_smac_num) / func_num; + caps->sgid_bt_num = hr_reg_read(r_b, func_res_b_sgid_num) / func_num; + caps->sccc_bt_num = hr_reg_read(r_b, func_res_b_sccc_bt_num) / func_num; + caps->sl_num = hr_reg_read(r_b, func_res_b_qid_num) / func_num; + caps->gmv_bt_num = hr_reg_read(r_b, func_res_b_gmv_bt_num) / func_num; -static int hns_roce_alloc_vf_resource(struct hns_roce_dev *hr_dev) +static int __hns_roce_alloc_vf_resource(struct hns_roce_dev *hr_dev, int vf_id) + struct hns_roce_caps *caps = &hr_dev->caps; - hr_reg_write(r_a, func_res_a_qpc_bt_num, hns_roce_vf_qpc_bt_num); - hr_reg_write(r_a, func_res_a_qpc_bt_idx, 0); - 
hr_reg_write(r_a, func_res_a_srqc_bt_num, hns_roce_vf_srqc_bt_num); - hr_reg_write(r_a, func_res_a_srqc_bt_idx, 0); - hr_reg_write(r_a, func_res_a_cqc_bt_num, hns_roce_vf_cqc_bt_num); - hr_reg_write(r_a, func_res_a_cqc_bt_idx, 0); - hr_reg_write(r_a, func_res_a_mpt_bt_num, hns_roce_vf_mpt_bt_num); - hr_reg_write(r_a, func_res_a_mpt_bt_idx, 0); - hr_reg_write(r_a, func_res_a_eqc_bt_num, hns_roce_vf_eqc_num); - hr_reg_write(r_a, func_res_a_eqc_bt_idx, 0); - hr_reg_write(r_b, func_res_b_smac_num, hns_roce_vf_smac_num); - hr_reg_write(r_b, func_res_b_smac_idx, 0); - hr_reg_write(r_b, func_res_b_sgid_num, hns_roce_vf_sgid_num); - hr_reg_write(r_b, func_res_b_sgid_idx, 0); - hr_reg_write(r_b, func_res_v_qid_num, hns_roce_vf_sl_num); - hr_reg_write(r_b, func_res_b_qid_idx, 0); - hr_reg_write(r_b, func_res_b_sccc_bt_num, hns_roce_vf_sccc_bt_num); - hr_reg_write(r_b, func_res_b_sccc_bt_idx, 0); + hr_reg_write(r_a, func_res_a_vf_id, vf_id); + + hr_reg_write(r_a, func_res_a_qpc_bt_num, caps->qpc_bt_num); + hr_reg_write(r_a, func_res_a_qpc_bt_idx, vf_id * caps->qpc_bt_num); + hr_reg_write(r_a, func_res_a_srqc_bt_num, caps->srqc_bt_num); + hr_reg_write(r_a, func_res_a_srqc_bt_idx, vf_id * caps->srqc_bt_num); + hr_reg_write(r_a, func_res_a_cqc_bt_num, caps->cqc_bt_num); + hr_reg_write(r_a, func_res_a_cqc_bt_idx, vf_id * caps->cqc_bt_num); + hr_reg_write(r_a, func_res_a_mpt_bt_num, caps->mpt_bt_num); + hr_reg_write(r_a, func_res_a_mpt_bt_idx, vf_id * caps->mpt_bt_num); + hr_reg_write(r_a, func_res_a_eqc_bt_num, caps->eqc_bt_num); + hr_reg_write(r_a, func_res_a_eqc_bt_idx, vf_id * caps->eqc_bt_num); + hr_reg_write(r_b, func_res_v_qid_num, caps->sl_num); + hr_reg_write(r_b, func_res_b_qid_idx, vf_id * caps->sl_num); + hr_reg_write(r_b, func_res_b_sccc_bt_num, caps->sccc_bt_num); + hr_reg_write(r_b, func_res_b_sccc_bt_idx, vf_id * caps->sccc_bt_num); + + if (hr_dev->pci_dev->revision >= pci_revision_id_hip09) { + hr_reg_write(r_b, func_res_v_gmv_bt_num, caps->gmv_bt_num); + 
hr_reg_write(r_b, func_res_b_gmv_bt_idx, + vf_id * caps->gmv_bt_num); + } else { + hr_reg_write(r_b, func_res_b_sgid_num, caps->sgid_bt_num); + hr_reg_write(r_b, func_res_b_sgid_idx, + vf_id * caps->sgid_bt_num); + hr_reg_write(r_b, func_res_b_smac_num, caps->smac_bt_num); + hr_reg_write(r_b, func_res_b_smac_idx, + vf_id * caps->smac_bt_num); + } +static int hns_roce_alloc_vf_resource(struct hns_roce_dev *hr_dev) +{ + int vf_id; + int ret; + + for (vf_id = 0; vf_id < hr_dev->func_num; vf_id++) { + ret = __hns_roce_alloc_vf_resource(hr_dev, vf_id); + if (ret) + return ret; + } + + return 0; +} + diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h -#define hns_roce_vf_eqc_num 64 -#define hns_roce_vf_sgid_num 32
|
Networking
|
2a424e1d112aee2b74786b5d29125ea57da1146f
|
wei xu
|
drivers
|
infiniband
|
hns, hw
|
rdma/hns: set parameters of all the functions belong to a pf
|
the switch parameters of all functions belonging to a pf, including its vfs, should be set.
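The wrapper introduced by this commit follows a simple pattern: apply a per-function helper to every function id from 0 to `func_num - 1` and abort on the first failure. A minimal sketch, with `set_all_funcs` as a hypothetical stand-in for `hns_roce_set_vf_switch_param()` and the callback standing in for `__hns_roce_set_vf_switch_param()`:

```c
#include <assert.h>

/* Sample callbacks for demonstration only. */
static int ok_fn(unsigned int vf_id) { (void)vf_id; return 0; }
static int fail_on_one(unsigned int vf_id) { return vf_id == 1 ? -22 : 0; }

/* Hypothetical stand-in for the wrapper added in this commit: run the
 * per-function setter for the PF and every VF, stopping at the first
 * error so the caller sees the failing function's return code. */
static int set_all_funcs(unsigned int func_num,
			 int (*set_one)(unsigned int vf_id))
{
	unsigned int vf_id;
	int ret;

	for (vf_id = 0; vf_id < func_num; vf_id++) {
		ret = set_one(vf_id);
		if (ret)
			return ret;
	}
	return 0;
}
```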
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
support roce on virtual functions of hip09
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['hns ']
|
['c']
| 1
| 16
| 2
|
--- diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c -static int hns_roce_set_vf_switch_param(struct hns_roce_dev *hr_dev, int vf_id) +static int __hns_roce_set_vf_switch_param(struct hns_roce_dev *hr_dev, + u32 vf_id) +static int hns_roce_set_vf_switch_param(struct hns_roce_dev *hr_dev) +{ + u32 vf_id; + int ret; + + for (vf_id = 0; vf_id < hr_dev->func_num; vf_id++) { + ret = __hns_roce_set_vf_switch_param(hr_dev, vf_id); + if (ret) + return ret; + } + return 0; +} + - ret = hns_roce_set_vf_switch_param(hr_dev, 0); + ret = hns_roce_set_vf_switch_param(hr_dev);
|
Networking
|
accfc1affe9e8f25a393a53fdf9936d5bc3dc001
|
wei xu
|
drivers
|
infiniband
|
hns, hw
|
rdma/hns: enable roce on virtual functions
|
introduce vf support by adding code changes to allow vf pci device initialization, assigning the reserved resources of the pf to the active vfs, setting the default capabilities, requesting the interrupts, handling reset, and reducing the default qp/gid numbers to avoid exceeding the hardware limits.
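One concrete sizing change in this commit replaces a fixed completion-vector count with `min_t(u32, caps->eqc_bt_num - 1, num_vectors - 2)`, so a function never claims more vectors than its EQ context entries or its MSI-X budget allow. A sketch of that arithmetic, with `comp_vec_num` as a hypothetical helper name:

```c
#include <assert.h>

/* Hypothetical helper mirroring the computation in the diff: one EQC
 * entry is reserved (hence eqc_bt_num - 1) and two interrupt vectors
 * are kept back for non-completion use (hence num_vectors - 2); the
 * usable completion-vector count is the smaller of the two. */
static unsigned int comp_vec_num(unsigned int eqc_bt_num,
				 unsigned int num_vectors)
{
	unsigned int by_eqc = eqc_bt_num - 1;
	unsigned int by_irq = num_vectors - 2;

	return by_eqc < by_irq ? by_eqc : by_irq;
}
```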
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
support roce on virtual functions of hip09
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['hns ']
|
['h', 'c']
| 3
| 202
| 39
|
--- diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h --- a/drivers/infiniband/hw/hns/hns_roce_device.h +++ b/drivers/infiniband/hw/hns/hns_roce_device.h + u32 is_vf; diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +static void calc_pg_sz(u32 obj_num, u32 obj_size, u32 hop_num, u32 ctx_bt_num, + u32 *buf_page_size, u32 *bt_page_size, u32 hem_type); + -static void hns_roce_function_clear(struct hns_roce_dev *hr_dev) +static void __hns_roce_function_clear(struct hns_roce_dev *hr_dev, int vf_id) + resp->rst_funcid_en = cpu_to_le32(vf_id); + resp->rst_funcid_en = cpu_to_le32(vf_id); - hr_dev->is_reset = true; + if (vf_id == 0) + hr_dev->is_reset = true; +static void hns_roce_free_vf_resource(struct hns_roce_dev *hr_dev, int vf_id) +{ + enum hns_roce_opcode_type opcode = hns_roce_opc_alloc_vf_res; + struct hns_roce_cmq_desc desc[2]; + struct hns_roce_cmq_req *req_a; + + req_a = (struct hns_roce_cmq_req *)desc[0].data; + hns_roce_cmq_setup_basic_desc(&desc[0], opcode, false); + desc[0].flag |= cpu_to_le16(hns_roce_cmd_flag_next); + hns_roce_cmq_setup_basic_desc(&desc[1], opcode, false); + hr_reg_write(req_a, func_res_a_vf_id, vf_id); + hns_roce_cmq_send(hr_dev, desc, 2); +} + +static void hns_roce_function_clear(struct hns_roce_dev *hr_dev) +{ + int i; + + for (i = hr_dev->func_num - 1; i >= 0; i--) { + __hns_roce_function_clear(hr_dev, i); + if (i != 0) + hns_roce_free_vf_resource(hr_dev, i); + } +} + -static int hns_roce_query_pf_resource(struct hns_roce_dev *hr_dev) +static int load_func_res_caps(struct hns_roce_dev *hr_dev, bool is_vf) - enum hns_roce_opcode_type opcode = hns_roce_opc_query_pf_res; + enum hns_roce_opcode_type opcode; - func_num = hr_dev->func_num ? 
hr_dev->func_num : 1; + if (is_vf) { + opcode = hns_roce_opc_query_vf_res; + func_num = 1; + } else { + opcode = hns_roce_opc_query_pf_res; + func_num = hr_dev->func_num; + } + - caps->sl_num = hr_reg_read(r_b, func_res_b_qid_num) / func_num; - caps->gmv_bt_num = hr_reg_read(r_b, func_res_b_gmv_bt_num) / func_num; + + if (is_vf) { + caps->sl_num = hr_reg_read(r_b, func_res_v_qid_num) / func_num; + caps->gmv_bt_num = hr_reg_read(r_b, func_res_v_gmv_bt_num) / + func_num; + } else { + caps->sl_num = hr_reg_read(r_b, func_res_b_qid_num) / func_num; + caps->gmv_bt_num = hr_reg_read(r_b, func_res_b_gmv_bt_num) / + func_num; + } +static int hns_roce_query_pf_resource(struct hns_roce_dev *hr_dev) +{ + return load_func_res_caps(hr_dev, false); +} + +static int hns_roce_query_vf_resource(struct hns_roce_dev *hr_dev) +{ + return load_func_res_caps(hr_dev, true); +} + - struct hns_roce_cmq_desc desc; + struct hns_roce_cmq_desc desc; + struct hns_roce_v2_priv *priv = hr_dev->priv; - caps->num_comp_vectors = hns_roce_v2_comp_vec_num; + caps->num_comp_vectors = + min_t(u32, caps->eqc_bt_num - 1, + (u32)priv->handle->rinfo.num_vectors - 2); + caps->pbl_ba_pg_sz = hns_roce_ba_pg_sz_supported_16k; + caps->pbl_buf_pg_sz = 0; + caps->pbl_hop_num = hns_roce_pbl_hop_num; - caps->chunk_sz = hns_roce_v2_table_chunk_size; + caps->eqe_ba_pg_sz = 0; + caps->eqe_buf_pg_sz = 0; + caps->eqe_hop_num = hns_roce_eqe_hop_num; + caps->tsq_buf_pg_sz = 0; + caps->chunk_sz = hns_roce_v2_table_chunk_size; + + calc_pg_sz(caps->num_qps, caps->qpc_sz, caps->qpc_hop_num, + caps->qpc_bt_num, &caps->qpc_buf_pg_sz, &caps->qpc_ba_pg_sz, + hem_type_qpc); + calc_pg_sz(caps->num_mtpts, caps->mtpt_entry_sz, caps->mpt_hop_num, + caps->mpt_bt_num, &caps->mpt_buf_pg_sz, &caps->mpt_ba_pg_sz, + hem_type_mtpt); + calc_pg_sz(caps->num_cqs, caps->cqc_entry_sz, caps->cqc_hop_num, + caps->cqc_bt_num, &caps->cqc_buf_pg_sz, &caps->cqc_ba_pg_sz, + hem_type_cqc); + + if (hr_dev->caps.flags & hns_roce_cap_flag_qp_flow_ctrl) + 
calc_pg_sz(caps->num_qps, caps->sccc_sz, + caps->sccc_hop_num, caps->sccc_bt_num, + &caps->sccc_buf_pg_sz, &caps->sccc_ba_pg_sz, + hem_type_sccc); + + if (hr_dev->caps.flags & hns_roce_cap_flag_srq) { + calc_pg_sz(caps->num_srqs, caps->srqc_entry_sz, + caps->srqc_hop_num, caps->srqc_bt_num, + &caps->srqc_buf_pg_sz, &caps->srqc_ba_pg_sz, + hem_type_srqc); + calc_pg_sz(caps->num_srqwqe_segs, caps->mtt_entry_sz, + caps->srqwqe_hop_num, 1, &caps->srqwqe_buf_pg_sz, + &caps->srqwqe_ba_pg_sz, hem_type_srqwqe); + calc_pg_sz(caps->num_idx_segs, caps->idx_entry_sz, + caps->idx_hop_num, 1, &caps->idx_buf_pg_sz, + &caps->idx_ba_pg_sz, hem_type_idx); + } + + caps->gid_table_len[0] /= hr_dev->func_num; + + +static int hns_roce_v2_vf_profile(struct hns_roce_dev *hr_dev) +{ + int ret; + + hr_dev->vendor_part_id = hr_dev->pci_dev->device; + hr_dev->sys_image_guid = be64_to_cpu(hr_dev->ib_dev.node_guid); + hr_dev->func_num = 1; + + ret = hns_roce_query_vf_resource(hr_dev); + if (ret) { + dev_err(hr_dev->dev, + "query the vf resource fail, ret = %d. ", ret); + return ret; + } + + set_default_caps(hr_dev); + + ret = hns_roce_v2_set_bt(hr_dev); + if (ret) { + dev_err(hr_dev->dev, + "configure the vf bt attribute fail, ret = %d. ", + ret); + return ret; + } + + return 0; +} + + if (hr_dev->is_vf) + return hns_roce_v2_vf_profile(hr_dev); + + /* alloc memory for source address table buffer space chunk */ + for (gmv_count = 0; gmv_count < hr_dev->caps.gmv_entry_num; + gmv_count++) { + ret = hns_roce_table_get(hr_dev, &hr_dev->gmv_table, gmv_count); + if (ret) + goto err_gmv_failed; + } + + if (hr_dev->is_vf) + return 0; + - /* alloc memory for gmv(gid/mac/vlan) table buffer space chunk */ - for (gmv_count = 0; gmv_count < hr_dev->caps.gmv_entry_num; - gmv_count++) { - ret = hns_roce_table_get(hr_dev, &hr_dev->gmv_table, gmv_count); - if (ret) { - dev_err(hr_dev->dev, - "failed to get gmv table, ret = %d. 
", ret); - goto err_gmv_failed; - } - } - -err_gmv_failed: - for (i = 0; i < gmv_count; i++) - hns_roce_table_put(hr_dev, &hr_dev->gmv_table, i); - +err_gmv_failed: + for (i = 0; i < gmv_count; i++) + hns_roce_table_put(hr_dev, &hr_dev->gmv_table, i); + +static void put_hem_table(struct hns_roce_dev *hr_dev) +{ + int i; + + for (i = 0; i < hr_dev->caps.gmv_entry_num; i++) + hns_roce_table_put(hr_dev, &hr_dev->gmv_table, i); + + if (hr_dev->is_vf) + return; + + for (i = 0; i < hr_dev->caps.qpc_timer_bt_num; i++) + hns_roce_table_put(hr_dev, &hr_dev->qpc_timer_table, i); + + for (i = 0; i < hr_dev->caps.cqc_timer_bt_num; i++) + hns_roce_table_put(hr_dev, &hr_dev->cqc_timer_table, i); +} + + ret = get_hem_table(hr_dev); + if (ret) + return ret; + + if (hr_dev->is_vf) + return 0; + - return ret; + goto err_tsq_init_failed; - ret = get_hem_table(hr_dev); - if (ret) - goto err_get_hem_table_failed; - -err_get_hem_table_failed: - hns_roce_free_link_table(hr_dev, &priv->tpq); +err_tsq_init_failed: + put_hem_table(hr_dev); - hns_roce_free_link_table(hr_dev, &priv->tsq); + hns_roce_free_link_table(hr_dev, &priv->tpq); - hns_roce_free_link_table(hr_dev, &priv->tpq); - hns_roce_free_link_table(hr_dev, &priv->tsq); + if (!hr_dev->is_vf) { + hns_roce_free_link_table(hr_dev, &priv->tpq); + hns_roce_free_link_table(hr_dev, &priv->tsq); + } + {pci_vdevice(huawei, hnae3_dev_id_rdma_dcb_pfc_vf), + hnae3_dev_support_roce_dcb_bits}, + const struct pci_device_id *id; + id = pci_match_id(hns_roce_hw_v2_pci_tbl, hr_dev->pci_dev); + hr_dev->is_vf = id->driver_data; - for (i = 0; i < hns_roce_v2_max_irq_num; i++) + for (i = 0; i < handle->rinfo.num_vectors; i++) + if (id->driver_data && handle->pdev->revision < pci_revision_id_hip09) + return 0; + diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h -#define hns_roce_v2_max_qp_num 0x100000 
+#define hns_roce_v2_max_qp_num 0x1000 -#define hns_roce_v2_gid_index_num 256 +#define hns_roce_v2_gid_index_num 16 + hns_roce_opc_query_vf_res = 0x850e, - * fields of hns_roce_opc_query_pf_res and hns_roce_opc_alloc_vf_res + * fields of hns_roce_opc_query_pf_res, hns_roce_opc_query_vf_res + * and hns_roce_opc_alloc_vf_res
|
Networking
|
0b567cde9d7aa0a6667cc5ac4b89a0927b7b2c3a
|
wei xu
|
drivers
|
infiniband
|
hns, hw
|
rdma/hns: remove duplicated hem page size config code
|
remove the duplicated code that sets the hem page size in the pf and vf paths.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for the clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
support roce on virtual functions of hip09
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['hns ']
|
['h', 'c']
| 3
| 76
| 110
|
--- diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h --- a/drivers/infiniband/hw/hns/hns_roce_device.h +++ b/drivers/infiniband/hw/hns/hns_roce_device.h - u32 num_cqe_segs; diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c -static void calc_pg_sz(u32 obj_num, u32 obj_size, u32 hop_num, u32 ctx_bt_num, - u32 *buf_page_size, u32 *bt_page_size, u32 hem_type); - - caps->max_sq_inline = hns_roce_v2_max_sq_inline; - caps->num_cqe_segs = hns_roce_v2_max_cqe_segs; - caps->qpc_sz = hns_roce_v2_qpc_sz; - caps->cqe_sz = hns_roce_v2_cqe_size; - caps->qpc_ba_pg_sz = 0; - caps->qpc_buf_pg_sz = 0; - caps->srqc_ba_pg_sz = 0; - caps->srqc_buf_pg_sz = 0; - caps->cqc_ba_pg_sz = 0; - caps->cqc_buf_pg_sz = 0; - caps->mpt_ba_pg_sz = 0; - caps->mpt_buf_pg_sz = 0; - caps->mtt_ba_pg_sz = 0; - caps->mtt_buf_pg_sz = 0; - caps->pbl_ba_pg_sz = hns_roce_ba_pg_sz_supported_16k; - caps->pbl_buf_pg_sz = 0; - caps->cqe_ba_pg_sz = hns_roce_ba_pg_sz_supported_256k; - caps->cqe_buf_pg_sz = 0; - caps->srqwqe_ba_pg_sz = 0; - caps->srqwqe_buf_pg_sz = 0; - caps->idx_ba_pg_sz = 0; - caps->idx_buf_pg_sz = 0; - caps->eqe_ba_pg_sz = 0; - caps->eqe_buf_pg_sz = 0; - caps->tsq_buf_pg_sz = 0; - caps->gid_table_len[0] = hns_roce_v2_gid_index_num; - caps->aeqe_size = hns_roce_aeqe_size; - caps->ceqe_size = hns_roce_ceqe_size; - caps->qpc_timer_ba_pg_sz = 0; - caps->qpc_timer_buf_pg_sz = 0; - caps->cqc_timer_ba_pg_sz = 0; - caps->cqc_timer_buf_pg_sz = 0; - caps->sccc_sz = hns_roce_v2_sccc_sz; - caps->sccc_ba_pg_sz = 0; - caps->sccc_buf_pg_sz = 0; - caps->gmv_ba_pg_sz = 0; - caps->gmv_buf_pg_sz = 0; - } - - calc_pg_sz(caps->num_qps, caps->qpc_sz, caps->qpc_hop_num, - caps->qpc_bt_num, &caps->qpc_buf_pg_sz, &caps->qpc_ba_pg_sz, - hem_type_qpc); - calc_pg_sz(caps->num_mtpts, caps->mtpt_entry_sz, caps->mpt_hop_num, - 
caps->mpt_bt_num, &caps->mpt_buf_pg_sz, &caps->mpt_ba_pg_sz, - hem_type_mtpt); - calc_pg_sz(caps->num_cqs, caps->cqc_entry_sz, caps->cqc_hop_num, - caps->cqc_bt_num, &caps->cqc_buf_pg_sz, &caps->cqc_ba_pg_sz, - hem_type_cqc); - - if (hr_dev->caps.flags & hns_roce_cap_flag_qp_flow_ctrl) - calc_pg_sz(caps->num_qps, caps->sccc_sz, - caps->sccc_hop_num, caps->sccc_bt_num, - &caps->sccc_buf_pg_sz, &caps->sccc_ba_pg_sz, - hem_type_sccc); - - if (hr_dev->caps.flags & hns_roce_cap_flag_srq) { - calc_pg_sz(caps->num_srqs, caps->srqc_entry_sz, - caps->srqc_hop_num, caps->srqc_bt_num, - &caps->srqc_buf_pg_sz, &caps->srqc_ba_pg_sz, - hem_type_srqc); - calc_pg_sz(caps->num_srqwqe_segs, caps->mtt_entry_sz, - caps->srqwqe_hop_num, 1, &caps->srqwqe_buf_pg_sz, - &caps->srqwqe_ba_pg_sz, hem_type_srqwqe); - calc_pg_sz(caps->num_idx_segs, caps->idx_entry_sz, - caps->idx_hop_num, 1, &caps->idx_buf_pg_sz, - &caps->idx_ba_pg_sz, hem_type_idx); + caps->max_sq_inline = hns_roce_v2_max_sq_inl_ext; + } else { + caps->aeqe_size = hns_roce_aeqe_size; + caps->ceqe_size = hns_roce_ceqe_size; + caps->cqe_sz = hns_roce_v2_cqe_size; + caps->qpc_sz = hns_roce_v2_qpc_sz; + caps->sccc_sz = hns_roce_v2_sccc_sz; + caps->gid_table_len[0] = hns_roce_v2_gid_index_num; + caps->max_sq_inline = hns_roce_v2_max_sq_inline; +static void set_hem_page_size(struct hns_roce_dev *hr_dev) +{ + struct hns_roce_caps *caps = &hr_dev->caps; + + /* eq */ + caps->eqe_ba_pg_sz = 0; + caps->eqe_buf_pg_sz = 0; + + /* link table */ + caps->tsq_buf_pg_sz = 0; + + /* mr */ + caps->pbl_ba_pg_sz = hns_roce_ba_pg_sz_supported_16k; + caps->pbl_buf_pg_sz = 0; + calc_pg_sz(caps->num_mtpts, caps->mtpt_entry_sz, caps->mpt_hop_num, + caps->mpt_bt_num, &caps->mpt_buf_pg_sz, &caps->mpt_ba_pg_sz, + hem_type_mtpt); + + /* qp */ + caps->qpc_timer_ba_pg_sz = 0; + caps->qpc_timer_buf_pg_sz = 0; + caps->mtt_ba_pg_sz = 0; + caps->mtt_buf_pg_sz = 0; + calc_pg_sz(caps->num_qps, caps->qpc_sz, caps->qpc_hop_num, + caps->qpc_bt_num, 
&caps->qpc_buf_pg_sz, &caps->qpc_ba_pg_sz, + hem_type_qpc); + + if (caps->flags & hns_roce_cap_flag_qp_flow_ctrl) + calc_pg_sz(caps->num_qps, caps->sccc_sz, caps->sccc_hop_num, + caps->sccc_bt_num, &caps->sccc_buf_pg_sz, + &caps->sccc_ba_pg_sz, hem_type_sccc); + + /* cq */ + calc_pg_sz(caps->num_cqs, caps->cqc_entry_sz, caps->cqc_hop_num, + caps->cqc_bt_num, &caps->cqc_buf_pg_sz, &caps->cqc_ba_pg_sz, + hem_type_cqc); + calc_pg_sz(caps->max_cqes, caps->cqe_sz, caps->cqe_hop_num, + 1, &caps->cqe_buf_pg_sz, &caps->cqe_ba_pg_sz, hem_type_cqe); + + if (caps->cqc_timer_entry_sz) + calc_pg_sz(caps->num_cqc_timer, caps->cqc_timer_entry_sz, + caps->cqc_timer_hop_num, caps->cqc_timer_bt_num, + &caps->cqc_timer_buf_pg_sz, + &caps->cqc_timer_ba_pg_sz, hem_type_cqc_timer); + + /* srq */ + if (caps->flags & hns_roce_cap_flag_srq) { + calc_pg_sz(caps->num_srqs, caps->srqc_entry_sz, + caps->srqc_hop_num, caps->srqc_bt_num, + &caps->srqc_buf_pg_sz, &caps->srqc_ba_pg_sz, + hem_type_srqc); + calc_pg_sz(caps->num_srqwqe_segs, caps->mtt_entry_sz, + caps->srqwqe_hop_num, 1, &caps->srqwqe_buf_pg_sz, + &caps->srqwqe_ba_pg_sz, hem_type_srqwqe); + calc_pg_sz(caps->num_idx_segs, caps->idx_entry_sz, + caps->idx_hop_num, 1, &caps->idx_buf_pg_sz, + &caps->idx_ba_pg_sz, hem_type_idx); + } + + /* gmv */ + caps->gmv_ba_pg_sz = 0; + caps->gmv_buf_pg_sz = 0; +} + - caps->mtt_ba_pg_sz = 0; - caps->num_cqe_segs = hns_roce_v2_max_cqe_segs; - caps->gmv_ba_pg_sz = 0; - caps->gmv_buf_pg_sz = 0; - calc_pg_sz(caps->num_qps, caps->qpc_sz, caps->qpc_hop_num, - caps->qpc_bt_num, &caps->qpc_buf_pg_sz, &caps->qpc_ba_pg_sz, - hem_type_qpc); - calc_pg_sz(caps->num_mtpts, caps->mtpt_entry_sz, caps->mpt_hop_num, - caps->mpt_bt_num, &caps->mpt_buf_pg_sz, &caps->mpt_ba_pg_sz, - hem_type_mtpt); - calc_pg_sz(caps->num_cqs, caps->cqc_entry_sz, caps->cqc_hop_num, - caps->cqc_bt_num, &caps->cqc_buf_pg_sz, &caps->cqc_ba_pg_sz, - hem_type_cqc); - calc_pg_sz(caps->num_srqs, caps->srqc_entry_sz, caps->srqc_hop_num, - 
caps->srqc_bt_num, &caps->srqc_buf_pg_sz, - &caps->srqc_ba_pg_sz, hem_type_srqc); - - caps->sccc_hop_num = ctx_hop_num; - calc_pg_sz(caps->num_qps, caps->sccc_sz, - caps->sccc_hop_num, caps->sccc_bt_num, - &caps->sccc_buf_pg_sz, &caps->sccc_ba_pg_sz, - hem_type_sccc); - calc_pg_sz(caps->num_cqc_timer, caps->cqc_timer_entry_sz, - caps->cqc_timer_hop_num, caps->cqc_timer_bt_num, - &caps->cqc_timer_buf_pg_sz, - &caps->cqc_timer_ba_pg_sz, hem_type_cqc_timer); - - calc_pg_sz(caps->num_cqe_segs, caps->mtt_entry_sz, caps->cqe_hop_num, - 1, &caps->cqe_buf_pg_sz, &caps->cqe_ba_pg_sz, hem_type_cqe); - calc_pg_sz(caps->num_srqwqe_segs, caps->mtt_entry_sz, - caps->srqwqe_hop_num, 1, &caps->srqwqe_buf_pg_sz, - &caps->srqwqe_ba_pg_sz, hem_type_srqwqe); - calc_pg_sz(caps->num_idx_segs, caps->idx_entry_sz, caps->idx_hop_num, - 1, &caps->idx_buf_pg_sz, &caps->idx_ba_pg_sz, hem_type_idx); - + set_hem_page_size(hr_dev); - caps->pbl_ba_pg_sz = hns_roce_ba_pg_sz_supported_16k; - caps->pbl_buf_pg_sz = 0; - caps->eqe_ba_pg_sz = 0; - caps->eqe_buf_pg_sz = 0; - caps->tsq_buf_pg_sz = 0; + set_hem_page_size(hr_dev); diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h +#define hns_roce_v2_max_sq_inl_ext 0x400
|
Networking
|
719d13415f5977710afeb5f4e622c5c9c18976fa
|
xi wang
|
drivers
|
infiniband
|
hns, hw
|
rdma/hns: support configuring doorbell mode of rq and cq
|
hip08 supports both normal and record doorbell mode for rq and cq, and sq record doorbell for userspace is also supported by the software for the cqe-flushing process. as the capabilities of hip08 are now exposed to the user and are configurable, support for normal doorbell mode should be added back.
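a minimal, pure-logic sketch of the rq doorbell-mode selection described above (modeled on the driver's update_rq_db() in the diff; the struct, field names and the 16-bit index mask are illustrative stand-ins for the kernel state, matching the old `rq.head & 0xffff` masking):

```c
#include <assert.h>
#include <stdint.h>

enum db_mode { DB_RECORD, DB_NORMAL };

/* illustrative stand-in for the qp's rq doorbell state */
struct rq_model {
	int record_db_enabled;   /* was HNS_ROCE_QP_CAP_RQ_RECORD_DB granted? */
	uint32_t head;           /* producer index */
	uint32_t db_record;      /* stands in for *qp->rdb.db_record */
	uint32_t doorbell_reg;   /* stands in for the mapped doorbell register */
};

/* mirror the branch in update_rq_db(): record mode writes the masked
 * producer index into memory, normal mode writes it to the register */
static enum db_mode ring_rq_db(struct rq_model *rq)
{
	uint32_t idx = rq->head & 0xffff; /* 16-bit index field, as before */

	if (rq->record_db_enabled) {
		rq->db_record = idx;
		return DB_RECORD;
	}
	rq->doorbell_reg = idx;
	return DB_NORMAL;
}
```

either path delivers the same masked index; only the transport (memory record vs. mmio write) differs, which is exactly what makes the mode a configurable capability.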
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
support configuring doorbell mode of rq and cq
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['hns ']
|
['h', 'c']
| 5
| 66
| 40
|
--- diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c --- a/drivers/infiniband/hw/hns/hns_roce_cq.c +++ b/drivers/infiniband/hw/hns/hns_roce_cq.c - bool has_db = hr_dev->caps.flags & hns_roce_cap_flag_record_db; + bool has_db = hr_dev->caps.flags & hns_roce_cap_flag_cq_record_db; diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h --- a/drivers/infiniband/hw/hns/hns_roce_device.h +++ b/drivers/infiniband/hw/hns/hns_roce_device.h - hns_roce_cap_flag_record_db = bit(3), - hns_roce_cap_flag_sq_record_db = bit(4), + hns_roce_cap_flag_cq_record_db = bit(3), + hns_roce_cap_flag_qp_record_db = bit(4), diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c - if (qp->state == ib_qps_err) { + if (unlikely(qp->state == ib_qps_err)) { +static inline void update_rq_db(struct hns_roce_dev *hr_dev, + struct hns_roce_qp *qp) +{ + /* + * hip08 hardware cannot flush the wqes in rq if the qp state + * gets into errored mode. hence, as a workaround to this + * hardware limitation, driver needs to assist in flushing. but + * the flushing operation uses mailbox to convey the qp state to + * the hardware and which can sleep due to the mutex protection + * around the mailbox calls. hence, use the deferred flush for + * now. 
+ */ + if (unlikely(qp->state == ib_qps_err)) { + if (!test_and_set_bit(hns_roce_flush_flag, &qp->flush_flag)) + init_flush_work(hr_dev, qp); + } else { + if (likely(qp->en_flags & hns_roce_qp_cap_rq_record_db)) { + *qp->rdb.db_record = + qp->rq.head & v2_db_parameter_idx_m; + } else { + struct hns_roce_v2_db rq_db = {}; + + roce_set_field(rq_db.byte_4, v2_db_byte_4_tag_m, + v2_db_byte_4_tag_s, qp->qpn); + roce_set_field(rq_db.byte_4, v2_db_byte_4_cmd_m, + v2_db_byte_4_cmd_s, hns_roce_v2_rq_db); + roce_set_field(rq_db.parameter, v2_db_parameter_idx_m, + v2_db_parameter_idx_s, qp->rq.head); + + hns_roce_write64_k((__le32 *)&rq_db, qp->rq.db_reg_l); + } + } +} + - /* - * hip08 hardware cannot flush the wqes in rq if the qp state - * gets into errored mode. hence, as a workaround to this - * hardware limitation, driver needs to assist in flushing. but - * the flushing operation uses mailbox to convey the qp state to - * the hardware and which can sleep due to the mutex protection - * around the mailbox calls. hence, use the deferred flush for - * now. 
- */ - if (hr_qp->state == ib_qps_err) { - if (!test_and_set_bit(hns_roce_flush_flag, - &hr_qp->flush_flag)) - init_flush_work(hr_dev, hr_qp); - } else { - *hr_qp->rdb.db_record = hr_qp->rq.head & 0xffff; - } + update_rq_db(hr_dev, hr_qp); - hns_roce_cap_flag_record_db | - hns_roce_cap_flag_sq_record_db; + hns_roce_cap_flag_cq_record_db | + hns_roce_cap_flag_qp_record_db; - *hr_cq->set_ci_db = ci & v2_cq_db_parameter_cons_idx_m; + if (likely(hr_cq->flags & hns_roce_cq_flag_record_db)) { + *hr_cq->set_ci_db = ci & v2_cq_db_parameter_cons_idx_m; + } else { + struct hns_roce_v2_db cq_db = {}; + + roce_set_field(cq_db.byte_4, v2_cq_db_byte_4_tag_m, + v2_cq_db_byte_4_tag_s, hr_cq->cqn); + roce_set_field(cq_db.byte_4, v2_cq_db_byte_4_cmd_m, + v2_cq_db_byte_4_cmd_s, hns_roce_v2_cq_db_ptr); + roce_set_field(cq_db.parameter, v2_cq_db_parameter_cons_idx_m, + v2_cq_db_parameter_cons_idx_s, + ci & ((hr_cq->cq_depth << 1) - 1)); + roce_set_field(cq_db.parameter, v2_cq_db_parameter_cmd_sn_m, + v2_cq_db_parameter_cmd_sn_s, 1); + + hns_roce_write64_k((__le32 *)&cq_db, hr_cq->cq_db_l); + } - roce_set_field(context->byte_84_rq_ci_pi, - v2_qpc_byte_84_rq_producer_idx_m, - v2_qpc_byte_84_rq_producer_idx_s, hr_qp->rq.head); - roce_set_field(qpc_mask->byte_84_rq_ci_pi, - v2_qpc_byte_84_rq_producer_idx_m, - v2_qpc_byte_84_rq_producer_idx_s, 0); - - roce_set_field(qpc_mask->byte_84_rq_ci_pi, - v2_qpc_byte_84_rq_consumer_idx_m, - v2_qpc_byte_84_rq_consumer_idx_s, 0); - - if (hr_qp->rq.wqe_cnt) + if (hr_qp->en_flags & hns_roce_qp_cap_rq_record_db) diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c --- a/drivers/infiniband/hw/hns/hns_roce_main.c +++ b/drivers/infiniband/hw/hns/hns_roce_main.c - if (hr_dev->caps.flags & hns_roce_cap_flag_record_db) { + if (hr_dev->caps.flags & hns_roce_cap_flag_cq_record_db || + hr_dev->caps.flags & hns_roce_cap_flag_qp_record_db) { - if (hr_dev->caps.flags & hns_roce_cap_flag_record_db) { + if 
(hr_dev->caps.flags & hns_roce_cap_flag_cq_record_db || + hr_dev->caps.flags & hns_roce_cap_flag_qp_record_db) { diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c --- a/drivers/infiniband/hw/hns/hns_roce_qp.c +++ b/drivers/infiniband/hw/hns/hns_roce_qp.c - return ((hr_dev->caps.flags & hns_roce_cap_flag_sq_record_db) && + return ((hr_dev->caps.flags & hns_roce_cap_flag_qp_record_db) && - return ((hr_dev->caps.flags & hns_roce_cap_flag_record_db) && + return ((hr_dev->caps.flags & hns_roce_cap_flag_qp_record_db) && - return ((hr_dev->caps.flags & hns_roce_cap_flag_record_db) && + return ((hr_dev->caps.flags & hns_roce_cap_flag_qp_record_db) &&
|
Networking
|
cf8cd4ccb269dbd57c3792799d0e5251547d6734
|
yixian liu
|
drivers
|
infiniband
|
hns, hw
|
rdma/hns: support query information of functions from fw
|
add a new type of command to query the mac id of functions from the firmware; it is used to select the template of the congestion algorithm. more info will be supported in the future.
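the response parsing relies on overlaying a named view on the generic six-word command-queue data area, as the union added in hns_roce_hw_v2.h shows. an illustrative userspace copy of that layout (the struct name is a stand-in; the field offsets mirror the diff):

```c
#include <assert.h>
#include <stdint.h>

/* illustrative copy of the response layout this commit adds: the
 * generic six-word cmq data area gains a func_info view so that
 * own_mac_id can be read by name instead of as data[1] */
struct cmq_data_model {
	union {
		uint32_t data[6];
		struct {
			uint32_t rsv1;
			uint32_t own_mac_id;
			uint32_t rsv2[4];
		} func_info;
	};
};
```

because the union members share storage, the named view adds no bytes to the descriptor and old code that indexes `data[]` keeps working unchanged.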
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
support to select congestion control algorithm
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['hns ']
|
['h', 'c']
| 3
| 37
| 1
|
--- diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h --- a/drivers/infiniband/hw/hns/hns_roce_device.h +++ b/drivers/infiniband/hw/hns/hns_roce_device.h + u32 cong_algo_tmpl_id; diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +static int hns_roce_query_func_info(struct hns_roce_dev *hr_dev) +{ + struct hns_roce_cmq_desc desc; + int ret; + + if (hr_dev->pci_dev->revision < pci_revision_id_hip09) + return 0; + + hns_roce_cmq_setup_basic_desc(&desc, hns_roce_opc_query_func_info, + true); + ret = hns_roce_cmq_send(hr_dev, &desc, 1); + if (ret) + return ret; + + hr_dev->cong_algo_tmpl_id = le32_to_cpu(desc.func_info.own_mac_id); + + return 0; +} + + ret = hns_roce_query_func_info(hr_dev); + if (ret) { + dev_err(hr_dev->dev, "query function info fail, ret = %d. ", + ret); + return ret; + } + diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h + hns_roce_opc_query_func_info = 0x8407, - __le32 data[6]; + union { + __le32 data[6]; + struct { + __le32 rsv1; + __le32 own_mac_id; + __le32 rsv2[4]; + } func_info; + }; +
|
Networking
|
e079d87d1d9a5c27415bf5b71245566ae434372f
|
wei xu
|
drivers
|
infiniband
|
hns, hw
|
rdma/hns: support congestion control type selection according to the fw
|
the types of congestion control algorithm include dcqcn, ldcp, hc3 and dip. the driver selects one of them according to the firmware when querying pf capabilities, and then sets the related configuration fields into the qpc.
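the selection described above can be sketched as a small mapping table, mirroring the driver's check_cong_type() from the diff (the uppercase names are illustrative userspace counterparts of the kernel enums, not kernel symbols):

```c
#include <assert.h>

enum cong_type { CONG_TYPE_DCQCN, CONG_TYPE_LDCP, CONG_TYPE_HC3, CONG_TYPE_DIP };
enum { CONG_DCQCN, CONG_WINDOW };
enum { UNSUPPORT_CONG_LEVEL, SUPPORT_CONG_LEVEL };
enum { CONG_LDCP, CONG_HC3 };
enum { DIP_INVALID, DIP_VALID };

/* the three qpc fields the commit fills in */
struct cong_algo { int alg_sel, alg_sub_sel, dip_vld; };

/* map the firmware-reported type to its qpc configuration */
static int select_cong_algo(enum cong_type type, struct cong_algo *a)
{
	switch (type) {
	case CONG_TYPE_DCQCN:
		*a = (struct cong_algo){ CONG_DCQCN, UNSUPPORT_CONG_LEVEL, DIP_INVALID };
		return 0;
	case CONG_TYPE_LDCP:
		*a = (struct cong_algo){ CONG_WINDOW, CONG_LDCP, DIP_INVALID };
		return 0;
	case CONG_TYPE_HC3:
		*a = (struct cong_algo){ CONG_WINDOW, CONG_HC3, DIP_INVALID };
		return 0;
	case CONG_TYPE_DIP:
		*a = (struct cong_algo){ CONG_DCQCN, UNSUPPORT_CONG_LEVEL, DIP_VALID };
		return 0;
	}
	return -1; /* unknown type reported by firmware */
}
```

note that dip reuses the dcqcn algorithm selector but additionally marks the dip context index as valid, which is why only that path needs the per-destination-ip bookkeeping (get_dip_ctx_idx) seen in the diff.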
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
support to select congestion control algorithm
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['hns ']
|
['h', 'c']
| 4
| 200
| 3
|
--- diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h --- a/drivers/infiniband/hw/hns/hns_roce_device.h +++ b/drivers/infiniband/hw/hns/hns_roce_device.h +enum cong_type { + cong_type_dcqcn, + cong_type_ldcp, + cong_type_hc3, + cong_type_dip, +}; + + enum cong_type cong_type; + struct list_head dip_list; /* list of all dest ips on this dev */ + spinlock_t dip_list_lock; /* protect dip_list */ diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c + caps->cong_type = roce_get_field(resp_d->wq_hop_num_max_srqs, + v2_query_pf_caps_d_cong_type_m, + v2_query_pf_caps_d_cong_type_s); + +static void free_dip_list(struct hns_roce_dev *hr_dev) +{ + struct hns_roce_dip *hr_dip; + struct hns_roce_dip *tmp; + unsigned long flags; + + spin_lock_irqsave(&hr_dev->dip_list_lock, flags); + + list_for_each_entry_safe(hr_dip, tmp, &hr_dev->dip_list, node) { + list_del(&hr_dip->node); + kfree(hr_dip); + } + + spin_unlock_irqrestore(&hr_dev->dip_list_lock, flags); +} + + + if (hr_dev->pci_dev->revision == pci_revision_id_hip09) + free_dip_list(hr_dev); +static int get_dip_ctx_idx(struct ib_qp *ibqp, const struct ib_qp_attr *attr, + u32 *dip_idx) +{ + const struct ib_global_route *grh = rdma_ah_read_grh(&attr->ah_attr); + struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device); + struct hns_roce_dip *hr_dip; + unsigned long flags; + int ret = 0; + + spin_lock_irqsave(&hr_dev->dip_list_lock, flags); + + list_for_each_entry(hr_dip, &hr_dev->dip_list, node) { + if (!memcmp(grh->dgid.raw, hr_dip->dgid, 16)) + goto out; + } + + /* if no dgid is found, a new dip and a mapping between dgid and + * dip_idx will be created. 
+ */ + hr_dip = kzalloc(sizeof(*hr_dip), gfp_kernel); + if (!hr_dip) { + ret = -enomem; + goto out; + } + + memcpy(hr_dip->dgid, grh->dgid.raw, sizeof(grh->dgid.raw)); + hr_dip->dip_idx = *dip_idx = ibqp->qp_num; + list_add_tail(&hr_dip->node, &hr_dev->dip_list); + +out: + spin_unlock_irqrestore(&hr_dev->dip_list_lock, flags); + return ret; +} + +enum { + cong_dcqcn, + cong_window, +}; + +enum { + unsupport_cong_level, + support_cong_level, +}; + +enum { + cong_ldcp, + cong_hc3, +}; + +enum { + dip_invalid, + dip_valid, +}; + +static int check_cong_type(struct ib_qp *ibqp, + struct hns_roce_congestion_algorithm *cong_alg) +{ + struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device); + + /* different congestion types match different configurations */ + switch (hr_dev->caps.cong_type) { + case cong_type_dcqcn: + cong_alg->alg_sel = cong_dcqcn; + cong_alg->alg_sub_sel = unsupport_cong_level; + cong_alg->dip_vld = dip_invalid; + break; + case cong_type_ldcp: + cong_alg->alg_sel = cong_window; + cong_alg->alg_sub_sel = cong_ldcp; + cong_alg->dip_vld = dip_invalid; + break; + case cong_type_hc3: + cong_alg->alg_sel = cong_window; + cong_alg->alg_sub_sel = cong_hc3; + cong_alg->dip_vld = dip_invalid; + break; + case cong_type_dip: + cong_alg->alg_sel = cong_dcqcn; + cong_alg->alg_sub_sel = unsupport_cong_level; + cong_alg->dip_vld = dip_valid; + break; + default: + ibdev_err(&hr_dev->ib_dev, + "error type(%u) for congestion selection. 
", + hr_dev->caps.cong_type); + return -einval; + } + + return 0; +} + +static int fill_cong_field(struct ib_qp *ibqp, const struct ib_qp_attr *attr, + struct hns_roce_v2_qp_context *context, + struct hns_roce_v2_qp_context *qpc_mask) +{ + const struct ib_global_route *grh = rdma_ah_read_grh(&attr->ah_attr); + struct hns_roce_congestion_algorithm cong_field; + struct ib_device *ibdev = ibqp->device; + struct hns_roce_dev *hr_dev = to_hr_dev(ibdev); + u32 dip_idx = 0; + int ret; + + if (hr_dev->pci_dev->revision == pci_revision_id_hip08 || + grh->sgid_attr->gid_type == ib_gid_type_roce) + return 0; + + ret = check_cong_type(ibqp, &cong_field); + if (ret) + return ret; + + hr_reg_write(context, qpc_cong_algo_tmpl_id, hr_dev->cong_algo_tmpl_id + + hr_dev->caps.cong_type * hns_roce_cong_size); + hr_reg_write(qpc_mask, qpc_cong_algo_tmpl_id, 0); + hr_reg_write(&context->ext, qpcex_cong_alg_sel, cong_field.alg_sel); + hr_reg_write(&qpc_mask->ext, qpcex_cong_alg_sel, 0); + hr_reg_write(&context->ext, qpcex_cong_alg_sub_sel, + cong_field.alg_sub_sel); + hr_reg_write(&qpc_mask->ext, qpcex_cong_alg_sub_sel, 0); + hr_reg_write(&context->ext, qpcex_dip_ctx_idx_vld, cong_field.dip_vld); + hr_reg_write(&qpc_mask->ext, qpcex_dip_ctx_idx_vld, 0); + + /* if dip is disabled, there is no need to set dip idx */ + if (cong_field.dip_vld == 0) + return 0; + + ret = get_dip_ctx_idx(ibqp, attr, &dip_idx); + if (ret) { + ibdev_err(ibdev, "failed to fill cong field, ret = %d. 
", ret); + return ret; + } + + hr_reg_write(&context->ext, qpcex_dip_ctx_idx, dip_idx); + hr_reg_write(&qpc_mask->ext, qpcex_dip_ctx_idx, 0); + + return 0; +} + + ret = fill_cong_field(ibqp, attr, context, qpc_mask); + if (ret) + return ret; + diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h +#define hns_roce_cong_size 64 + +#define qpc_field_loc(h, l) field_loc(struct hns_roce_v2_qp_context, h, l) + +#define qpc_cong_algo_tmpl_id qpc_field_loc(455, 448) + -#define v2_qpc_byte_60_tempid_s 0 -#define v2_qpc_byte_60_tempid_m genmask(7, 0) - +#define qpcex_cong_alg_sel qpcex_field_loc(0, 0) +#define qpcex_cong_alg_sub_sel qpcex_field_loc(1, 1) +#define qpcex_dip_ctx_idx_vld qpcex_field_loc(2, 2) +#define qpcex_dip_ctx_idx qpcex_field_loc(22, 3) +#define v2_query_pf_caps_d_cong_type_s 26 +#define v2_query_pf_caps_d_cong_type_m genmask(29, 26) + +struct hns_roce_congestion_algorithm { + u8 alg_sel; + u8 alg_sub_sel; + u8 dip_vld; +}; +struct hns_roce_dip { + u8 dgid[gid_len_v2]; + u8 dip_idx; + struct list_head node; /* all dips are on a list */ +}; + diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c --- a/drivers/infiniband/hw/hns/hns_roce_main.c +++ b/drivers/infiniband/hw/hns/hns_roce_main.c + init_list_head(&hr_dev->dip_list); + spin_lock_init(&hr_dev->dip_list_lock);
|
Networking
|
f91696f2f05326d9837b4088118c938e805be942
|
yangyang li
|
drivers
|
infiniband
|
hns, hw
|
rdma/hns: support more return types of command queue
|
add error code definitions according to the return codes from the firmware, to help find out in more detail why a command fails to be sent.
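the expanded status set (values mirror the enum added to hns_roce_hw_v2.h in the diff) lends itself to a small status-to-string helper for diagnostics; the helper below and its strings are illustrative, not part of the driver:

```c
#include <assert.h>
#include <string.h>

/* mirrors the expanded command-queue return codes from the diff */
enum cmd_status {
	CMD_EXEC_SUCCESS, CMD_NO_AUTH, CMD_NOT_EXIST, CMD_CRQ_FULL,
	CMD_NEXT_ERR, CMD_NOT_EXEC, CMD_PARA_ERR, CMD_RESULT_ERR,
	CMD_TIMEOUT, CMD_HILINK_ERR, CMD_INFO_ILLEGAL, CMD_INVALID,
	CMD_ROH_CHECK_FAIL, CMD_OTHER_ERR = 0xff
};

/* illustrative helper: translate a firmware return code for logging */
static const char *cmd_status_str(int status)
{
	switch (status) {
	case CMD_EXEC_SUCCESS: return "success";
	case CMD_NO_AUTH:      return "no authority";
	case CMD_NOT_EXIST:    return "command does not exist";
	case CMD_CRQ_FULL:     return "crq full";
	case CMD_PARA_ERR:     return "parameter error";
	case CMD_TIMEOUT:      return "timeout";
	case CMD_OTHER_ERR:    return "other error";
	default:               return "unknown";
	}
}
```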
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
support more return types of command queue
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['hns ']
|
['h']
| 1
| 14
| 4
|
--- diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h --- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h +++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h - cmd_exec_success = 0, - cmd_no_auth = 1, - cmd_not_exec = 2, - cmd_queue_full = 3, + cmd_exec_success, + cmd_no_auth, + cmd_not_exist, + cmd_crq_full, + cmd_next_err, + cmd_not_exec, + cmd_para_err, + cmd_result_err, + cmd_timeout, + cmd_hilink_err, + cmd_info_illegal, + cmd_invalid, + cmd_roh_check_fail, + cmd_other_err = 0xff
|
Networking
|
0835cf58393c3c161647ff8b5a3b3298955404a2
|
lang cheng
|
drivers
|
infiniband
|
hns, hw
|
rdma/hns: support to query firmware version
|
implement the op named get_dev_fw_str to support ib_get_device_fw_str().
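the version decoding is taken directly from hns_roce_get_fw_ver() in the diff: the 64-bit firmware version packs the major number in the upper 32 bits, the minor in the upper half of the lower word, and the sub-minor in the lowest 16 bits. a self-contained sketch (the function name is illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* decode a packed 64-bit firmware version into "major.minor.sub_minor",
 * as the driver's get_dev_fw_str op does */
static void decode_fw_ver(uint64_t fw_ver, char *str, size_t len)
{
	unsigned int major = (unsigned int)(fw_ver >> 32);          /* upper 32 bits */
	unsigned int minor = (unsigned int)((fw_ver >> 16) & 0xffff); /* high 16 of low word */
	unsigned int sub_minor = (unsigned int)(fw_ver & 0xffff);   /* low 16 bits */

	snprintf(str, len, "%u.%u.%04u", major, minor, sub_minor);
}
```

the "%04u" zero-pads the sub-minor, so e.g. version (1, 2, 34) renders as "1.2.0034", matching the format string in the diff.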
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
support to query firmware version
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['hns ']
|
['c']
| 1
| 14
| 0
|
--- diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c --- a/drivers/infiniband/hw/hns/hns_roce_main.c +++ b/drivers/infiniband/hw/hns/hns_roce_main.c +static void hns_roce_get_fw_ver(struct ib_device *device, char *str) +{ + u64 fw_ver = to_hr_dev(device)->caps.fw_ver; + unsigned int major, minor, sub_minor; + + major = upper_32_bits(fw_ver); + minor = high_16_bits(lower_32_bits(fw_ver)); + sub_minor = low_16_bits(fw_ver); + + snprintf(str, ib_fw_version_name_max, "%u.%u.%04u", major, minor, + sub_minor); +} + + .get_dev_fw_str = hns_roce_get_fw_ver,
|
Networking
|
847d19a451465304f54d69b5be97baecc86c3617
|
lang cheng leon romanovsky leonro nvidia com
|
drivers
|
infiniband
|
hns, hw
|
rdma/iwcm: allow afonly binding for ipv6 addresses
|
binding an ipv6 address/port to the af_inet6 domain only is provided via rdma_set_afonly(), but was not signalled to the provider. applications like nfs/rdma bind the same port to both ipv4 and ipv6 addresses simultaneously and thus rely on it working correctly.
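a pure-logic model of the plumbing this change adds: the iw_cm_id gains an afonly bit, the cma layer copies it from the rdma id when listening, and the provider (siw in the diff) consults it to decide whether to set IPV6_V6ONLY before bind. all names below are illustrative stand-ins for the kernel structs, with no real sockets involved:

```c
#include <assert.h>

/* illustrative stand-ins for rdma_id_private and iw_cm_id */
struct rdma_id_model { int afonly; };
struct iw_id_model  { int afonly; };

/* models the one-line cma.c fix: propagate the flag to the provider */
static void cma_iw_listen_model(struct iw_id_model *iw,
				const struct rdma_id_model *rdma)
{
	iw->afonly = rdma->afonly;
}

/* provider side: set v6only only when requested and the family is v6 */
static int want_v6only(const struct iw_id_model *iw, int is_ipv6)
{
	return is_ipv6 && iw->afonly;
}
```

with the flag propagated, an af_inet6 listen no longer captures the ipv4 side of the port, so a separate ipv4 bind on the same port can succeed, which is what nfs/rdma depends on.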
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
allow afonly binding for ipv6 addresses
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['iwcm']
|
['h', 'c']
| 3
| 19
| 2
|
--- diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c --- a/drivers/infiniband/core/cma.c +++ b/drivers/infiniband/core/cma.c + id->afonly = id_priv->afonly; diff --git a/drivers/infiniband/sw/siw/siw_cm.c b/drivers/infiniband/sw/siw/siw_cm.c --- a/drivers/infiniband/sw/siw/siw_cm.c +++ b/drivers/infiniband/sw/siw/siw_cm.c - struct sockaddr *raddr) + struct sockaddr *raddr, bool afonly) + if (afonly) { + rv = ip6_sock_set_v6only(s->sk); + if (rv) + return rv; + } + - rv = kernel_bindconnect(s, laddr, raddr); + rv = kernel_bindconnect(s, laddr, raddr, id->afonly); + if (id->afonly) { + rv = ip6_sock_set_v6only(s->sk); + if (rv) { + siw_dbg(id->device, + "ip6_sock_set_v6only erro: %d ", rv); + goto error; + } + } + diff --git a/include/rdma/iw_cm.h b/include/rdma/iw_cm.h --- a/include/rdma/iw_cm.h +++ b/include/rdma/iw_cm.h + bool afonly:1;
|
Networking
|
e35ecb466eb63c2311783208547633f90742d06d
|
bernard metzler chuck lever chuck lever oracle com benjamin coddington bcodding redhat com
|
include
|
rdma
|
core, siw, sw
|
ath11k: refactor ath11k_msi_config
|
move ath11k_msi_config to an array of structures to add support for multiple pci devices. no functional changes.
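the refactor turns one static config into a per-device table that each PCI device selects by index, with lookups going through a pointer instead of the global. a self-contained sketch of the pattern (the vector numbers come from the diff; the `qca6390_` label and `find_msi_user` helper are illustrative):

```c
#include <assert.h>
#include <string.h>

struct msi_user { const char *name; int num_vectors; int base_vector; };
struct msi_config {
	int total_vectors;
	int total_users;
	const struct msi_user *users;
};

/* one entry per supported device; values match the diff's first entry */
static const struct msi_user qca6390_users[] = {
	{ "mhi", 3, 0 }, { "ce", 10, 3 }, { "wake", 1, 13 }, { "dp", 18, 14 },
};

static const struct msi_config msi_configs[] = {
	{ 32, 4, qca6390_users },
};

/* look up one user's vector allocation by name, as the driver's
 * get_user_msi_assignment path does after the refactor */
static const struct msi_user *find_msi_user(const struct msi_config *cfg,
					    const char *name)
{
	for (int i = 0; i < cfg->total_users; i++)
		if (strcmp(cfg->users[i].name, name) == 0)
			return &cfg->users[i];
	return NULL;
}
```

probing then amounts to `msi_config = &msi_configs[device_index]`, so adding a new chip (such as qcn9074 later in this series) only appends a table entry.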
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for qcn9074
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['ath11k ']
|
['h', 'c']
| 2
| 23
| 17
|
--- diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c --- a/drivers/net/wireless/ath/ath11k/pci.c +++ b/drivers/net/wireless/ath/ath11k/pci.c -static const struct ath11k_msi_config msi_config = { - .total_vectors = 32, - .total_users = 4, - .users = (struct ath11k_msi_user[]) { - { .name = "mhi", .num_vectors = 3, .base_vector = 0 }, - { .name = "ce", .num_vectors = 10, .base_vector = 3 }, - { .name = "wake", .num_vectors = 1, .base_vector = 13 }, - { .name = "dp", .num_vectors = 18, .base_vector = 14 }, +static const struct ath11k_msi_config ath11k_msi_config[] = { + { + .total_vectors = 32, + .total_users = 4, + .users = (struct ath11k_msi_user[]) { + { .name = "mhi", .num_vectors = 3, .base_vector = 0 }, + { .name = "ce", .num_vectors = 10, .base_vector = 3 }, + { .name = "wake", .num_vectors = 1, .base_vector = 13 }, + { .name = "dp", .num_vectors = 18, .base_vector = 14 }, + }, + const struct ath11k_msi_config *msi_config = ab_pci->msi_config; - for (idx = 0; idx < msi_config.total_users; idx++) { - if (strcmp(user_name, msi_config.users[idx].name) == 0) { - *num_vectors = msi_config.users[idx].num_vectors; - *user_base_data = msi_config.users[idx].base_vector + for (idx = 0; idx < msi_config->total_users; idx++) { + if (strcmp(user_name, msi_config->users[idx].name) == 0) { + *num_vectors = msi_config->users[idx].num_vectors; + *user_base_data = msi_config->users[idx].base_vector + ab_pci->msi_ep_base_data; - *base_vector = msi_config.users[idx].base_vector; + *base_vector = msi_config->users[idx].base_vector; + const struct ath11k_msi_config *msi_config = ab_pci->msi_config; - msi_config.total_vectors, - msi_config.total_vectors, + msi_config->total_vectors, + msi_config->total_vectors, - if (num_vectors != msi_config.total_vectors) { + if (num_vectors != msi_config->total_vectors) { - msi_config.total_vectors, num_vectors); + msi_config->total_vectors, num_vectors); + ab_pci->msi_config = &ath11k_msi_config[0]; diff 
--git a/drivers/net/wireless/ath/ath11k/pci.h b/drivers/net/wireless/ath/ath11k/pci.h --- a/drivers/net/wireless/ath/ath11k/pci.h +++ b/drivers/net/wireless/ath/ath11k/pci.h + const struct ath11k_msi_config *msi_config;
|
Networking
|
7a3aed0c3c36cc08a1b123d752f141797f6ba79a
|
anilkumar kolli
|
drivers
|
net
|
ath, ath11k, wireless
|
ath11k: move qmi service_ins_id to hw_params
|
the qmi service_ins_id is unique to each of qca6390 and qcn9074; moving it into hw_params is needed for adding qcn9074 support. no functional changes.
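a sketch of the idea: the per-chip qmi service instance id moves out of the bus probe code (ahb.c, pci.c) into the shared hw_params table, so both probe paths read the same field. the qcn9074 value 0x07 appears in a later diff in this series; the 0x01/0x02 values here are assumptions for illustration.

```python
# assumed ids for illustration; the real constants live in qmi.h
QMI_WLFW_SERVICE_INS_ID_V01_QCA6390 = 0x01  # assumption
QMI_WLFW_SERVICE_INS_ID_V01_IPQ8074 = 0x02  # assumption
QMI_WLFW_SERVICE_INS_ID_V01_QCN9074 = 0x07  # from the later qcn9074 patch

hw_params = {
    "ipq8074": {"qmi_service_ins_id": QMI_WLFW_SERVICE_INS_ID_V01_IPQ8074},
    "qca6390": {"qmi_service_ins_id": QMI_WLFW_SERVICE_INS_ID_V01_QCA6390},
    "qcn9074": {"qmi_service_ins_id": QMI_WLFW_SERVICE_INS_ID_V01_QCN9074},
}

def probe(chip):
    # before: each bus file hardcoded its chip's id
    # after:  ab->qmi.service_ins_id = ab->hw_params.qmi_service_ins_id
    return hw_params[chip]["qmi_service_ins_id"]
```

bus code no longer needs a chip switch; adding a chip means adding one hw_params entry.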
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for qcn9074
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['ath11k ']
|
['h', 'c']
| 4
| 6
| 2
|
--- diff --git a/drivers/net/wireless/ath/ath11k/ahb.c b/drivers/net/wireless/ath/ath11k/ahb.c --- a/drivers/net/wireless/ath/ath11k/ahb.c +++ b/drivers/net/wireless/ath/ath11k/ahb.c - ab->qmi.service_ins_id = ath11k_qmi_wlfw_service_ins_id_v01_ipq8074; + ab->qmi.service_ins_id = ab->hw_params.qmi_service_ins_id; diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c --- a/drivers/net/wireless/ath/ath11k/core.c +++ b/drivers/net/wireless/ath/ath11k/core.c + .qmi_service_ins_id = ath11k_qmi_wlfw_service_ins_id_v01_ipq8074, + .qmi_service_ins_id = ath11k_qmi_wlfw_service_ins_id_v01_ipq8074, + .qmi_service_ins_id = ath11k_qmi_wlfw_service_ins_id_v01_qca6390, diff --git a/drivers/net/wireless/ath/ath11k/hw.h b/drivers/net/wireless/ath/ath11k/hw.h --- a/drivers/net/wireless/ath/ath11k/hw.h +++ b/drivers/net/wireless/ath/ath11k/hw.h + u32 qmi_service_ins_id; diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c --- a/drivers/net/wireless/ath/ath11k/pci.c +++ b/drivers/net/wireless/ath/ath11k/pci.c - ab->qmi.service_ins_id = ath11k_qmi_wlfw_service_ins_id_v01_qca6390; + ab->qmi.service_ins_id = ab->hw_params.qmi_service_ins_id;
|
Networking
|
16001e4b2e681b8fb5e7bc50db5522081d46347a
|
anilkumar kolli
|
drivers
|
net
|
ath, ath11k, wireless
|
ath11k: qmi: increase the number of fw segments
|
qcn9074 firmware uses 20mb of host ddr memory; the fw requests the memory in segments of size 1mb/512kb/256kb. increase the number of fw memory segments to 52.
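a quick arithmetic sketch of why the old limit of 32 segments can be too small: under a hypothetical (illustrative, not taken from the firmware) mix of 1mb/512kb/256kb requests, a 20mb budget already needs a couple of dozen segments, and less favourable mixes need more, hence the new ceiling of 52.

```python
MAX_SEGMENTS = 52  # new ath11k_qmi_wlanfw_max_num_mem_seg_v01

MB, KB = 1024 * 1024, 1024
total = 20 * MB  # qcn9074 host ddr budget

# hypothetical request mix: mostly 1 MB chunks, remainder in smaller units
segments = [1 * MB] * 18 + [512 * KB] * 3 + [256 * KB] * 2

assert sum(segments) == total
assert len(segments) <= MAX_SEGMENTS
print(len(segments), "segments,", sum(segments) // MB, "MB")
```

the larger segment count also forces the bigger qmi message lengths seen in the diff, since each segment is described in the request/respond messages.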
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for qcn9074
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['ath11k ']
|
['h']
| 1
| 3
| 3
|
--- diff --git a/drivers/net/wireless/ath/ath11k/qmi.h b/drivers/net/wireless/ath/ath11k/qmi.h --- a/drivers/net/wireless/ath/ath11k/qmi.h +++ b/drivers/net/wireless/ath/ath11k/qmi.h -#define ath11k_qmi_wlanfw_max_num_mem_seg_v01 32 +#define ath11k_qmi_wlanfw_max_num_mem_seg_v01 52 -#define qmi_wlanfw_request_mem_ind_msg_v01_max_len 1124 -#define qmi_wlanfw_respond_mem_req_msg_v01_max_len 548 +#define qmi_wlanfw_request_mem_ind_msg_v01_max_len 1824 +#define qmi_wlanfw_respond_mem_req_msg_v01_max_len 888
|
Networking
|
fa5f473d764398a09f7deea3a042a1130ee50e90
|
anilkumar kolli
|
drivers
|
net
|
ath, ath11k, wireless
|
ath11k: update memory segment count for qcn9074
|
qcn9074 fw requests three types of memory segments during boot: qmi mem seg type 1 of size 15728640, qmi mem seg type 4 of size 3735552, and qmi mem seg type 3 of size 1048576. segment type 3 is for m3 coredump memory.
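summing the three segment sizes quoted in the commit message shows they account for roughly the 20 mb host ddr budget mentioned in the earlier qmi patch:

```python
# segment sizes quoted in the commit message (bytes)
seg_type_1 = 15728640  # qmi mem seg type 1
seg_type_4 = 3735552   # qmi mem seg type 4
seg_type_3 = 1048576   # qmi mem seg type 3 (m3 coredump memory)

total = seg_type_1 + seg_type_4 + seg_type_3
print(total, "bytes =", round(total / (1024 * 1024), 2), "MiB")
```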
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for qcn9074
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['ath11k ']
|
['h', 'c']
| 2
| 3
| 1
|
--- diff --git a/drivers/net/wireless/ath/ath11k/qmi.c b/drivers/net/wireless/ath/ath11k/qmi.c --- a/drivers/net/wireless/ath/ath11k/qmi.c +++ b/drivers/net/wireless/ath/ath11k/qmi.c - if (ab->qmi.mem_seg_count <= 2) { + if (ab->qmi.mem_seg_count <= ath11k_qmi_fw_mem_req_segment_cnt) { diff --git a/drivers/net/wireless/ath/ath11k/qmi.h b/drivers/net/wireless/ath/ath11k/qmi.h --- a/drivers/net/wireless/ath/ath11k/qmi.h +++ b/drivers/net/wireless/ath/ath11k/qmi.h +#define ath11k_qmi_fw_mem_req_segment_cnt 3 +#define m3_dump_region_type 0x3
|
Networking
|
5f67d306155e6a757f0b6b2b061e3ea13f44c536
|
anilkumar kolli
|
drivers
|
net
|
ath, ath11k, wireless
|
ath11k: add qcn9074 mhi controller config
|
add the mhi config for qcn9074; also populate ath11k_hw_params for qcn9074.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for qcn9074
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['ath11k ']
|
['h', 'c']
| 4
| 122
| 8
|
--- diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c --- a/drivers/net/wireless/ath/ath11k/core.c +++ b/drivers/net/wireless/ath/ath11k/core.c + { + .name = "qcn9074 hw1.0", + .hw_rev = ath11k_hw_qcn9074_hw10, + .fw = { + .dir = "qcn9074/hw1.0", + .board_size = 256 * 1024, + .cal_size = 256 * 1024, + }, + .max_radios = 1, + .single_pdev_only = false, + .qmi_service_ins_id = ath11k_qmi_wlfw_service_ins_id_v01_qcn9074, + }, diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h --- a/drivers/net/wireless/ath/ath11k/core.h +++ b/drivers/net/wireless/ath/ath11k/core.h + ath11k_hw_qcn9074_hw10, diff --git a/drivers/net/wireless/ath/ath11k/mhi.c b/drivers/net/wireless/ath/ath11k/mhi.c --- a/drivers/net/wireless/ath/ath11k/mhi.c +++ b/drivers/net/wireless/ath/ath11k/mhi.c +#include "pci.h" -static struct mhi_channel_config ath11k_mhi_channels[] = { +static struct mhi_channel_config ath11k_mhi_channels_qca6390[] = { -static struct mhi_event_config ath11k_mhi_events[] = { +static struct mhi_event_config ath11k_mhi_events_qca6390[] = { -static struct mhi_controller_config ath11k_mhi_config = { +static struct mhi_controller_config ath11k_mhi_config_qca6390 = { - .num_channels = array_size(ath11k_mhi_channels), - .ch_cfg = ath11k_mhi_channels, - .num_events = array_size(ath11k_mhi_events), - .event_cfg = ath11k_mhi_events, + .num_channels = array_size(ath11k_mhi_channels_qca6390), + .ch_cfg = ath11k_mhi_channels_qca6390, + .num_events = array_size(ath11k_mhi_events_qca6390), + .event_cfg = ath11k_mhi_events_qca6390, +}; + +static struct mhi_channel_config ath11k_mhi_channels_qcn9074[] = { + { + .num = 0, + .name = "loopback", + .num_elements = 32, + .event_ring = 1, + .dir = dma_to_device, + .ee_mask = 0x14, + .pollcfg = 0, + .doorbell = mhi_db_brst_disable, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + }, + { + .num = 1, + .name = 
"loopback", + .num_elements = 32, + .event_ring = 1, + .dir = dma_from_device, + .ee_mask = 0x14, + .pollcfg = 0, + .doorbell = mhi_db_brst_disable, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + }, + { + .num = 20, + .name = "ipcr", + .num_elements = 32, + .event_ring = 1, + .dir = dma_to_device, + .ee_mask = 0x14, + .pollcfg = 0, + .doorbell = mhi_db_brst_disable, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = false, + }, + { + .num = 21, + .name = "ipcr", + .num_elements = 32, + .event_ring = 1, + .dir = dma_from_device, + .ee_mask = 0x14, + .pollcfg = 0, + .doorbell = mhi_db_brst_disable, + .lpm_notify = false, + .offload_channel = false, + .doorbell_mode_switch = false, + .auto_queue = true, + }, +}; + +static struct mhi_event_config ath11k_mhi_events_qcn9074[] = { + { + .num_elements = 32, + .irq_moderation_ms = 0, + .irq = 1, + .data_type = mhi_er_ctrl, + .mode = mhi_db_brst_disable, + .hardware_event = false, + .client_managed = false, + .offload_channel = false, + }, + { + .num_elements = 256, + .irq_moderation_ms = 1, + .irq = 2, + .mode = mhi_db_brst_disable, + .priority = 1, + .hardware_event = false, + .client_managed = false, + .offload_channel = false, + }, +}; + +static struct mhi_controller_config ath11k_mhi_config_qcn9074 = { + .max_channels = 30, + .timeout_ms = 10000, + .use_bounce_buf = false, + .buf_len = 0, + .num_channels = array_size(ath11k_mhi_channels_qcn9074), + .ch_cfg = ath11k_mhi_channels_qcn9074, + .num_events = array_size(ath11k_mhi_events_qcn9074), + .event_cfg = ath11k_mhi_events_qcn9074, + struct mhi_controller_config *ath11k_mhi_config; - ret = mhi_register_controller(mhi_ctrl, &ath11k_mhi_config); + if (ab->hw_rev == ath11k_hw_qca6390_hw20) + ath11k_mhi_config = &ath11k_mhi_config_qca6390; + else if (ab->hw_rev == ath11k_hw_qcn9074_hw10) + ath11k_mhi_config = &ath11k_mhi_config_qcn9074; + + ret = 
mhi_register_controller(mhi_ctrl, ath11k_mhi_config); diff --git a/drivers/net/wireless/ath/ath11k/qmi.h b/drivers/net/wireless/ath/ath11k/qmi.h --- a/drivers/net/wireless/ath/ath11k/qmi.h +++ b/drivers/net/wireless/ath/ath11k/qmi.h +#define ath11k_qmi_wlfw_service_ins_id_v01_qcn9074 0x07
|
Networking
|
a233811ef60081192a2b13ce23253671114308d8
|
anilkumar kolli
|
drivers
|
net
|
ath, ath11k, wireless
|
ath11k: add static window support for register access
|
three window slots can be configured. the first window slot is dedicated to dynamic selection and the remaining two slots are dedicated to static selection. to optimise window selection, frequently accessed registers (umac, ce) are configured in the static window slots so that window switching is minimised; other registers are configured in the dynamic window slot. get the window start address from the respective offset and access the register for read/write.
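a python sketch of the window-start selection described above, modelled on ath11k_pci_get_window_start from the diff. the WINDOW_* constants are assumptions taken from the upstream ath11k pci header (they are not shown in this diff): offsets in the umac range map to the 3rd window, the ce range to the 2nd, and everything else falls back to the dynamic 1st window.

```python
WINDOW_START = 0x80000                 # assumed ATH11K_PCI_WINDOW_START
WINDOW_RANGE_MASK = (1 << 19) - 1      # assumed GENMASK(18, 0)
HAL_SEQ_WCSS_UMAC_OFFSET = 0x00a00000  # dp (umac) register base, from the diff
HAL_CE_WFSS_CE_REG_BASE = 0x01b80000   # ce register base, from the diff

def window_start(offset):
    """Pick the window base for a register offset: umac -> 3rd static
    window, ce -> 2nd static window, otherwise the dynamic 1st window."""
    if (offset ^ HAL_SEQ_WCSS_UMAC_OFFSET) < WINDOW_RANGE_MASK:
        return 3 * WINDOW_START
    if (offset ^ HAL_CE_WFSS_CE_REG_BASE) < WINDOW_RANGE_MASK:
        return 2 * WINDOW_START
    return WINDOW_START

print(hex(window_start(HAL_SEQ_WCSS_UMAC_OFFSET + 0x10)))
```

the xor trick works because an offset inside a base's 512 kb range differs from the base only in the low 19 bits; static windows then need no lock or window-register write on each access.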
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for qcn9074
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['ath11k ']
|
['h', 'c']
| 3
| 68
| 9
|
--- diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h --- a/drivers/net/wireless/ath/ath11k/core.h +++ b/drivers/net/wireless/ath/ath11k/core.h + bool static_window_map; diff --git a/drivers/net/wireless/ath/ath11k/hal.h b/drivers/net/wireless/ath/ath11k/hal.h --- a/drivers/net/wireless/ath/ath11k/hal.h +++ b/drivers/net/wireless/ath/ath11k/hal.h +#define hal_seq_wcss_umac_offset 0x00a00000 +#define hal_ce_wfss_ce_reg_base 0x01b80000 +#define hal_wlaon_reg_base 0x01f80000 + diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c --- a/drivers/net/wireless/ath/ath11k/pci.c +++ b/drivers/net/wireless/ath/ath11k/pci.c +static inline void ath11k_pci_select_static_window(struct ath11k_pci *ab_pci) +{ + u32 umac_window = field_get(window_value_mask, hal_seq_wcss_umac_offset); + u32 ce_window = field_get(window_value_mask, hal_ce_wfss_ce_reg_base); + u32 window; + + window = (umac_window << 12) | (ce_window << 6); + + iowrite32(window_enable_bit | window, ab_pci->ab->mem + window_reg_address); +} + +static inline u32 ath11k_pci_get_window_start(struct ath11k_base *ab, + u32 offset) +{ + u32 window_start; + + /* if offset lies within dp register range, use 3rd window */ + if ((offset ^ hal_seq_wcss_umac_offset) < window_range_mask) + window_start = 3 * window_start; + /* if offset lies within ce register range, use 2nd window */ + else if ((offset ^ hal_ce_wfss_ce_reg_base) < window_range_mask) + window_start = 2 * window_start; + else + window_start = window_start; + + return window_start; +} + + u32 window_start; - spin_lock_bh(&ab_pci->window_lock); - ath11k_pci_select_window(ab_pci, offset); - iowrite32(value, ab->mem + window_start + (offset & window_range_mask)); - spin_unlock_bh(&ab_pci->window_lock); + if (ab->bus_params.static_window_map) + window_start = ath11k_pci_get_window_start(ab, offset); + else + window_start = window_start; + + if (window_start == window_start) { + 
spin_lock_bh(&ab_pci->window_lock); + ath11k_pci_select_window(ab_pci, offset); + iowrite32(value, ab->mem + window_start + + (offset & window_range_mask)); + spin_unlock_bh(&ab_pci->window_lock); + } else { + iowrite32(value, ab->mem + window_start + + (offset & window_range_mask)); + } - u32 val; + u32 val, window_start; - spin_lock_bh(&ab_pci->window_lock); - ath11k_pci_select_window(ab_pci, offset); - val = ioread32(ab->mem + window_start + (offset & window_range_mask)); - spin_unlock_bh(&ab_pci->window_lock); + if (ab->bus_params.static_window_map) + window_start = ath11k_pci_get_window_start(ab, offset); + else + window_start = window_start; + + if (window_start == window_start) { + spin_lock_bh(&ab_pci->window_lock); + ath11k_pci_select_window(ab_pci, offset); + val = ioread32(ab->mem + window_start + + (offset & window_range_mask)); + spin_unlock_bh(&ab_pci->window_lock); + } else { + val = ioread32(ab->mem + window_start + + (offset & window_range_mask)); + } + if (ab->bus_params.static_window_map) + ath11k_pci_select_static_window(ab_pci); +
|
Networking
|
480a73610c95511e42fb7d0359b523f66883e51a
|
karthikeyan periyasamy
|
drivers
|
net
|
ath, ath11k, wireless
|
ath11k: add hal support for qcn9074
|
define the hal ring addresses and ring meta descriptor masks for qcn9074. move the platform-specific addresses into ath11k_hw_regs. define a tx_mesh_enable op in ath11k_hw_ops since it accesses a platform-specific tcl descriptor field.
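a sketch of the per-chip ops dispatch this patch introduces for tx_mesh_enable: ipq8074-family chips set a bit in the tcl descriptor's info2 word, while qcn9074 uses info3, so common code calls through hw_ops instead of hardcoding the field. bit positions follow the masks in the diff (bit 20 of info2; genmask(31, 30) of info3, modelled here as bit 30 for the boolean case).

```python
IPQ8074_MESH_ENABLE_BIT = 1 << 20   # hal_ipq8074_tcl_data_cmd_info2_mesh_enable
QCN9074_MESH_ENABLE_BIT = 1 << 30   # low bit of the qcn9074 info3 mesh field

def ipq8074_tx_mesh_enable(tcl_cmd):
    tcl_cmd["info2"] |= IPQ8074_MESH_ENABLE_BIT

def qcn9074_tx_mesh_enable(tcl_cmd):
    tcl_cmd["info3"] |= QCN9074_MESH_ENABLE_BIT

hw_ops = {
    "ipq8074": {"tx_mesh_enable": ipq8074_tx_mesh_enable},
    "qcn9074": {"tx_mesh_enable": qcn9074_tx_mesh_enable},
}

def fill_tcl_cmd(hw, enable_mesh):
    # common code stays chip-agnostic and dispatches through hw_ops,
    # as hal_tx does via ab->hw_params.hw_ops->tx_mesh_enable
    cmd = {"info2": 0, "info3": 0}
    if enable_mesh:
        hw_ops[hw]["tx_mesh_enable"](cmd)
    return cmd
```

the same indirection pattern covers the register offsets moved into ath11k_hw_regs: common hal code reads `ab->hw_params.regs->...` rather than a fixed macro.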
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for an improved sound experience on virtualized guests; io_uring support for multi-shot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for qcn9074
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['ath11k ']
|
['h', 'c']
| 11
| 269
| 89
|
--- diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c --- a/drivers/net/wireless/ath/ath11k/core.c +++ b/drivers/net/wireless/ath/ath11k/core.c + .hw_ops = &qcn9074_ops, + .internal_sleep_clock = false, + .regs = &qcn9074_regs, + .rxdma1_enable = true, + .num_rxmda_per_pdev = 1, + .rx_mac_buf_ring = false, + .vdev_start_delay = false, + .htt_peer_map_v2 = true, + .tcl_0_only = false, + .interface_modes = bit(nl80211_iftype_station) | + bit(nl80211_iftype_ap) | + bit(nl80211_iftype_mesh_point), + .supports_monitor = true, + .supports_shadow_regs = false, + .idle_ps = false, + .cold_boot_calib = false, + .supports_suspend = false, diff --git a/drivers/net/wireless/ath/ath11k/dp_tx.c b/drivers/net/wireless/ath/ath11k/dp_tx.c --- a/drivers/net/wireless/ath/ath11k/dp_tx.c +++ b/drivers/net/wireless/ath/ath11k/dp_tx.c - ti.flags1 |= field_prep(hal_tcl_data_cmd_info2_mesh_enable, 1); + ti.enable_mesh = true; diff --git a/drivers/net/wireless/ath/ath11k/hal.c b/drivers/net/wireless/ath/ath11k/hal.c --- a/drivers/net/wireless/ath/ath11k/hal.c +++ b/drivers/net/wireless/ath/ath11k/hal.c - .reg_start = { - (hal_seq_wcss_umac_ce0_src_reg + - hal_ce_dst_ring_base_lsb), - hal_seq_wcss_umac_ce0_src_reg + hal_ce_dst_ring_hp, - }, - .reg_size = { - (hal_seq_wcss_umac_ce1_src_reg - - hal_seq_wcss_umac_ce0_src_reg), - (hal_seq_wcss_umac_ce1_src_reg - - hal_seq_wcss_umac_ce0_src_reg), - }, - .reg_start = { - (hal_seq_wcss_umac_ce0_dst_reg + - hal_ce_dst_ring_base_lsb), - hal_seq_wcss_umac_ce0_dst_reg + hal_ce_dst_ring_hp, - }, - .reg_size = { - (hal_seq_wcss_umac_ce1_dst_reg - - hal_seq_wcss_umac_ce0_dst_reg), - (hal_seq_wcss_umac_ce1_dst_reg - - hal_seq_wcss_umac_ce0_dst_reg), - }, - .reg_start = { - (hal_seq_wcss_umac_ce0_dst_reg + - hal_ce_dst_status_ring_base_lsb), - (hal_seq_wcss_umac_ce0_dst_reg + - hal_ce_dst_status_ring_hp), - }, - .reg_size = { - (hal_seq_wcss_umac_ce1_dst_reg - - hal_seq_wcss_umac_ce0_dst_reg), - 
(hal_seq_wcss_umac_ce1_dst_reg - - hal_seq_wcss_umac_ce0_dst_reg), - }, - .reg_start = { - (hal_seq_wcss_umac_wbm_reg + - hal_wbm_idle_link_ring_base_lsb), - (hal_seq_wcss_umac_wbm_reg + hal_wbm_idle_link_ring_hp), - }, - .reg_start = { - (hal_seq_wcss_umac_wbm_reg + - hal_wbm_release_ring_base_lsb), - (hal_seq_wcss_umac_wbm_reg + hal_wbm_release_ring_hp), - }, - .reg_start = { - (hal_seq_wcss_umac_wbm_reg + - hal_wbm0_release_ring_base_lsb), - (hal_seq_wcss_umac_wbm_reg + hal_wbm0_release_ring_hp), - }, - .reg_size = { - (hal_wbm1_release_ring_base_lsb - - hal_wbm0_release_ring_base_lsb), - (hal_wbm1_release_ring_hp - hal_wbm0_release_ring_hp), - }, - hal_wbm_idle_link_ring_misc_addr, 0x40); + hal_wbm_idle_link_ring_misc_addr(ab), 0x40); + s = &hal->srng_config[hal_ce_src]; + s->reg_start[0] = hal_seq_wcss_umac_ce0_src_reg(ab) + hal_ce_dst_ring_base_lsb; + s->reg_start[1] = hal_seq_wcss_umac_ce0_src_reg(ab) + hal_ce_dst_ring_hp; + s->reg_size[0] = hal_seq_wcss_umac_ce1_src_reg(ab) - + hal_seq_wcss_umac_ce0_src_reg(ab); + s->reg_size[1] = hal_seq_wcss_umac_ce1_src_reg(ab) - + hal_seq_wcss_umac_ce0_src_reg(ab); + + s = &hal->srng_config[hal_ce_dst]; + s->reg_start[0] = hal_seq_wcss_umac_ce0_dst_reg(ab) + hal_ce_dst_ring_base_lsb; + s->reg_start[1] = hal_seq_wcss_umac_ce0_dst_reg(ab) + hal_ce_dst_ring_hp; + s->reg_size[0] = hal_seq_wcss_umac_ce1_dst_reg(ab) - + hal_seq_wcss_umac_ce0_dst_reg(ab); + s->reg_size[1] = hal_seq_wcss_umac_ce1_dst_reg(ab) - + hal_seq_wcss_umac_ce0_dst_reg(ab); + + s = &hal->srng_config[hal_ce_dst_status]; + s->reg_start[0] = hal_seq_wcss_umac_ce0_dst_reg(ab) + + hal_ce_dst_status_ring_base_lsb; + s->reg_start[1] = hal_seq_wcss_umac_ce0_dst_reg(ab) + hal_ce_dst_status_ring_hp; + s->reg_size[0] = hal_seq_wcss_umac_ce1_dst_reg(ab) - + hal_seq_wcss_umac_ce0_dst_reg(ab); + s->reg_size[1] = hal_seq_wcss_umac_ce1_dst_reg(ab) - + hal_seq_wcss_umac_ce0_dst_reg(ab); + + s = &hal->srng_config[hal_wbm_idle_link]; + s->reg_start[0] = 
hal_seq_wcss_umac_wbm_reg + hal_wbm_idle_link_ring_base_lsb(ab); + s->reg_start[1] = hal_seq_wcss_umac_wbm_reg + hal_wbm_idle_link_ring_hp; + + s = &hal->srng_config[hal_sw2wbm_release]; + s->reg_start[0] = hal_seq_wcss_umac_wbm_reg + hal_wbm_release_ring_base_lsb(ab); + s->reg_start[1] = hal_seq_wcss_umac_wbm_reg + hal_wbm_release_ring_hp; + + s = &hal->srng_config[hal_wbm2sw_release]; + s->reg_start[0] = hal_seq_wcss_umac_wbm_reg + hal_wbm0_release_ring_base_lsb(ab); + s->reg_start[1] = hal_seq_wcss_umac_wbm_reg + hal_wbm0_release_ring_hp; + s->reg_size[0] = hal_wbm1_release_ring_base_lsb(ab) - + hal_wbm0_release_ring_base_lsb(ab); + s->reg_size[1] = hal_wbm1_release_ring_hp - hal_wbm0_release_ring_hp; + diff --git a/drivers/net/wireless/ath/ath11k/hal.h b/drivers/net/wireless/ath/ath11k/hal.h --- a/drivers/net/wireless/ath/ath11k/hal.h +++ b/drivers/net/wireless/ath/ath11k/hal.h -#define hal_seq_wcss_umac_ce0_src_reg 0x00a00000 -#define hal_seq_wcss_umac_ce0_dst_reg 0x00a01000 -#define hal_seq_wcss_umac_ce1_src_reg 0x00a02000 -#define hal_seq_wcss_umac_ce1_dst_reg 0x00a03000 +#define hal_seq_wcss_umac_ce0_src_reg(x) \ + (ab->hw_params.regs->hal_seq_wcss_umac_ce0_src_reg) +#define hal_seq_wcss_umac_ce0_dst_reg(x) \ + (ab->hw_params.regs->hal_seq_wcss_umac_ce0_dst_reg) +#define hal_seq_wcss_umac_ce1_src_reg(x) \ + (ab->hw_params.regs->hal_seq_wcss_umac_ce1_src_reg) +#define hal_seq_wcss_umac_ce1_dst_reg(x) \ + (ab->hw_params.regs->hal_seq_wcss_umac_ce1_dst_reg) -#define hal_wbm_idle_link_ring_base_lsb 0x00000860 -#define hal_wbm_idle_link_ring_misc_addr 0x00000870 +#define hal_wbm_idle_link_ring_base_lsb(x) \ + (ab->hw_params.regs->hal_wbm_idle_link_ring_base_lsb) +#define hal_wbm_idle_link_ring_misc_addr(x) \ + (ab->hw_params.regs->hal_wbm_idle_link_ring_misc) -#define hal_wbm_release_ring_base_lsb 0x000001d8 +#define hal_wbm_release_ring_base_lsb(x) \ + (ab->hw_params.regs->hal_wbm_release_ring_base_lsb) -#define hal_wbm0_release_ring_base_lsb 0x00000910 
-#define hal_wbm1_release_ring_base_lsb 0x00000968 +#define hal_wbm0_release_ring_base_lsb(x) \ + (ab->hw_params.regs->hal_wbm0_release_ring_base_lsb) +#define hal_wbm1_release_ring_base_lsb(x) \ + (ab->hw_params.regs->hal_wbm1_release_ring_base_lsb) diff --git a/drivers/net/wireless/ath/ath11k/hal_desc.h b/drivers/net/wireless/ath/ath11k/hal_desc.h --- a/drivers/net/wireless/ath/ath11k/hal_desc.h +++ b/drivers/net/wireless/ath/ath11k/hal_desc.h -#define hal_tcl_data_cmd_info2_buf_timestamp genmask(18, 0) -#define hal_tcl_data_cmd_info2_buf_t_valid bit(19) -#define hal_tcl_data_cmd_info2_mesh_enable bit(20) -#define hal_tcl_data_cmd_info2_tid_overwrite bit(21) -#define hal_tcl_data_cmd_info2_tid genmask(25, 22) -#define hal_tcl_data_cmd_info2_lmac_id genmask(27, 26) +#define hal_tcl_data_cmd_info2_buf_timestamp genmask(18, 0) +#define hal_tcl_data_cmd_info2_buf_t_valid bit(19) +#define hal_ipq8074_tcl_data_cmd_info2_mesh_enable bit(20) +#define hal_tcl_data_cmd_info2_tid_overwrite bit(21) +#define hal_tcl_data_cmd_info2_tid genmask(25, 22) +#define hal_tcl_data_cmd_info2_lmac_id genmask(27, 26) +#define hal_qcn9074_tcl_data_cmd_info3_mesh_enable genmask(31, 30) diff --git a/drivers/net/wireless/ath/ath11k/hal_tx.c b/drivers/net/wireless/ath/ath11k/hal_tx.c --- a/drivers/net/wireless/ath/ath11k/hal_tx.c +++ b/drivers/net/wireless/ath/ath11k/hal_tx.c + + if (ti->enable_mesh && ab->hw_params.hw_ops->tx_mesh_enable) + ab->hw_params.hw_ops->tx_mesh_enable(ab, tcl_cmd); diff --git a/drivers/net/wireless/ath/ath11k/hal_tx.h b/drivers/net/wireless/ath/ath11k/hal_tx.h --- a/drivers/net/wireless/ath/ath11k/hal_tx.h +++ b/drivers/net/wireless/ath/ath11k/hal_tx.h + bool enable_mesh; diff --git a/drivers/net/wireless/ath/ath11k/hw.c b/drivers/net/wireless/ath/ath11k/hw.c --- a/drivers/net/wireless/ath/ath11k/hw.c +++ b/drivers/net/wireless/ath/ath11k/hw.c +static void ath11k_hw_ipq8074_tx_mesh_enable(struct ath11k_base *ab, + struct hal_tcl_data_cmd *tcl_cmd) +{ + 
tcl_cmd->info2 |= field_prep(hal_ipq8074_tcl_data_cmd_info2_mesh_enable, + true); +} + +static void ath11k_hw_qcn9074_tx_mesh_enable(struct ath11k_base *ab, + struct hal_tcl_data_cmd *tcl_cmd) +{ + tcl_cmd->info3 |= field_prep(hal_qcn9074_tcl_data_cmd_info3_mesh_enable, + true); +} + + .tx_mesh_enable = ath11k_hw_ipq8074_tx_mesh_enable, + .tx_mesh_enable = ath11k_hw_ipq8074_tx_mesh_enable, + .tx_mesh_enable = ath11k_hw_ipq8074_tx_mesh_enable, +}; + +const struct ath11k_hw_ops qcn9074_ops = { + .get_hw_mac_from_pdev_id = ath11k_hw_ipq6018_mac_from_pdev_id, + .wmi_init_config = ath11k_init_wmi_config_ipq8074, + .mac_id_to_pdev_id = ath11k_hw_mac_id_to_pdev_id_ipq8074, + .mac_id_to_srng_id = ath11k_hw_mac_id_to_srng_id_ipq8074, + .tx_mesh_enable = ath11k_hw_qcn9074_tx_mesh_enable, + /* wcss relative address */ + .hal_seq_wcss_umac_ce0_src_reg = 0x00a00000, + .hal_seq_wcss_umac_ce0_dst_reg = 0x00a01000, + .hal_seq_wcss_umac_ce1_src_reg = 0x00a02000, + .hal_seq_wcss_umac_ce1_dst_reg = 0x00a03000, + + /* wbm idle address */ + .hal_wbm_idle_link_ring_base_lsb = 0x00000860, + .hal_wbm_idle_link_ring_misc = 0x00000870, + + /* sw2wbm release address */ + .hal_wbm_release_ring_base_lsb = 0x000001d8, + + /* wbm2sw release address */ + .hal_wbm0_release_ring_base_lsb = 0x00000910, + .hal_wbm1_release_ring_base_lsb = 0x00000968, + + /* pcie base address */ + .pcie_qserdes_sysclk_en_sel = 0x0, + .pcie_pcs_osc_dtct_config_base = 0x0, + + /* wcss relative address */ + .hal_seq_wcss_umac_ce0_src_reg = 0x00a00000, + .hal_seq_wcss_umac_ce0_dst_reg = 0x00a01000, + .hal_seq_wcss_umac_ce1_src_reg = 0x00a02000, + .hal_seq_wcss_umac_ce1_dst_reg = 0x00a03000, + + /* wbm idle address */ + .hal_wbm_idle_link_ring_base_lsb = 0x00000860, + .hal_wbm_idle_link_ring_misc = 0x00000870, + + /* sw2wbm release address */ + .hal_wbm_release_ring_base_lsb = 0x000001d8, + + /* wbm2sw release address */ + .hal_wbm0_release_ring_base_lsb = 0x00000910, + .hal_wbm1_release_ring_base_lsb = 0x00000968, + + /* 
pcie base address */ + .pcie_qserdes_sysclk_en_sel = 0x01e0c0ac, + .pcie_pcs_osc_dtct_config_base = 0x01e0c628, +}; + +const struct ath11k_hw_regs qcn9074_regs = { + /* sw2tcl(x) r0 ring configuration address */ + .hal_tcl1_ring_base_lsb = 0x000004f0, + .hal_tcl1_ring_base_msb = 0x000004f4, + .hal_tcl1_ring_id = 0x000004f8, + .hal_tcl1_ring_misc = 0x00000500, + .hal_tcl1_ring_tp_addr_lsb = 0x0000050c, + .hal_tcl1_ring_tp_addr_msb = 0x00000510, + .hal_tcl1_ring_consumer_int_setup_ix0 = 0x00000520, + .hal_tcl1_ring_consumer_int_setup_ix1 = 0x00000524, + .hal_tcl1_ring_msi1_base_lsb = 0x00000538, + .hal_tcl1_ring_msi1_base_msb = 0x0000053c, + .hal_tcl1_ring_msi1_data = 0x00000540, + .hal_tcl2_ring_base_lsb = 0x00000548, + .hal_tcl_ring_base_lsb = 0x000005f8, + + /* tcl status ring address */ + .hal_tcl_status_ring_base_lsb = 0x00000700, + + /* reo2sw(x) r0 ring configuration address */ + .hal_reo1_ring_base_lsb = 0x0000029c, + .hal_reo1_ring_base_msb = 0x000002a0, + .hal_reo1_ring_id = 0x000002a4, + .hal_reo1_ring_misc = 0x000002ac, + .hal_reo1_ring_hp_addr_lsb = 0x000002b0, + .hal_reo1_ring_hp_addr_msb = 0x000002b4, + .hal_reo1_ring_producer_int_setup = 0x000002c0, + .hal_reo1_ring_msi1_base_lsb = 0x000002e4, + .hal_reo1_ring_msi1_base_msb = 0x000002e8, + .hal_reo1_ring_msi1_data = 0x000002ec, + .hal_reo2_ring_base_lsb = 0x000002f4, + .hal_reo1_aging_thresh_ix_0 = 0x00000564, + .hal_reo1_aging_thresh_ix_1 = 0x00000568, + .hal_reo1_aging_thresh_ix_2 = 0x0000056c, + .hal_reo1_aging_thresh_ix_3 = 0x00000570, + + /* reo2sw(x) r2 ring pointers (head/tail) address */ + .hal_reo1_ring_hp = 0x00003038, + .hal_reo1_ring_tp = 0x0000303c, + .hal_reo2_ring_hp = 0x00003040, + + /* reo2tcl r0 ring configuration address */ + .hal_reo_tcl_ring_base_lsb = 0x000003fc, + .hal_reo_tcl_ring_hp = 0x00003058, + + /* reo status address */ + .hal_reo_status_ring_base_lsb = 0x00000504, + .hal_reo_status_hp = 0x00003070, + + /* wcss relative address */ + .hal_seq_wcss_umac_ce0_src_reg = 
0x01b80000, + .hal_seq_wcss_umac_ce0_dst_reg = 0x01b81000, + .hal_seq_wcss_umac_ce1_src_reg = 0x01b82000, + .hal_seq_wcss_umac_ce1_dst_reg = 0x01b83000, + + /* wbm idle address */ + .hal_wbm_idle_link_ring_base_lsb = 0x00000874, + .hal_wbm_idle_link_ring_misc = 0x00000884, + + /* sw2wbm release address */ + .hal_wbm_release_ring_base_lsb = 0x000001ec, + + /* wbm2sw release address */ + .hal_wbm0_release_ring_base_lsb = 0x00000924, + .hal_wbm1_release_ring_base_lsb = 0x0000097c, + + /* pcie base address */ + .pcie_qserdes_sysclk_en_sel = 0x01e0e0a8, + .pcie_pcs_osc_dtct_config_base = 0x01e0f45c, diff --git a/drivers/net/wireless/ath/ath11k/hw.h b/drivers/net/wireless/ath/ath11k/hw.h --- a/drivers/net/wireless/ath/ath11k/hw.h +++ b/drivers/net/wireless/ath/ath11k/hw.h +struct hal_tcl_data_cmd; + + void (*tx_mesh_enable)(struct ath11k_base *ab, + struct hal_tcl_data_cmd *tcl_cmd); +extern const struct ath11k_hw_ops qcn9074_ops; + + u32 hal_seq_wcss_umac_ce0_src_reg; + u32 hal_seq_wcss_umac_ce0_dst_reg; + u32 hal_seq_wcss_umac_ce1_src_reg; + u32 hal_seq_wcss_umac_ce1_dst_reg; + + u32 hal_wbm_idle_link_ring_base_lsb; + u32 hal_wbm_idle_link_ring_misc; + + u32 hal_wbm_release_ring_base_lsb; + + u32 hal_wbm0_release_ring_base_lsb; + u32 hal_wbm1_release_ring_base_lsb; + + u32 pcie_qserdes_sysclk_en_sel; + u32 pcie_pcs_osc_dtct_config_base; +extern const struct ath11k_hw_regs qcn9074_regs; diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c --- a/drivers/net/wireless/ath/ath11k/pci.c +++ b/drivers/net/wireless/ath/ath11k/pci.c - pcie_qserdes_com_sysclk_en_sel_reg, + pcie_qserdes_com_sysclk_en_sel_reg(ab), - pcie_usb3_pcs_misc_osc_dtct_config1_reg, - pcie_usb3_pcs_misc_osc_dtct_config1_val, - pcie_usb3_pcs_misc_osc_dtct_config_msk); + pcie_pcs_osc_dtct_config1_reg(ab), + pcie_pcs_osc_dtct_config1_val, + pcie_pcs_osc_dtct_config_msk); - pcie_usb3_pcs_misc_osc_dtct_config2_reg, - pcie_usb3_pcs_misc_osc_dtct_config2_val, - 
pcie_usb3_pcs_misc_osc_dtct_config_msk); + pcie_pcs_osc_dtct_config2_reg(ab), + pcie_pcs_osc_dtct_config2_val, + pcie_pcs_osc_dtct_config_msk); - pcie_usb3_pcs_misc_osc_dtct_config4_reg, - pcie_usb3_pcs_misc_osc_dtct_config4_val, - pcie_usb3_pcs_misc_osc_dtct_config_msk); + pcie_pcs_osc_dtct_config4_reg(ab), + pcie_pcs_osc_dtct_config4_val, + pcie_pcs_osc_dtct_config_msk); diff --git a/drivers/net/wireless/ath/ath11k/pci.h b/drivers/net/wireless/ath/ath11k/pci.h --- a/drivers/net/wireless/ath/ath11k/pci.h +++ b/drivers/net/wireless/ath/ath11k/pci.h -#define pcie_qserdes_com_sysclk_en_sel_reg 0x01e0c0ac +#define pcie_qserdes_com_sysclk_en_sel_reg(x) \ + (ab->hw_params.regs->pcie_qserdes_sysclk_en_sel) -#define pcie_usb3_pcs_misc_osc_dtct_config1_reg 0x01e0c628 -#define pcie_usb3_pcs_misc_osc_dtct_config1_val 0x02 -#define pcie_usb3_pcs_misc_osc_dtct_config2_reg 0x01e0c62c -#define pcie_usb3_pcs_misc_osc_dtct_config2_val 0x52 -#define pcie_usb3_pcs_misc_osc_dtct_config4_reg 0x01e0c634 -#define pcie_usb3_pcs_misc_osc_dtct_config4_val 0xff -#define pcie_usb3_pcs_misc_osc_dtct_config_msk 0x000000ff +#define pcie_pcs_osc_dtct_config1_reg(x) \ + (ab->hw_params.regs->pcie_pcs_osc_dtct_config_base) +#define pcie_pcs_osc_dtct_config1_val 0x02 +#define pcie_pcs_osc_dtct_config2_reg(x) \ + (ab->hw_params.regs->pcie_pcs_osc_dtct_config_base + 0x4) +#define pcie_pcs_osc_dtct_config2_val 0x52 +#define pcie_pcs_osc_dtct_config4_reg(x) \ + (ab->hw_params.regs->pcie_pcs_osc_dtct_config_base + 0xc) +#define pcie_pcs_osc_dtct_config4_val 0xff +#define pcie_pcs_osc_dtct_config_msk 0x000000ff
|

Networking
|
6fe6f68fef7f7d5f6b5b62fde78de91cdc528c58
|
Karthikeyan Periyasamy
|
drivers
|
net
|
ath, ath11k, wireless
|
ath11k: add data path support for qcn9074
|
the hal rx descriptor differs for the qcn9074 target type, since the rx_msdu_end, rx_msdu_start and rx_mpdu_start elements have different placement/alignment. in order to keep the data path generic, introduce platform-specific hal rx descriptor access ops in ath11k_hw_ops.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset on each syscall; support for concurrent TLB flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound on virtualized guests; io_uring support for multishot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for qcn9074
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['ath11k ']
|
['h', 'c']
| 8
| 913
| 237
|
--- diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c --- a/drivers/net/wireless/ath/ath11k/core.c +++ b/drivers/net/wireless/ath/ath11k/core.c + .hal_desc_sz = sizeof(struct hal_rx_desc_ipq8074), + .hal_desc_sz = sizeof(struct hal_rx_desc_ipq8074), + .hal_desc_sz = sizeof(struct hal_rx_desc_ipq8074), + .hal_desc_sz = sizeof(struct hal_rx_desc_qcn9074), diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c --- a/drivers/net/wireless/ath/ath11k/dp_rx.c +++ b/drivers/net/wireless/ath/ath11k/dp_rx.c -static u8 *ath11k_dp_rx_h_80211_hdr(struct hal_rx_desc *desc) +static u8 *ath11k_dp_rx_h_80211_hdr(struct ath11k_base *ab, struct hal_rx_desc *desc) - return desc->hdr_status; + return ab->hw_params.hw_ops->rx_desc_get_hdr_status(desc); -static enum hal_encrypt_type ath11k_dp_rx_h_mpdu_start_enctype(struct hal_rx_desc *desc) +static enum hal_encrypt_type ath11k_dp_rx_h_mpdu_start_enctype(struct ath11k_base *ab, + struct hal_rx_desc *desc) - if (!(__le32_to_cpu(desc->mpdu_start.info1) & - rx_mpdu_start_info1_encrypt_info_valid)) + if (!ab->hw_params.hw_ops->rx_desc_encrypt_valid(desc)) - return field_get(rx_mpdu_start_info2_enc_type, - __le32_to_cpu(desc->mpdu_start.info2)); + return ab->hw_params.hw_ops->rx_desc_get_encrypt_type(desc); -static u8 ath11k_dp_rx_h_msdu_start_decap_type(struct hal_rx_desc *desc) +static u8 ath11k_dp_rx_h_msdu_start_decap_type(struct ath11k_base *ab, + struct hal_rx_desc *desc) - return field_get(rx_msdu_start_info2_decap_format, - __le32_to_cpu(desc->msdu_start.info2)); + return ab->hw_params.hw_ops->rx_desc_get_decap_type(desc); -static u8 ath11k_dp_rx_h_msdu_start_mesh_ctl_present(struct hal_rx_desc *desc) +static u8 ath11k_dp_rx_h_msdu_start_mesh_ctl_present(struct ath11k_base *ab, + struct hal_rx_desc *desc) - return field_get(rx_msdu_start_info2_mesh_ctrl_present, - __le32_to_cpu(desc->msdu_start.info2)); + return 
ab->hw_params.hw_ops->rx_desc_get_mesh_ctl(desc); -static bool ath11k_dp_rx_h_mpdu_start_seq_ctrl_valid(struct hal_rx_desc *desc) +static bool ath11k_dp_rx_h_mpdu_start_seq_ctrl_valid(struct ath11k_base *ab, + struct hal_rx_desc *desc) - return !!field_get(rx_mpdu_start_info1_mpdu_seq_ctrl_valid, - __le32_to_cpu(desc->mpdu_start.info1)); + return ab->hw_params.hw_ops->rx_desc_get_mpdu_seq_ctl_vld(desc); -static bool ath11k_dp_rx_h_mpdu_start_fc_valid(struct hal_rx_desc *desc) +static bool ath11k_dp_rx_h_mpdu_start_fc_valid(struct ath11k_base *ab, + struct hal_rx_desc *desc) - return !!field_get(rx_mpdu_start_info1_mpdu_fctrl_valid, - __le32_to_cpu(desc->mpdu_start.info1)); + return ab->hw_params.hw_ops->rx_desc_get_mpdu_fc_valid(desc); -static bool ath11k_dp_rx_h_mpdu_start_more_frags(struct sk_buff *skb) +static bool ath11k_dp_rx_h_mpdu_start_more_frags(struct ath11k_base *ab, + struct sk_buff *skb) - hdr = (struct ieee80211_hdr *)(skb->data + hal_rx_desc_size); + hdr = (struct ieee80211_hdr *)(skb->data + ab->hw_params.hal_desc_sz); -static u16 ath11k_dp_rx_h_mpdu_start_frag_no(struct sk_buff *skb) +static u16 ath11k_dp_rx_h_mpdu_start_frag_no(struct ath11k_base *ab, + struct sk_buff *skb) - hdr = (struct ieee80211_hdr *)(skb->data + hal_rx_desc_size); + hdr = (struct ieee80211_hdr *)(skb->data + ab->hw_params.hal_desc_sz); -static u16 ath11k_dp_rx_h_mpdu_start_seq_no(struct hal_rx_desc *desc) +static u16 ath11k_dp_rx_h_mpdu_start_seq_no(struct ath11k_base *ab, + struct hal_rx_desc *desc) - return field_get(rx_mpdu_start_info1_mpdu_seq_num, - __le32_to_cpu(desc->mpdu_start.info1)); + return ab->hw_params.hw_ops->rx_desc_get_mpdu_start_seq_no(desc); -static bool ath11k_dp_rx_h_attn_msdu_done(struct hal_rx_desc *desc) +static void *ath11k_dp_rx_get_attention(struct ath11k_base *ab, + struct hal_rx_desc *desc) +{ + return ab->hw_params.hw_ops->rx_desc_get_attention(desc); +} + +static bool ath11k_dp_rx_h_attn_msdu_done(struct rx_attention *attn) - 
__le32_to_cpu(desc->attention.info2)); + __le32_to_cpu(attn->info2)); -static bool ath11k_dp_rx_h_attn_l4_cksum_fail(struct hal_rx_desc *desc) +static bool ath11k_dp_rx_h_attn_l4_cksum_fail(struct rx_attention *attn) - __le32_to_cpu(desc->attention.info1)); + __le32_to_cpu(attn->info1)); -static bool ath11k_dp_rx_h_attn_ip_cksum_fail(struct hal_rx_desc *desc) +static bool ath11k_dp_rx_h_attn_ip_cksum_fail(struct rx_attention *attn) - __le32_to_cpu(desc->attention.info1)); + __le32_to_cpu(attn->info1)); -static bool ath11k_dp_rx_h_attn_is_decrypted(struct hal_rx_desc *desc) +static bool ath11k_dp_rx_h_attn_is_decrypted(struct rx_attention *attn) - __le32_to_cpu(desc->attention.info2)) == + __le32_to_cpu(attn->info2)) == -static u32 ath11k_dp_rx_h_attn_mpdu_err(struct hal_rx_desc *desc) +static u32 ath11k_dp_rx_h_attn_mpdu_err(struct rx_attention *attn) - u32 info = __le32_to_cpu(desc->attention.info1); + u32 info = __le32_to_cpu(attn->info1); -static u16 ath11k_dp_rx_h_msdu_start_msdu_len(struct hal_rx_desc *desc) +static u16 ath11k_dp_rx_h_msdu_start_msdu_len(struct ath11k_base *ab, + struct hal_rx_desc *desc) - return field_get(rx_msdu_start_info1_msdu_length, - __le32_to_cpu(desc->msdu_start.info1)); + return ab->hw_params.hw_ops->rx_desc_get_msdu_len(desc); -static u8 ath11k_dp_rx_h_msdu_start_sgi(struct hal_rx_desc *desc) +static u8 ath11k_dp_rx_h_msdu_start_sgi(struct ath11k_base *ab, + struct hal_rx_desc *desc) - return field_get(rx_msdu_start_info3_sgi, - __le32_to_cpu(desc->msdu_start.info3)); + return ab->hw_params.hw_ops->rx_desc_get_msdu_sgi(desc); -static u8 ath11k_dp_rx_h_msdu_start_rate_mcs(struct hal_rx_desc *desc) +static u8 ath11k_dp_rx_h_msdu_start_rate_mcs(struct ath11k_base *ab, + struct hal_rx_desc *desc) - return field_get(rx_msdu_start_info3_rate_mcs, - __le32_to_cpu(desc->msdu_start.info3)); + return ab->hw_params.hw_ops->rx_desc_get_msdu_rate_mcs(desc); -static u8 ath11k_dp_rx_h_msdu_start_rx_bw(struct hal_rx_desc *desc) +static u8 
ath11k_dp_rx_h_msdu_start_rx_bw(struct ath11k_base *ab, + struct hal_rx_desc *desc) - return field_get(rx_msdu_start_info3_recv_bw, - __le32_to_cpu(desc->msdu_start.info3)); + return ab->hw_params.hw_ops->rx_desc_get_msdu_rx_bw(desc); -static u32 ath11k_dp_rx_h_msdu_start_freq(struct hal_rx_desc *desc) +static u32 ath11k_dp_rx_h_msdu_start_freq(struct ath11k_base *ab, + struct hal_rx_desc *desc) - return __le32_to_cpu(desc->msdu_start.phy_meta_data); + return ab->hw_params.hw_ops->rx_desc_get_msdu_freq(desc); -static u8 ath11k_dp_rx_h_msdu_start_pkt_type(struct hal_rx_desc *desc) +static u8 ath11k_dp_rx_h_msdu_start_pkt_type(struct ath11k_base *ab, + struct hal_rx_desc *desc) - return field_get(rx_msdu_start_info3_pkt_type, - __le32_to_cpu(desc->msdu_start.info3)); + return ab->hw_params.hw_ops->rx_desc_get_msdu_pkt_type(desc); -static u8 ath11k_dp_rx_h_msdu_start_nss(struct hal_rx_desc *desc) +static u8 ath11k_dp_rx_h_msdu_start_nss(struct ath11k_base *ab, + struct hal_rx_desc *desc) - u8 mimo_ss_bitmap = field_get(rx_msdu_start_info3_mimo_ss_bitmap, - __le32_to_cpu(desc->msdu_start.info3)); - - return hweight8(mimo_ss_bitmap); + return hweight8(ab->hw_params.hw_ops->rx_desc_get_msdu_nss(desc)); -static u8 ath11k_dp_rx_h_mpdu_start_tid(struct hal_rx_desc *desc) +static u8 ath11k_dp_rx_h_mpdu_start_tid(struct ath11k_base *ab, + struct hal_rx_desc *desc) - return field_get(rx_mpdu_start_info2_tid, - __le32_to_cpu(desc->mpdu_start.info2)); + return ab->hw_params.hw_ops->rx_desc_get_mpdu_tid(desc); -static u16 ath11k_dp_rx_h_mpdu_start_peer_id(struct hal_rx_desc *desc) +static u16 ath11k_dp_rx_h_mpdu_start_peer_id(struct ath11k_base *ab, + struct hal_rx_desc *desc) - return __le16_to_cpu(desc->mpdu_start.sw_peer_id); + return ab->hw_params.hw_ops->rx_desc_get_mpdu_peer_id(desc); -static u8 ath11k_dp_rx_h_msdu_end_l3pad(struct hal_rx_desc *desc) +static u8 ath11k_dp_rx_h_msdu_end_l3pad(struct ath11k_base *ab, + struct hal_rx_desc *desc) - return 
field_get(rx_msdu_end_info2_l3_hdr_padding, - __le32_to_cpu(desc->msdu_end.info2)); + return ab->hw_params.hw_ops->rx_desc_get_l3_pad_bytes(desc); -static bool ath11k_dp_rx_h_msdu_end_first_msdu(struct hal_rx_desc *desc) +static bool ath11k_dp_rx_h_msdu_end_first_msdu(struct ath11k_base *ab, + struct hal_rx_desc *desc) - return !!field_get(rx_msdu_end_info2_first_msdu, - __le32_to_cpu(desc->msdu_end.info2)); + return ab->hw_params.hw_ops->rx_desc_get_first_msdu(desc); -static bool ath11k_dp_rx_h_msdu_end_last_msdu(struct hal_rx_desc *desc) +static bool ath11k_dp_rx_h_msdu_end_last_msdu(struct ath11k_base *ab, + struct hal_rx_desc *desc) - return !!field_get(rx_msdu_end_info2_last_msdu, - __le32_to_cpu(desc->msdu_end.info2)); + return ab->hw_params.hw_ops->rx_desc_get_last_msdu(desc); -static void ath11k_dp_rx_desc_end_tlv_copy(struct hal_rx_desc *fdesc, +static void ath11k_dp_rx_desc_end_tlv_copy(struct ath11k_base *ab, + struct hal_rx_desc *fdesc, - memcpy((u8 *)&fdesc->msdu_end, (u8 *)&ldesc->msdu_end, - sizeof(struct rx_msdu_end)); - memcpy((u8 *)&fdesc->attention, (u8 *)&ldesc->attention, - sizeof(struct rx_attention)); - memcpy((u8 *)&fdesc->mpdu_end, (u8 *)&ldesc->mpdu_end, - sizeof(struct rx_mpdu_end)); + ab->hw_params.hw_ops->rx_desc_copy_attn_end_tlv(fdesc, ldesc); -static u32 ath11k_dp_rxdesc_get_mpdulen_err(struct hal_rx_desc *rx_desc) +static u32 ath11k_dp_rxdesc_get_mpdulen_err(struct rx_attention *attn) - struct rx_attention *rx_attn; - - rx_attn = &rx_desc->attention; - - __le32_to_cpu(rx_attn->info1)); + __le32_to_cpu(attn->info1)); -static u32 ath11k_dp_rxdesc_get_decap_format(struct hal_rx_desc *rx_desc) -{ - struct rx_msdu_start *rx_msdu_start; - - rx_msdu_start = &rx_desc->msdu_start; - - return field_get(rx_msdu_start_info2_decap_format, - __le32_to_cpu(rx_msdu_start->info2)); -} - -static u8 *ath11k_dp_rxdesc_get_80211hdr(struct hal_rx_desc *rx_desc) +static u8 *ath11k_dp_rxdesc_get_80211hdr(struct ath11k_base *ab, + struct hal_rx_desc 
*rx_desc) - rx_pkt_hdr = &rx_desc->msdu_payload[0]; + rx_pkt_hdr = ab->hw_params.hw_ops->rx_desc_get_msdu_payload(rx_desc); -static bool ath11k_dp_rxdesc_mpdu_valid(struct hal_rx_desc *rx_desc) +static bool ath11k_dp_rxdesc_mpdu_valid(struct ath11k_base *ab, + struct hal_rx_desc *rx_desc) - tlv_tag = field_get(hal_tlv_hdr_tag, - __le32_to_cpu(rx_desc->mpdu_start_tag)); + tlv_tag = ab->hw_params.hw_ops->rx_desc_get_mpdu_start_tag(rx_desc); -static u32 ath11k_dp_rxdesc_get_ppduid(struct hal_rx_desc *rx_desc) +static u32 ath11k_dp_rxdesc_get_ppduid(struct ath11k_base *ab, + struct hal_rx_desc *rx_desc) +{ + return ab->hw_params.hw_ops->rx_desc_get_mpdu_ppdu_id(rx_desc); +} + +static void ath11k_dp_rxdesc_set_msdu_len(struct ath11k_base *ab, + struct hal_rx_desc *desc, + u16 len) - return __le16_to_cpu(rx_desc->mpdu_start.phy_ppdu_id); + ab->hw_params.hw_ops->rx_desc_set_msdu_len(desc, len); + struct ath11k_base *ab = ar->ab; - int space_extra; - int rem_len; - int buf_len; + int space_extra, rem_len, buf_len; + u32 hal_rx_desc_sz = ar->ab->hw_params.hal_desc_sz; - buf_first_hdr_len = hal_rx_desc_size + l3pad_bytes; + buf_first_hdr_len = hal_rx_desc_sz + l3pad_bytes; - rxcb->is_first_msdu = ath11k_dp_rx_h_msdu_end_first_msdu(ldesc); - rxcb->is_last_msdu = ath11k_dp_rx_h_msdu_end_last_msdu(ldesc); + rxcb->is_first_msdu = ath11k_dp_rx_h_msdu_end_first_msdu(ab, ldesc); + rxcb->is_last_msdu = ath11k_dp_rx_h_msdu_end_last_msdu(ab, ldesc); - ath11k_dp_rx_desc_end_tlv_copy(rxcb->rx_desc, ldesc); + ath11k_dp_rx_desc_end_tlv_copy(ab, rxcb->rx_desc, ldesc); - buf_len = dp_rx_buffer_size - hal_rx_desc_size; + buf_len = dp_rx_buffer_size - hal_rx_desc_sz; - if (buf_len > (dp_rx_buffer_size - hal_rx_desc_size)) { + if (buf_len > (dp_rx_buffer_size - hal_rx_desc_sz)) { - skb_put(skb, buf_len + hal_rx_desc_size); - skb_pull(skb, hal_rx_desc_size); + skb_put(skb, buf_len + hal_rx_desc_sz); + skb_pull(skb, hal_rx_desc_sz); -static void ath11k_dp_rx_h_csum_offload(struct sk_buff *msdu) 
+static void ath11k_dp_rx_h_csum_offload(struct ath11k *ar, struct sk_buff *msdu) + struct rx_attention *rx_attention; - ip_csum_fail = ath11k_dp_rx_h_attn_ip_cksum_fail(rxcb->rx_desc); - l4_csum_fail = ath11k_dp_rx_h_attn_l4_cksum_fail(rxcb->rx_desc); + rx_attention = ath11k_dp_rx_get_attention(ar->ab, rxcb->rx_desc); + ip_csum_fail = ath11k_dp_rx_h_attn_ip_cksum_fail(rx_attention); + l4_csum_fail = ath11k_dp_rx_h_attn_l4_cksum_fail(rx_attention); - if (ath11k_dp_rx_h_msdu_start_mesh_ctl_present(rxcb->rx_desc)) + if (ath11k_dp_rx_h_msdu_start_mesh_ctl_present(ar->ab, rxcb->rx_desc)) - hdr = (struct ieee80211_hdr *)ath11k_dp_rx_h_80211_hdr(rxcb->rx_desc); + hdr = (struct ieee80211_hdr *)ath11k_dp_rx_h_80211_hdr(ar->ab, rxcb->rx_desc); - first_hdr = ath11k_dp_rx_h_80211_hdr(rx_desc); - decap = ath11k_dp_rx_h_msdu_start_decap_type(rx_desc); + first_hdr = ath11k_dp_rx_h_80211_hdr(ar->ab, rx_desc); + decap = ath11k_dp_rx_h_msdu_start_decap_type(ar->ab, rx_desc); + struct rx_attention *rx_attention; - err_bitmap = ath11k_dp_rx_h_attn_mpdu_err(rx_desc); + rx_attention = ath11k_dp_rx_get_attention(ar->ab, rx_desc); + err_bitmap = ath11k_dp_rx_h_attn_mpdu_err(rx_attention); - is_decrypted = ath11k_dp_rx_h_attn_is_decrypted(rx_desc); + is_decrypted = ath11k_dp_rx_h_attn_is_decrypted(rx_attention); - ath11k_dp_rx_h_csum_offload(msdu); + ath11k_dp_rx_h_csum_offload(ar, msdu); - pkt_type = ath11k_dp_rx_h_msdu_start_pkt_type(rx_desc); - bw = ath11k_dp_rx_h_msdu_start_rx_bw(rx_desc); - rate_mcs = ath11k_dp_rx_h_msdu_start_rate_mcs(rx_desc); - nss = ath11k_dp_rx_h_msdu_start_nss(rx_desc); - sgi = ath11k_dp_rx_h_msdu_start_sgi(rx_desc); + pkt_type = ath11k_dp_rx_h_msdu_start_pkt_type(ar->ab, rx_desc); + bw = ath11k_dp_rx_h_msdu_start_rx_bw(ar->ab, rx_desc); + rate_mcs = ath11k_dp_rx_h_msdu_start_rate_mcs(ar->ab, rx_desc); + nss = ath11k_dp_rx_h_msdu_start_nss(ar->ab, rx_desc); + sgi = ath11k_dp_rx_h_msdu_start_sgi(ar->ab, rx_desc); - u32 center_freq; + u32 center_freq, meta_data; 
- channel_num = ath11k_dp_rx_h_msdu_start_freq(rx_desc); - center_freq = ath11k_dp_rx_h_msdu_start_freq(rx_desc) >> 16; + meta_data = ath11k_dp_rx_h_msdu_start_freq(ar->ab, rx_desc); + channel_num = meta_data; + center_freq = meta_data >> 16; + struct ath11k_base *ab = ar->ab; + struct rx_attention *rx_attention; + u32 hal_rx_desc_sz = ar->ab->hw_params.hal_desc_sz; - ath11k_warn(ar->ab, + ath11k_warn(ab, - if (!ath11k_dp_rx_h_attn_msdu_done(lrx_desc)) { - ath11k_warn(ar->ab, "msdu_done bit in attention is not set "); + rx_attention = ath11k_dp_rx_get_attention(ab, lrx_desc); + if (!ath11k_dp_rx_h_attn_msdu_done(rx_attention)) { + ath11k_warn(ab, "msdu_done bit in attention is not set "); - msdu_len = ath11k_dp_rx_h_msdu_start_msdu_len(rx_desc); - l3_pad_bytes = ath11k_dp_rx_h_msdu_end_l3pad(lrx_desc); + msdu_len = ath11k_dp_rx_h_msdu_start_msdu_len(ab, rx_desc); + l3_pad_bytes = ath11k_dp_rx_h_msdu_end_l3pad(ab, lrx_desc); - skb_pull(msdu, hal_rx_desc_size); + skb_pull(msdu, hal_rx_desc_sz); - if ((msdu_len + hal_rx_desc_size) > dp_rx_buffer_size) { - hdr_status = ath11k_dp_rx_h_80211_hdr(rx_desc); + if ((msdu_len + hal_rx_desc_sz) > dp_rx_buffer_size) { + hdr_status = ath11k_dp_rx_h_80211_hdr(ab, rx_desc); - ath11k_warn(ar->ab, "invalid msdu len %u ", msdu_len); - ath11k_dbg_dump(ar->ab, ath11k_dbg_data, null, "", hdr_status, + ath11k_warn(ab, "invalid msdu len %u ", msdu_len); + ath11k_dbg_dump(ab, ath11k_dbg_data, null, "", hdr_status, - ath11k_dbg_dump(ar->ab, ath11k_dbg_data, null, "", rx_desc, + ath11k_dbg_dump(ab, ath11k_dbg_data, null, "", rx_desc, - skb_put(msdu, hal_rx_desc_size + l3_pad_bytes + msdu_len); - skb_pull(msdu, hal_rx_desc_size + l3_pad_bytes); + skb_put(msdu, hal_rx_desc_sz + l3_pad_bytes + msdu_len); + skb_pull(msdu, hal_rx_desc_sz + l3_pad_bytes); - ath11k_warn(ar->ab, + ath11k_warn(ab, - u32 hdr_len; + u32 hdr_len, hal_rx_desc_sz = ar->ab->hw_params.hal_desc_sz; - if (ath11k_dp_rx_h_mpdu_start_enctype(rx_desc) != 
hal_encrypt_type_tkip_mic) + if (ath11k_dp_rx_h_mpdu_start_enctype(ar->ab, rx_desc) != + hal_encrypt_type_tkip_mic) - hdr = (struct ieee80211_hdr *)(msdu->data + hal_rx_desc_size); + hdr = (struct ieee80211_hdr *)(msdu->data + hal_rx_desc_sz); - head_len = hdr_len + hal_rx_desc_size + ieee80211_tkip_iv_len; + head_len = hdr_len + hal_rx_desc_sz + ieee80211_tkip_iv_len; - skb_pull(msdu, hal_rx_desc_size); + skb_pull(msdu, hal_rx_desc_sz); + u32 hal_rx_desc_sz = ar->ab->hw_params.hal_desc_sz; - hdr = (struct ieee80211_hdr *)(msdu->data + hal_rx_desc_size); + hdr = (struct ieee80211_hdr *)(msdu->data + hal_rx_desc_sz); - memmove((void *)msdu->data + hal_rx_desc_size + crypto_len, - (void *)msdu->data + hal_rx_desc_size, hdr_len); + memmove((void *)msdu->data + hal_rx_desc_sz + crypto_len, + (void *)msdu->data + hal_rx_desc_sz, hdr_len); + struct rx_attention *rx_attention; - u32 flags; + u32 flags, hal_rx_desc_sz = ar->ab->hw_params.hal_desc_sz; - hdr = (struct ieee80211_hdr *)(skb->data + hal_rx_desc_size); + hdr = (struct ieee80211_hdr *)(skb->data + hal_rx_desc_sz); - enctype = ath11k_dp_rx_h_mpdu_start_enctype(rx_desc); - if (enctype != hal_encrypt_type_open) - is_decrypted = ath11k_dp_rx_h_attn_is_decrypted(rx_desc); + enctype = ath11k_dp_rx_h_mpdu_start_enctype(ar->ab, rx_desc); + if (enctype != hal_encrypt_type_open) { + rx_attention = ath11k_dp_rx_get_attention(ar->ab, rx_desc); + is_decrypted = ath11k_dp_rx_h_attn_is_decrypted(rx_attention); + } - skb_pull(skb, hal_rx_desc_size + + skb_pull(skb, hal_rx_desc_sz + - hdr = (struct ieee80211_hdr *)(first_frag->data + hal_rx_desc_size); + hdr = (struct ieee80211_hdr *)(first_frag->data + hal_rx_desc_sz); - u32 dst_idx, cookie; - u32 *msdu_len_offset; + u32 dst_idx, cookie, hal_rx_desc_sz; + hal_rx_desc_sz = ab->hw_params.hal_desc_sz; - defrag_skb->len - hal_rx_desc_size) | + defrag_skb->len - hal_rx_desc_sz) | - msdu_len_offset = (u32 *)&rx_desc->msdu_start; - *msdu_len_offset &= 
~(rx_msdu_start_info1_msdu_length); - *msdu_len_offset |= defrag_skb->len - hal_rx_desc_size; + ath11k_dp_rxdesc_set_msdu_len(ab, rx_desc, defrag_skb->len - hal_rx_desc_sz); -static int ath11k_dp_rx_h_cmp_frags(struct sk_buff *a, struct sk_buff *b) +static int ath11k_dp_rx_h_cmp_frags(struct ath11k *ar, + struct sk_buff *a, struct sk_buff *b) - frag1 = ath11k_dp_rx_h_mpdu_start_frag_no(a); - frag2 = ath11k_dp_rx_h_mpdu_start_frag_no(b); + frag1 = ath11k_dp_rx_h_mpdu_start_frag_no(ar->ab, a); + frag2 = ath11k_dp_rx_h_mpdu_start_frag_no(ar->ab, b); -static void ath11k_dp_rx_h_sort_frags(struct sk_buff_head *frag_list, +static void ath11k_dp_rx_h_sort_frags(struct ath11k *ar, + struct sk_buff_head *frag_list, - cmp = ath11k_dp_rx_h_cmp_frags(skb, cur_frag); + cmp = ath11k_dp_rx_h_cmp_frags(ar, skb, cur_frag); -static u64 ath11k_dp_rx_h_get_pn(struct sk_buff *skb) +static u64 ath11k_dp_rx_h_get_pn(struct ath11k *ar, struct sk_buff *skb) + u32 hal_rx_desc_sz = ar->ab->hw_params.hal_desc_sz; - hdr = (struct ieee80211_hdr *)(skb->data + hal_rx_desc_size); - ehdr = skb->data + hal_rx_desc_size + ieee80211_hdrlen(hdr->frame_control); + hdr = (struct ieee80211_hdr *)(skb->data + hal_rx_desc_sz); + ehdr = skb->data + hal_rx_desc_sz + ieee80211_hdrlen(hdr->frame_control); - encrypt_type = ath11k_dp_rx_h_mpdu_start_enctype(desc); + encrypt_type = ath11k_dp_rx_h_mpdu_start_enctype(ar->ab, desc); - last_pn = ath11k_dp_rx_h_get_pn(first_frag); + last_pn = ath11k_dp_rx_h_get_pn(ar, first_frag); - cur_pn = ath11k_dp_rx_h_get_pn(skb); + cur_pn = ath11k_dp_rx_h_get_pn(ar, skb); - peer_id = ath11k_dp_rx_h_mpdu_start_peer_id(rx_desc); - tid = ath11k_dp_rx_h_mpdu_start_tid(rx_desc); - seqno = ath11k_dp_rx_h_mpdu_start_seq_no(rx_desc); - frag_no = ath11k_dp_rx_h_mpdu_start_frag_no(msdu); - more_frags = ath11k_dp_rx_h_mpdu_start_more_frags(msdu); - - if (!ath11k_dp_rx_h_mpdu_start_seq_ctrl_valid(rx_desc) || - !ath11k_dp_rx_h_mpdu_start_fc_valid(rx_desc) || + peer_id = 
ath11k_dp_rx_h_mpdu_start_peer_id(ar->ab, rx_desc); + tid = ath11k_dp_rx_h_mpdu_start_tid(ar->ab, rx_desc); + seqno = ath11k_dp_rx_h_mpdu_start_seq_no(ar->ab, rx_desc); + frag_no = ath11k_dp_rx_h_mpdu_start_frag_no(ar->ab, msdu); + more_frags = ath11k_dp_rx_h_mpdu_start_more_frags(ar->ab, msdu); + + if (!ath11k_dp_rx_h_mpdu_start_seq_ctrl_valid(ar->ab, rx_desc) || + !ath11k_dp_rx_h_mpdu_start_fc_valid(ar->ab, rx_desc) || - ath11k_dp_rx_h_sort_frags(&rx_tid->rx_frags, msdu); + ath11k_dp_rx_h_sort_frags(ar, &rx_tid->rx_frags, msdu); + u32 hal_rx_desc_sz = ar->ab->hw_params.hal_desc_sz; - msdu_len = ath11k_dp_rx_h_msdu_start_msdu_len(rx_desc); - if ((msdu_len + hal_rx_desc_size) > dp_rx_buffer_size) { - hdr_status = ath11k_dp_rx_h_80211_hdr(rx_desc); + msdu_len = ath11k_dp_rx_h_msdu_start_msdu_len(ar->ab, rx_desc); + if ((msdu_len + hal_rx_desc_sz) > dp_rx_buffer_size) { + hdr_status = ath11k_dp_rx_h_80211_hdr(ar->ab, rx_desc); - skb_put(msdu, hal_rx_desc_size + msdu_len); + skb_put(msdu, hal_rx_desc_sz + msdu_len); - (dp_rx_buffer_size - hal_rx_desc_size)); + (dp_rx_buffer_size - ar->ab->hw_params.hal_desc_sz)); + struct rx_attention *rx_attention; + u32 hal_rx_desc_sz = ar->ab->hw_params.hal_desc_sz; - msdu_len = ath11k_dp_rx_h_msdu_start_msdu_len(desc); + msdu_len = ath11k_dp_rx_h_msdu_start_msdu_len(ar->ab, desc); - if (!rxcb->is_frag && ((msdu_len + hal_rx_desc_size) > dp_rx_buffer_size)) { + if (!rxcb->is_frag && ((msdu_len + hal_rx_desc_sz) > dp_rx_buffer_size)) { - msdu_len = msdu_len - (dp_rx_buffer_size - hal_rx_desc_size); + msdu_len = msdu_len - (dp_rx_buffer_size - hal_rx_desc_sz); - if (!ath11k_dp_rx_h_attn_msdu_done(desc)) { + rx_attention = ath11k_dp_rx_get_attention(ar->ab, desc); + if (!ath11k_dp_rx_h_attn_msdu_done(rx_attention)) { - rxcb->is_first_msdu = ath11k_dp_rx_h_msdu_end_first_msdu(desc); - rxcb->is_last_msdu = ath11k_dp_rx_h_msdu_end_last_msdu(desc); + rxcb->is_first_msdu = ath11k_dp_rx_h_msdu_end_first_msdu(ar->ab, desc); + 
rxcb->is_last_msdu = ath11k_dp_rx_h_msdu_end_last_msdu(ar->ab, desc); - skb_pull(msdu, hal_rx_desc_size); + skb_pull(msdu, hal_rx_desc_sz); - l3pad_bytes = ath11k_dp_rx_h_msdu_end_l3pad(desc); + l3pad_bytes = ath11k_dp_rx_h_msdu_end_l3pad(ar->ab, desc); - if ((hal_rx_desc_size + l3pad_bytes + msdu_len) > dp_rx_buffer_size) + if ((hal_rx_desc_sz + l3pad_bytes + msdu_len) > dp_rx_buffer_size) - skb_put(msdu, hal_rx_desc_size + l3pad_bytes + msdu_len); - skb_pull(msdu, hal_rx_desc_size + l3pad_bytes); + skb_put(msdu, hal_rx_desc_sz + l3pad_bytes + msdu_len); + skb_pull(msdu, hal_rx_desc_sz + l3pad_bytes); - rxcb->tid = ath11k_dp_rx_h_mpdu_start_tid(desc); + rxcb->tid = ath11k_dp_rx_h_mpdu_start_tid(ar->ab, desc); + u32 hal_rx_desc_sz = ar->ab->hw_params.hal_desc_sz; - rxcb->is_first_msdu = ath11k_dp_rx_h_msdu_end_first_msdu(desc); - rxcb->is_last_msdu = ath11k_dp_rx_h_msdu_end_last_msdu(desc); + rxcb->is_first_msdu = ath11k_dp_rx_h_msdu_end_first_msdu(ar->ab, desc); + rxcb->is_last_msdu = ath11k_dp_rx_h_msdu_end_last_msdu(ar->ab, desc); - l3pad_bytes = ath11k_dp_rx_h_msdu_end_l3pad(desc); - msdu_len = ath11k_dp_rx_h_msdu_start_msdu_len(desc); - skb_put(msdu, hal_rx_desc_size + l3pad_bytes + msdu_len); - skb_pull(msdu, hal_rx_desc_size + l3pad_bytes); + l3pad_bytes = ath11k_dp_rx_h_msdu_end_l3pad(ar->ab, desc); + msdu_len = ath11k_dp_rx_h_msdu_start_msdu_len(ar->ab, desc); + skb_put(msdu, hal_rx_desc_sz + l3pad_bytes + msdu_len); + skb_pull(msdu, hal_rx_desc_sz + l3pad_bytes); - l2_hdr_offset = ath11k_dp_rx_h_msdu_end_l3pad(rx_desc); + l2_hdr_offset = ath11k_dp_rx_h_msdu_end_l3pad(ar->ab, rx_desc); - if (!ath11k_dp_rxdesc_mpdu_valid(rx_desc)) { + if (!ath11k_dp_rxdesc_mpdu_valid(ar->ab, rx_desc)) { - ath11k_dp_rxdesc_get_ppduid(rx_desc); + ath11k_dp_rxdesc_get_ppduid(ar->ab, rx_desc); -static void ath11k_dp_rx_msdus_set_payload(struct sk_buff *msdu) +static void ath11k_dp_rx_msdus_set_payload(struct ath11k *ar, struct sk_buff *msdu) - rx_pkt_offset = sizeof(struct 
hal_rx_desc); - l2_hdr_offset = ath11k_dp_rx_h_msdu_end_l3pad((struct hal_rx_desc *)msdu->data); + rx_pkt_offset = ar->ab->hw_params.hal_desc_sz; + l2_hdr_offset = ath11k_dp_rx_h_msdu_end_l3pad(ar->ab, + (struct hal_rx_desc *)msdu->data); + struct ath11k_base *ab = ar->ab; - u32 decap_format, wifi_hdr_len; + u32 wifi_hdr_len; - u8 *dest; + u8 *dest, decap_format; + struct rx_attention *rx_attention; + rx_attention = ath11k_dp_rx_get_attention(ab, rx_desc); - if (ath11k_dp_rxdesc_get_mpdulen_err(rx_desc)) + if (ath11k_dp_rxdesc_get_mpdulen_err(rx_attention)) - decap_format = ath11k_dp_rxdesc_get_decap_format(rx_desc); + decap_format = ath11k_dp_rx_h_msdu_start_decap_type(ab, rx_desc); - ath11k_dp_rx_msdus_set_payload(head_msdu); + ath11k_dp_rx_msdus_set_payload(ar, head_msdu); - ath11k_dp_rx_msdus_set_payload(msdu); + ath11k_dp_rx_msdus_set_payload(ar, msdu); - hdr_desc = ath11k_dp_rxdesc_get_80211hdr(rx_desc); + hdr_desc = ath11k_dp_rxdesc_get_80211hdr(ab, rx_desc); - hdr_desc = ath11k_dp_rxdesc_get_80211hdr(rx_desc); + hdr_desc = ath11k_dp_rxdesc_get_80211hdr(ab, rx_desc); - ath11k_dp_rx_msdus_set_payload(msdu); + ath11k_dp_rx_msdus_set_payload(ar, msdu); - ath11k_dbg(ar->ab, ath11k_dbg_data, + ath11k_dbg(ab, ath11k_dbg_data, - ath11k_dbg(ar->ab, ath11k_dbg_data, + ath11k_dbg(ab, ath11k_dbg_data, - ath11k_dbg(ar->ab, ath11k_dbg_data, + ath11k_dbg(ab, ath11k_dbg_data, diff --git a/drivers/net/wireless/ath/ath11k/hal.h b/drivers/net/wireless/ath/ath11k/hal.h --- a/drivers/net/wireless/ath/ath11k/hal.h +++ b/drivers/net/wireless/ath/ath11k/hal.h -#define hal_rx_desc_size (sizeof(struct hal_rx_desc)) - diff --git a/drivers/net/wireless/ath/ath11k/hal_tx.c b/drivers/net/wireless/ath/ath11k/hal_tx.c --- a/drivers/net/wireless/ath/ath11k/hal_tx.c +++ b/drivers/net/wireless/ath/ath11k/hal_tx.c - if (ti->enable_mesh && ab->hw_params.hw_ops->tx_mesh_enable) + if (ti->enable_mesh) diff --git a/drivers/net/wireless/ath/ath11k/hw.c b/drivers/net/wireless/ath/ath11k/hw.c --- 
a/drivers/net/wireless/ath/ath11k/hw.c +++ b/drivers/net/wireless/ath/ath11k/hw.c +static bool ath11k_hw_ipq8074_rx_desc_get_first_msdu(struct hal_rx_desc *desc) +{ + return !!field_get(rx_msdu_end_info2_first_msdu, + __le32_to_cpu(desc->u.ipq8074.msdu_end.info2)); +} + +static bool ath11k_hw_ipq8074_rx_desc_get_last_msdu(struct hal_rx_desc *desc) +{ + return !!field_get(rx_msdu_end_info2_last_msdu, + __le32_to_cpu(desc->u.ipq8074.msdu_end.info2)); +} + +static u8 ath11k_hw_ipq8074_rx_desc_get_l3_pad_bytes(struct hal_rx_desc *desc) +{ + return field_get(rx_msdu_end_info2_l3_hdr_padding, + __le32_to_cpu(desc->u.ipq8074.msdu_end.info2)); +} + +static u8 *ath11k_hw_ipq8074_rx_desc_get_hdr_status(struct hal_rx_desc *desc) +{ + return desc->u.ipq8074.hdr_status; +} + +static bool ath11k_hw_ipq8074_rx_desc_encrypt_valid(struct hal_rx_desc *desc) +{ + return __le32_to_cpu(desc->u.ipq8074.mpdu_start.info1) & + rx_mpdu_start_info1_encrypt_info_valid; +} + +static u32 ath11k_hw_ipq8074_rx_desc_get_encrypt_type(struct hal_rx_desc *desc) +{ + return field_get(rx_mpdu_start_info2_enc_type, + __le32_to_cpu(desc->u.ipq8074.mpdu_start.info2)); +} + +static u8 ath11k_hw_ipq8074_rx_desc_get_decap_type(struct hal_rx_desc *desc) +{ + return field_get(rx_msdu_start_info2_decap_format, + __le32_to_cpu(desc->u.ipq8074.msdu_start.info2)); +} + +static u8 ath11k_hw_ipq8074_rx_desc_get_mesh_ctl(struct hal_rx_desc *desc) +{ + return field_get(rx_msdu_start_info2_mesh_ctrl_present, + __le32_to_cpu(desc->u.ipq8074.msdu_start.info2)); +} + +static bool ath11k_hw_ipq8074_rx_desc_get_mpdu_seq_ctl_vld(struct hal_rx_desc *desc) +{ + return !!field_get(rx_mpdu_start_info1_mpdu_seq_ctrl_valid, + __le32_to_cpu(desc->u.ipq8074.mpdu_start.info1)); +} + +static bool ath11k_hw_ipq8074_rx_desc_get_mpdu_fc_valid(struct hal_rx_desc *desc) +{ + return !!field_get(rx_mpdu_start_info1_mpdu_fctrl_valid, + __le32_to_cpu(desc->u.ipq8074.mpdu_start.info1)); +} + +static u16 
ath11k_hw_ipq8074_rx_desc_get_mpdu_start_seq_no(struct hal_rx_desc *desc) +{ + return field_get(rx_mpdu_start_info1_mpdu_seq_num, + __le32_to_cpu(desc->u.ipq8074.mpdu_start.info1)); +} + +static u16 ath11k_hw_ipq8074_rx_desc_get_msdu_len(struct hal_rx_desc *desc) +{ + return field_get(rx_msdu_start_info1_msdu_length, + __le32_to_cpu(desc->u.ipq8074.msdu_start.info1)); +} + +static u8 ath11k_hw_ipq8074_rx_desc_get_msdu_sgi(struct hal_rx_desc *desc) +{ + return field_get(rx_msdu_start_info3_sgi, + __le32_to_cpu(desc->u.ipq8074.msdu_start.info3)); +} + +static u8 ath11k_hw_ipq8074_rx_desc_get_msdu_rate_mcs(struct hal_rx_desc *desc) +{ + return field_get(rx_msdu_start_info3_rate_mcs, + __le32_to_cpu(desc->u.ipq8074.msdu_start.info3)); +} + +static u8 ath11k_hw_ipq8074_rx_desc_get_msdu_rx_bw(struct hal_rx_desc *desc) +{ + return field_get(rx_msdu_start_info3_recv_bw, + __le32_to_cpu(desc->u.ipq8074.msdu_start.info3)); +} + +static u32 ath11k_hw_ipq8074_rx_desc_get_msdu_freq(struct hal_rx_desc *desc) +{ + return __le32_to_cpu(desc->u.ipq8074.msdu_start.phy_meta_data); +} + +static u8 ath11k_hw_ipq8074_rx_desc_get_msdu_pkt_type(struct hal_rx_desc *desc) +{ + return field_get(rx_msdu_start_info3_pkt_type, + __le32_to_cpu(desc->u.ipq8074.msdu_start.info3)); +} + +static u8 ath11k_hw_ipq8074_rx_desc_get_msdu_nss(struct hal_rx_desc *desc) +{ + return field_get(rx_msdu_start_info3_mimo_ss_bitmap, + __le32_to_cpu(desc->u.ipq8074.msdu_start.info3)); +} + +static u8 ath11k_hw_ipq8074_rx_desc_get_mpdu_tid(struct hal_rx_desc *desc) +{ + return field_get(rx_mpdu_start_info2_tid, + __le32_to_cpu(desc->u.ipq8074.mpdu_start.info2)); +} + +static u16 ath11k_hw_ipq8074_rx_desc_get_mpdu_peer_id(struct hal_rx_desc *desc) +{ + return __le16_to_cpu(desc->u.ipq8074.mpdu_start.sw_peer_id); +} + +static void ath11k_hw_ipq8074_rx_desc_copy_attn_end(struct hal_rx_desc *fdesc, + struct hal_rx_desc *ldesc) +{ + memcpy((u8 *)&fdesc->u.ipq8074.msdu_end, (u8 *)&ldesc->u.ipq8074.msdu_end, + 
sizeof(struct rx_msdu_end_ipq8074)); + memcpy((u8 *)&fdesc->u.ipq8074.attention, (u8 *)&ldesc->u.ipq8074.attention, + sizeof(struct rx_attention)); + memcpy((u8 *)&fdesc->u.ipq8074.mpdu_end, (u8 *)&ldesc->u.ipq8074.mpdu_end, + sizeof(struct rx_mpdu_end)); +} + +static u32 ath11k_hw_ipq8074_rx_desc_get_mpdu_start_tag(struct hal_rx_desc *desc) +{ + return field_get(hal_tlv_hdr_tag, + __le32_to_cpu(desc->u.ipq8074.mpdu_start_tag)); +} + +static u32 ath11k_hw_ipq8074_rx_desc_get_mpdu_ppdu_id(struct hal_rx_desc *desc) +{ + return __le16_to_cpu(desc->u.ipq8074.mpdu_start.phy_ppdu_id); +} + +static void ath11k_hw_ipq8074_rx_desc_set_msdu_len(struct hal_rx_desc *desc, u16 len) +{ + u32 info = __le32_to_cpu(desc->u.ipq8074.msdu_start.info1); + + info &= ~rx_msdu_start_info1_msdu_length; + info |= field_prep(rx_msdu_start_info1_msdu_length, len); + + desc->u.ipq8074.msdu_start.info1 = __cpu_to_le32(info); +} + +static +struct rx_attention *ath11k_hw_ipq8074_rx_desc_get_attention(struct hal_rx_desc *desc) +{ + return &desc->u.ipq8074.attention; +} + +static u8 *ath11k_hw_ipq8074_rx_desc_get_msdu_payload(struct hal_rx_desc *desc) +{ + return &desc->u.ipq8074.msdu_payload[0]; +} + +static bool ath11k_hw_qcn9074_rx_desc_get_first_msdu(struct hal_rx_desc *desc) +{ + return !!field_get(rx_msdu_end_info4_first_msdu, + __le16_to_cpu(desc->u.qcn9074.msdu_end.info4)); +} + +static bool ath11k_hw_qcn9074_rx_desc_get_last_msdu(struct hal_rx_desc *desc) +{ + return !!field_get(rx_msdu_end_info4_last_msdu, + __le16_to_cpu(desc->u.qcn9074.msdu_end.info4)); +} + +static u8 ath11k_hw_qcn9074_rx_desc_get_l3_pad_bytes(struct hal_rx_desc *desc) +{ + return field_get(rx_msdu_end_info4_l3_hdr_padding, + __le16_to_cpu(desc->u.qcn9074.msdu_end.info4)); +} + +static u8 *ath11k_hw_qcn9074_rx_desc_get_hdr_status(struct hal_rx_desc *desc) +{ + return desc->u.qcn9074.hdr_status; +} + +static bool ath11k_hw_qcn9074_rx_desc_encrypt_valid(struct hal_rx_desc *desc) +{ + return 
__le32_to_cpu(desc->u.qcn9074.mpdu_start.info11) & + rx_mpdu_start_info11_encrypt_info_valid; +} + +static u32 ath11k_hw_qcn9074_rx_desc_get_encrypt_type(struct hal_rx_desc *desc) +{ + return field_get(rx_mpdu_start_info9_enc_type, + __le32_to_cpu(desc->u.qcn9074.mpdu_start.info9)); +} + +static u8 ath11k_hw_qcn9074_rx_desc_get_decap_type(struct hal_rx_desc *desc) +{ + return field_get(rx_msdu_start_info2_decap_format, + __le32_to_cpu(desc->u.qcn9074.msdu_start.info2)); +} + +static u8 ath11k_hw_qcn9074_rx_desc_get_mesh_ctl(struct hal_rx_desc *desc) +{ + return field_get(rx_msdu_start_info2_mesh_ctrl_present, + __le32_to_cpu(desc->u.qcn9074.msdu_start.info2)); +} + +static bool ath11k_hw_qcn9074_rx_desc_get_mpdu_seq_ctl_vld(struct hal_rx_desc *desc) +{ + return !!field_get(rx_mpdu_start_info11_mpdu_seq_ctrl_valid, + __le32_to_cpu(desc->u.qcn9074.mpdu_start.info11)); +} + +static bool ath11k_hw_qcn9074_rx_desc_get_mpdu_fc_valid(struct hal_rx_desc *desc) +{ + return !!field_get(rx_mpdu_start_info11_mpdu_fctrl_valid, + __le32_to_cpu(desc->u.qcn9074.mpdu_start.info11)); +} + +static u16 ath11k_hw_qcn9074_rx_desc_get_mpdu_start_seq_no(struct hal_rx_desc *desc) +{ + return field_get(rx_mpdu_start_info11_mpdu_seq_num, + __le32_to_cpu(desc->u.qcn9074.mpdu_start.info11)); +} + +static u16 ath11k_hw_qcn9074_rx_desc_get_msdu_len(struct hal_rx_desc *desc) +{ + return field_get(rx_msdu_start_info1_msdu_length, + __le32_to_cpu(desc->u.qcn9074.msdu_start.info1)); +} + +static u8 ath11k_hw_qcn9074_rx_desc_get_msdu_sgi(struct hal_rx_desc *desc) +{ + return field_get(rx_msdu_start_info3_sgi, + __le32_to_cpu(desc->u.qcn9074.msdu_start.info3)); +} + +static u8 ath11k_hw_qcn9074_rx_desc_get_msdu_rate_mcs(struct hal_rx_desc *desc) +{ + return field_get(rx_msdu_start_info3_rate_mcs, + __le32_to_cpu(desc->u.qcn9074.msdu_start.info3)); +} + +static u8 ath11k_hw_qcn9074_rx_desc_get_msdu_rx_bw(struct hal_rx_desc *desc) +{ + return field_get(rx_msdu_start_info3_recv_bw, + 
__le32_to_cpu(desc->u.qcn9074.msdu_start.info3)); +} + +static u32 ath11k_hw_qcn9074_rx_desc_get_msdu_freq(struct hal_rx_desc *desc) +{ + return __le32_to_cpu(desc->u.qcn9074.msdu_start.phy_meta_data); +} + +static u8 ath11k_hw_qcn9074_rx_desc_get_msdu_pkt_type(struct hal_rx_desc *desc) +{ + return field_get(rx_msdu_start_info3_pkt_type, + __le32_to_cpu(desc->u.qcn9074.msdu_start.info3)); +} + +static u8 ath11k_hw_qcn9074_rx_desc_get_msdu_nss(struct hal_rx_desc *desc) +{ + return field_get(rx_msdu_start_info3_mimo_ss_bitmap, + __le32_to_cpu(desc->u.qcn9074.msdu_start.info3)); +} + +static u8 ath11k_hw_qcn9074_rx_desc_get_mpdu_tid(struct hal_rx_desc *desc) +{ + return field_get(rx_mpdu_start_info9_tid, + __le32_to_cpu(desc->u.qcn9074.mpdu_start.info9)); +} + +static u16 ath11k_hw_qcn9074_rx_desc_get_mpdu_peer_id(struct hal_rx_desc *desc) +{ + return __le16_to_cpu(desc->u.qcn9074.mpdu_start.sw_peer_id); +} + +static void ath11k_hw_qcn9074_rx_desc_copy_attn_end(struct hal_rx_desc *fdesc, + struct hal_rx_desc *ldesc) +{ + memcpy((u8 *)&fdesc->u.qcn9074.msdu_end, (u8 *)&ldesc->u.qcn9074.msdu_end, + sizeof(struct rx_msdu_end_qcn9074)); + memcpy((u8 *)&fdesc->u.qcn9074.attention, (u8 *)&ldesc->u.qcn9074.attention, + sizeof(struct rx_attention)); + memcpy((u8 *)&fdesc->u.qcn9074.mpdu_end, (u8 *)&ldesc->u.qcn9074.mpdu_end, + sizeof(struct rx_mpdu_end)); +} + +static u32 ath11k_hw_qcn9074_rx_desc_get_mpdu_start_tag(struct hal_rx_desc *desc) +{ + return field_get(hal_tlv_hdr_tag, + __le32_to_cpu(desc->u.qcn9074.mpdu_start_tag)); +} + +static u32 ath11k_hw_qcn9074_rx_desc_get_mpdu_ppdu_id(struct hal_rx_desc *desc) +{ + return __le16_to_cpu(desc->u.qcn9074.mpdu_start.phy_ppdu_id); +} + +static void ath11k_hw_qcn9074_rx_desc_set_msdu_len(struct hal_rx_desc *desc, u16 len) +{ + u32 info = __le32_to_cpu(desc->u.qcn9074.msdu_start.info1); + + info &= ~rx_msdu_start_info1_msdu_length; + info |= field_prep(rx_msdu_start_info1_msdu_length, len); + + desc->u.qcn9074.msdu_start.info1 = 
__cpu_to_le32(info); +} + +static +struct rx_attention *ath11k_hw_qcn9074_rx_desc_get_attention(struct hal_rx_desc *desc) +{ + return &desc->u.qcn9074.attention; +} + +static u8 *ath11k_hw_qcn9074_rx_desc_get_msdu_payload(struct hal_rx_desc *desc) +{ + return &desc->u.qcn9074.msdu_payload[0]; +} + + .rx_desc_get_first_msdu = ath11k_hw_ipq8074_rx_desc_get_first_msdu, + .rx_desc_get_last_msdu = ath11k_hw_ipq8074_rx_desc_get_last_msdu, + .rx_desc_get_l3_pad_bytes = ath11k_hw_ipq8074_rx_desc_get_l3_pad_bytes, + .rx_desc_get_hdr_status = ath11k_hw_ipq8074_rx_desc_get_hdr_status, + .rx_desc_encrypt_valid = ath11k_hw_ipq8074_rx_desc_encrypt_valid, + .rx_desc_get_encrypt_type = ath11k_hw_ipq8074_rx_desc_get_encrypt_type, + .rx_desc_get_decap_type = ath11k_hw_ipq8074_rx_desc_get_decap_type, + .rx_desc_get_mesh_ctl = ath11k_hw_ipq8074_rx_desc_get_mesh_ctl, + .rx_desc_get_mpdu_seq_ctl_vld = ath11k_hw_ipq8074_rx_desc_get_mpdu_seq_ctl_vld, + .rx_desc_get_mpdu_fc_valid = ath11k_hw_ipq8074_rx_desc_get_mpdu_fc_valid, + .rx_desc_get_mpdu_start_seq_no = ath11k_hw_ipq8074_rx_desc_get_mpdu_start_seq_no, + .rx_desc_get_msdu_len = ath11k_hw_ipq8074_rx_desc_get_msdu_len, + .rx_desc_get_msdu_sgi = ath11k_hw_ipq8074_rx_desc_get_msdu_sgi, + .rx_desc_get_msdu_rate_mcs = ath11k_hw_ipq8074_rx_desc_get_msdu_rate_mcs, + .rx_desc_get_msdu_rx_bw = ath11k_hw_ipq8074_rx_desc_get_msdu_rx_bw, + .rx_desc_get_msdu_freq = ath11k_hw_ipq8074_rx_desc_get_msdu_freq, + .rx_desc_get_msdu_pkt_type = ath11k_hw_ipq8074_rx_desc_get_msdu_pkt_type, + .rx_desc_get_msdu_nss = ath11k_hw_ipq8074_rx_desc_get_msdu_nss, + .rx_desc_get_mpdu_tid = ath11k_hw_ipq8074_rx_desc_get_mpdu_tid, + .rx_desc_get_mpdu_peer_id = ath11k_hw_ipq8074_rx_desc_get_mpdu_peer_id, + .rx_desc_copy_attn_end_tlv = ath11k_hw_ipq8074_rx_desc_copy_attn_end, + .rx_desc_get_mpdu_start_tag = ath11k_hw_ipq8074_rx_desc_get_mpdu_start_tag, + .rx_desc_get_mpdu_ppdu_id = ath11k_hw_ipq8074_rx_desc_get_mpdu_ppdu_id, + .rx_desc_set_msdu_len = 
ath11k_hw_ipq8074_rx_desc_set_msdu_len, + .rx_desc_get_attention = ath11k_hw_ipq8074_rx_desc_get_attention, + .rx_desc_get_msdu_payload = ath11k_hw_ipq8074_rx_desc_get_msdu_payload, + .rx_desc_get_first_msdu = ath11k_hw_ipq8074_rx_desc_get_first_msdu, + .rx_desc_get_last_msdu = ath11k_hw_ipq8074_rx_desc_get_last_msdu, + .rx_desc_get_l3_pad_bytes = ath11k_hw_ipq8074_rx_desc_get_l3_pad_bytes, + .rx_desc_get_hdr_status = ath11k_hw_ipq8074_rx_desc_get_hdr_status, + .rx_desc_encrypt_valid = ath11k_hw_ipq8074_rx_desc_encrypt_valid, + .rx_desc_get_encrypt_type = ath11k_hw_ipq8074_rx_desc_get_encrypt_type, + .rx_desc_get_decap_type = ath11k_hw_ipq8074_rx_desc_get_decap_type, + .rx_desc_get_mesh_ctl = ath11k_hw_ipq8074_rx_desc_get_mesh_ctl, + .rx_desc_get_mpdu_seq_ctl_vld = ath11k_hw_ipq8074_rx_desc_get_mpdu_seq_ctl_vld, + .rx_desc_get_mpdu_fc_valid = ath11k_hw_ipq8074_rx_desc_get_mpdu_fc_valid, + .rx_desc_get_mpdu_start_seq_no = ath11k_hw_ipq8074_rx_desc_get_mpdu_start_seq_no, + .rx_desc_get_msdu_len = ath11k_hw_ipq8074_rx_desc_get_msdu_len, + .rx_desc_get_msdu_sgi = ath11k_hw_ipq8074_rx_desc_get_msdu_sgi, + .rx_desc_get_msdu_rate_mcs = ath11k_hw_ipq8074_rx_desc_get_msdu_rate_mcs, + .rx_desc_get_msdu_rx_bw = ath11k_hw_ipq8074_rx_desc_get_msdu_rx_bw, + .rx_desc_get_msdu_freq = ath11k_hw_ipq8074_rx_desc_get_msdu_freq, + .rx_desc_get_msdu_pkt_type = ath11k_hw_ipq8074_rx_desc_get_msdu_pkt_type, + .rx_desc_get_msdu_nss = ath11k_hw_ipq8074_rx_desc_get_msdu_nss, + .rx_desc_get_mpdu_tid = ath11k_hw_ipq8074_rx_desc_get_mpdu_tid, + .rx_desc_get_mpdu_peer_id = ath11k_hw_ipq8074_rx_desc_get_mpdu_peer_id, + .rx_desc_copy_attn_end_tlv = ath11k_hw_ipq8074_rx_desc_copy_attn_end, + .rx_desc_get_mpdu_start_tag = ath11k_hw_ipq8074_rx_desc_get_mpdu_start_tag, + .rx_desc_get_mpdu_ppdu_id = ath11k_hw_ipq8074_rx_desc_get_mpdu_ppdu_id, + .rx_desc_set_msdu_len = ath11k_hw_ipq8074_rx_desc_set_msdu_len, + .rx_desc_get_attention = ath11k_hw_ipq8074_rx_desc_get_attention, + .rx_desc_get_msdu_payload = 
ath11k_hw_ipq8074_rx_desc_get_msdu_payload, + .rx_desc_get_first_msdu = ath11k_hw_ipq8074_rx_desc_get_first_msdu, + .rx_desc_get_last_msdu = ath11k_hw_ipq8074_rx_desc_get_last_msdu, + .rx_desc_get_l3_pad_bytes = ath11k_hw_ipq8074_rx_desc_get_l3_pad_bytes, + .rx_desc_get_hdr_status = ath11k_hw_ipq8074_rx_desc_get_hdr_status, + .rx_desc_encrypt_valid = ath11k_hw_ipq8074_rx_desc_encrypt_valid, + .rx_desc_get_encrypt_type = ath11k_hw_ipq8074_rx_desc_get_encrypt_type, + .rx_desc_get_decap_type = ath11k_hw_ipq8074_rx_desc_get_decap_type, + .rx_desc_get_mesh_ctl = ath11k_hw_ipq8074_rx_desc_get_mesh_ctl, + .rx_desc_get_mpdu_seq_ctl_vld = ath11k_hw_ipq8074_rx_desc_get_mpdu_seq_ctl_vld, + .rx_desc_get_mpdu_fc_valid = ath11k_hw_ipq8074_rx_desc_get_mpdu_fc_valid, + .rx_desc_get_mpdu_start_seq_no = ath11k_hw_ipq8074_rx_desc_get_mpdu_start_seq_no, + .rx_desc_get_msdu_len = ath11k_hw_ipq8074_rx_desc_get_msdu_len, + .rx_desc_get_msdu_sgi = ath11k_hw_ipq8074_rx_desc_get_msdu_sgi, + .rx_desc_get_msdu_rate_mcs = ath11k_hw_ipq8074_rx_desc_get_msdu_rate_mcs, + .rx_desc_get_msdu_rx_bw = ath11k_hw_ipq8074_rx_desc_get_msdu_rx_bw, + .rx_desc_get_msdu_freq = ath11k_hw_ipq8074_rx_desc_get_msdu_freq, + .rx_desc_get_msdu_pkt_type = ath11k_hw_ipq8074_rx_desc_get_msdu_pkt_type, + .rx_desc_get_msdu_nss = ath11k_hw_ipq8074_rx_desc_get_msdu_nss, + .rx_desc_get_mpdu_tid = ath11k_hw_ipq8074_rx_desc_get_mpdu_tid, + .rx_desc_get_mpdu_peer_id = ath11k_hw_ipq8074_rx_desc_get_mpdu_peer_id, + .rx_desc_copy_attn_end_tlv = ath11k_hw_ipq8074_rx_desc_copy_attn_end, + .rx_desc_get_mpdu_start_tag = ath11k_hw_ipq8074_rx_desc_get_mpdu_start_tag, + .rx_desc_get_mpdu_ppdu_id = ath11k_hw_ipq8074_rx_desc_get_mpdu_ppdu_id, + .rx_desc_set_msdu_len = ath11k_hw_ipq8074_rx_desc_set_msdu_len, + .rx_desc_get_attention = ath11k_hw_ipq8074_rx_desc_get_attention, + .rx_desc_get_msdu_payload = ath11k_hw_ipq8074_rx_desc_get_msdu_payload, + .rx_desc_get_first_msdu = ath11k_hw_qcn9074_rx_desc_get_first_msdu, + 
.rx_desc_get_last_msdu = ath11k_hw_qcn9074_rx_desc_get_last_msdu, + .rx_desc_get_l3_pad_bytes = ath11k_hw_qcn9074_rx_desc_get_l3_pad_bytes, + .rx_desc_get_hdr_status = ath11k_hw_qcn9074_rx_desc_get_hdr_status, + .rx_desc_encrypt_valid = ath11k_hw_qcn9074_rx_desc_encrypt_valid, + .rx_desc_get_encrypt_type = ath11k_hw_qcn9074_rx_desc_get_encrypt_type, + .rx_desc_get_decap_type = ath11k_hw_qcn9074_rx_desc_get_decap_type, + .rx_desc_get_mesh_ctl = ath11k_hw_qcn9074_rx_desc_get_mesh_ctl, + .rx_desc_get_mpdu_seq_ctl_vld = ath11k_hw_qcn9074_rx_desc_get_mpdu_seq_ctl_vld, + .rx_desc_get_mpdu_fc_valid = ath11k_hw_qcn9074_rx_desc_get_mpdu_fc_valid, + .rx_desc_get_mpdu_start_seq_no = ath11k_hw_qcn9074_rx_desc_get_mpdu_start_seq_no, + .rx_desc_get_msdu_len = ath11k_hw_qcn9074_rx_desc_get_msdu_len, + .rx_desc_get_msdu_sgi = ath11k_hw_qcn9074_rx_desc_get_msdu_sgi, + .rx_desc_get_msdu_rate_mcs = ath11k_hw_qcn9074_rx_desc_get_msdu_rate_mcs, + .rx_desc_get_msdu_rx_bw = ath11k_hw_qcn9074_rx_desc_get_msdu_rx_bw, + .rx_desc_get_msdu_freq = ath11k_hw_qcn9074_rx_desc_get_msdu_freq, + .rx_desc_get_msdu_pkt_type = ath11k_hw_qcn9074_rx_desc_get_msdu_pkt_type, + .rx_desc_get_msdu_nss = ath11k_hw_qcn9074_rx_desc_get_msdu_nss, + .rx_desc_get_mpdu_tid = ath11k_hw_qcn9074_rx_desc_get_mpdu_tid, + .rx_desc_get_mpdu_peer_id = ath11k_hw_qcn9074_rx_desc_get_mpdu_peer_id, + .rx_desc_copy_attn_end_tlv = ath11k_hw_qcn9074_rx_desc_copy_attn_end, + .rx_desc_get_mpdu_start_tag = ath11k_hw_qcn9074_rx_desc_get_mpdu_start_tag, + .rx_desc_get_mpdu_ppdu_id = ath11k_hw_qcn9074_rx_desc_get_mpdu_ppdu_id, + .rx_desc_set_msdu_len = ath11k_hw_qcn9074_rx_desc_set_msdu_len, + .rx_desc_get_attention = ath11k_hw_qcn9074_rx_desc_get_attention, + .rx_desc_get_msdu_payload = ath11k_hw_qcn9074_rx_desc_get_msdu_payload, diff --git a/drivers/net/wireless/ath/ath11k/hw.h b/drivers/net/wireless/ath/ath11k/hw.h --- a/drivers/net/wireless/ath/ath11k/hw.h +++ b/drivers/net/wireless/ath/ath11k/hw.h +struct hal_rx_desc; + u32 
hal_desc_sz; + bool (*rx_desc_get_first_msdu)(struct hal_rx_desc *desc); + bool (*rx_desc_get_last_msdu)(struct hal_rx_desc *desc); + u8 (*rx_desc_get_l3_pad_bytes)(struct hal_rx_desc *desc); + u8 *(*rx_desc_get_hdr_status)(struct hal_rx_desc *desc); + bool (*rx_desc_encrypt_valid)(struct hal_rx_desc *desc); + u32 (*rx_desc_get_encrypt_type)(struct hal_rx_desc *desc); + u8 (*rx_desc_get_decap_type)(struct hal_rx_desc *desc); + u8 (*rx_desc_get_mesh_ctl)(struct hal_rx_desc *desc); + bool (*rx_desc_get_mpdu_seq_ctl_vld)(struct hal_rx_desc *desc); + bool (*rx_desc_get_mpdu_fc_valid)(struct hal_rx_desc *desc); + u16 (*rx_desc_get_mpdu_start_seq_no)(struct hal_rx_desc *desc); + u16 (*rx_desc_get_msdu_len)(struct hal_rx_desc *desc); + u8 (*rx_desc_get_msdu_sgi)(struct hal_rx_desc *desc); + u8 (*rx_desc_get_msdu_rate_mcs)(struct hal_rx_desc *desc); + u8 (*rx_desc_get_msdu_rx_bw)(struct hal_rx_desc *desc); + u32 (*rx_desc_get_msdu_freq)(struct hal_rx_desc *desc); + u8 (*rx_desc_get_msdu_pkt_type)(struct hal_rx_desc *desc); + u8 (*rx_desc_get_msdu_nss)(struct hal_rx_desc *desc); + u8 (*rx_desc_get_mpdu_tid)(struct hal_rx_desc *desc); + u16 (*rx_desc_get_mpdu_peer_id)(struct hal_rx_desc *desc); + void (*rx_desc_copy_attn_end_tlv)(struct hal_rx_desc *fdesc, + struct hal_rx_desc *ldesc); + u32 (*rx_desc_get_mpdu_start_tag)(struct hal_rx_desc *desc); + u32 (*rx_desc_get_mpdu_ppdu_id)(struct hal_rx_desc *desc); + void (*rx_desc_set_msdu_len)(struct hal_rx_desc *desc, u16 len); + struct rx_attention *(*rx_desc_get_attention)(struct hal_rx_desc *desc); + u8 *(*rx_desc_get_msdu_payload)(struct hal_rx_desc *desc); diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c --- a/drivers/net/wireless/ath/ath11k/pci.c +++ b/drivers/net/wireless/ath/ath11k/pci.c + diff --git a/drivers/net/wireless/ath/ath11k/rx_desc.h b/drivers/net/wireless/ath/ath11k/rx_desc.h --- a/drivers/net/wireless/ath/ath11k/rx_desc.h +++ b/drivers/net/wireless/ath/ath11k/rx_desc.h 
-struct rx_mpdu_start { +struct rx_mpdu_start_ipq8074 { +#define rx_mpdu_start_info7_reo_dest_ind genmask(4, 0) +#define rx_mpdu_start_info7_lmac_peer_id_msb genmask(6, 5) +#define rx_mpdu_start_info7_flow_id_toeplitz bit(7) +#define rx_mpdu_start_info7_pkt_sel_fp_ucast_data bit(8) +#define rx_mpdu_start_info7_pkt_sel_fp_mcast_data bit(9) +#define rx_mpdu_start_info7_pkt_sel_fp_ctrl_bar bit(10) +#define rx_mpdu_start_info7_rxdma0_src_ring_sel genmask(12, 11) +#define rx_mpdu_start_info7_rxdma0_dst_ring_sel genmask(14, 13) + +#define rx_mpdu_start_info8_reo_queue_desc_hi genmask(7, 0) +#define rx_mpdu_start_info8_recv_queue_num genmask(23, 8) +#define rx_mpdu_start_info8_pre_delim_err_warn bit(24) +#define rx_mpdu_start_info8_first_delim_err bit(25) + +#define rx_mpdu_start_info9_epd_en bit(0) +#define rx_mpdu_start_info9_all_frame_encpd bit(1) +#define rx_mpdu_start_info9_enc_type genmask(5, 2) +#define rx_mpdu_start_info9_var_wep_key_width genmask(7, 6) +#define rx_mpdu_start_info9_mesh_sta genmask(9, 8) +#define rx_mpdu_start_info9_bssid_hit bit(10) +#define rx_mpdu_start_info9_bssid_num genmask(14, 11) +#define rx_mpdu_start_info9_tid genmask(18, 15) + +#define rx_mpdu_start_info10_rxpcu_mpdu_fltr genmask(1, 0) +#define rx_mpdu_start_info10_sw_frame_grp_id genmask(8, 2) +#define rx_mpdu_start_info10_ndp_frame bit(9) +#define rx_mpdu_start_info10_phy_err bit(10) +#define rx_mpdu_start_info10_phy_err_mpdu_hdr bit(11) +#define rx_mpdu_start_info10_proto_ver_err bit(12) +#define rx_mpdu_start_info10_ast_lookup_valid bit(13) + +#define rx_mpdu_start_info11_mpdu_fctrl_valid bit(0) +#define rx_mpdu_start_info11_mpdu_dur_valid bit(1) +#define rx_mpdu_start_info11_mac_addr1_valid bit(2) +#define rx_mpdu_start_info11_mac_addr2_valid bit(3) +#define rx_mpdu_start_info11_mac_addr3_valid bit(4) +#define rx_mpdu_start_info11_mac_addr4_valid bit(5) +#define rx_mpdu_start_info11_mpdu_seq_ctrl_valid bit(6) +#define rx_mpdu_start_info11_mpdu_qos_ctrl_valid bit(7) +#define 
rx_mpdu_start_info11_mpdu_ht_ctrl_valid bit(8) +#define rx_mpdu_start_info11_encrypt_info_valid bit(9) +#define rx_mpdu_start_info11_mpdu_frag_number genmask(13, 10) +#define rx_mpdu_start_info11_more_frag_flag bit(14) +#define rx_mpdu_start_info11_from_ds bit(16) +#define rx_mpdu_start_info11_to_ds bit(17) +#define rx_mpdu_start_info11_encrypted bit(18) +#define rx_mpdu_start_info11_mpdu_retry bit(19) +#define rx_mpdu_start_info11_mpdu_seq_num genmask(31, 20) + +#define rx_mpdu_start_info12_key_id genmask(7, 0) +#define rx_mpdu_start_info12_new_peer_entry bit(8) +#define rx_mpdu_start_info12_decrypt_needed bit(9) +#define rx_mpdu_start_info12_decap_type genmask(11, 10) +#define rx_mpdu_start_info12_vlan_tag_c_padding bit(12) +#define rx_mpdu_start_info12_vlan_tag_s_padding bit(13) +#define rx_mpdu_start_info12_strip_vlan_tag_c bit(14) +#define rx_mpdu_start_info12_strip_vlan_tag_s bit(15) +#define rx_mpdu_start_info12_pre_delim_count genmask(27, 16) +#define rx_mpdu_start_info12_ampdu_flag bit(28) +#define rx_mpdu_start_info12_bar_frame bit(29) +#define rx_mpdu_start_info12_raw_mpdu bit(30) + +#define rx_mpdu_start_info13_mpdu_len genmask(13, 0) +#define rx_mpdu_start_info13_first_mpdu bit(14) +#define rx_mpdu_start_info13_mcast_bcast bit(15) +#define rx_mpdu_start_info13_ast_idx_not_found bit(16) +#define rx_mpdu_start_info13_ast_idx_timeout bit(17) +#define rx_mpdu_start_info13_power_mgmt bit(18) +#define rx_mpdu_start_info13_non_qos bit(19) +#define rx_mpdu_start_info13_null_data bit(20) +#define rx_mpdu_start_info13_mgmt_type bit(21) +#define rx_mpdu_start_info13_ctrl_type bit(22) +#define rx_mpdu_start_info13_more_data bit(23) +#define rx_mpdu_start_info13_eosp bit(24) +#define rx_mpdu_start_info13_fragment bit(25) +#define rx_mpdu_start_info13_order bit(26) +#define rx_mpdu_start_info13_uapsd_trigger bit(27) +#define rx_mpdu_start_info13_encrypt_required bit(28) +#define rx_mpdu_start_info13_directed bit(29) +#define rx_mpdu_start_info13_amsdu_present 
bit(30) + +struct rx_mpdu_start_qcn9074 { + __le32 info7; + __le32 reo_queue_desc_lo; + __le32 info8; + __le32 pn[4]; + __le32 info9; + __le32 peer_meta_data; + __le16 info10; + __le16 phy_ppdu_id; + __le16 ast_index; + __le16 sw_peer_id; + __le32 info11; + __le32 info12; + __le32 info13; + __le16 frame_ctrl; + __le16 duration; + u8 addr1[eth_alen]; + u8 addr2[eth_alen]; + u8 addr3[eth_alen]; + __le16 seq_ctrl; + u8 addr4[eth_alen]; + __le16 qos_ctrl; + __le32 ht_ctrl; +} __packed; + -struct rx_msdu_start { +struct rx_msdu_start_ipq8074 { + __le16 info0; + __le16 phy_ppdu_id; + __le32 info1; + __le32 info2; + __le32 toeplitz_hash; + __le32 flow_id_toeplitz; + __le32 info3; + __le32 ppdu_start_timestamp; + __le32 phy_meta_data; +} __packed; + +struct rx_msdu_start_qcn9074 { + __le16 vlan_ctag_c1; + __le16 vlan_stag_c1; -struct rx_msdu_end { +struct rx_msdu_end_ipq8074 { +#define rx_msdu_end_mpdu_length_info genmask(13, 0) + +#define rx_msdu_end_info2_da_offset genmask(5, 0) +#define rx_msdu_end_info2_sa_offset genmask(11, 6) +#define rx_msdu_end_info2_da_offset_valid bit(12) +#define rx_msdu_end_info2_sa_offset_valid bit(13) +#define rx_msdu_end_info2_l3_type genmask(31, 16) + +#define rx_msdu_end_info4_sa_idx_timeout bit(0) +#define rx_msdu_end_info4_da_idx_timeout bit(1) +#define rx_msdu_end_info4_msdu_limit_err bit(2) +#define rx_msdu_end_info4_flow_idx_timeout bit(3) +#define rx_msdu_end_info4_flow_idx_invalid bit(4) +#define rx_msdu_end_info4_wifi_parser_err bit(5) +#define rx_msdu_end_info4_amsdu_parser_err bit(6) +#define rx_msdu_end_info4_sa_is_valid bit(7) +#define rx_msdu_end_info4_da_is_valid bit(8) +#define rx_msdu_end_info4_da_is_mcbc bit(9) +#define rx_msdu_end_info4_l3_hdr_padding genmask(11, 10) +#define rx_msdu_end_info4_first_msdu bit(12) +#define rx_msdu_end_info4_last_msdu bit(13) + +#define rx_msdu_end_info6_aggr_count genmask(7, 0) +#define rx_msdu_end_info6_flow_aggr_contn bit(8) +#define rx_msdu_end_info6_fisa_timeout bit(9) + +struct 
rx_msdu_end_qcn9074 { + __le16 info0; + __le16 phy_ppdu_id; + __le16 ip_hdr_cksum; + __le16 mpdu_length_info; + __le32 info1; + __le32 rule_indication[2]; + __le32 info2; + __le32 ipv6_options_crc; + __le32 tcp_seq_num; + __le32 tcp_ack_num; + __le16 info3; + __le16 window_size; + __le16 tcp_udp_cksum; + __le16 info4; + __le16 sa_idx; + __le16 da_idx; + __le32 info5; + __le32 fse_metadata; + __le16 cce_metadata; + __le16 sa_sw_peer_id; + __le32 info6; + __le16 cum_l4_cksum; + __le16 cum_ip_length; +} __packed; + -struct hal_rx_desc { +struct hal_rx_desc_ipq8074 { - struct rx_msdu_end msdu_end; + struct rx_msdu_end_ipq8074 msdu_end; - struct rx_msdu_start msdu_start; + struct rx_msdu_start_ipq8074 msdu_start; - struct rx_mpdu_start mpdu_start; + struct rx_mpdu_start_ipq8074 mpdu_start; +struct hal_rx_desc_qcn9074 { + __le32 msdu_end_tag; + struct rx_msdu_end_qcn9074 msdu_end; + __le32 rx_attn_tag; + struct rx_attention attention; + __le32 msdu_start_tag; + struct rx_msdu_start_qcn9074 msdu_start; + u8 rx_padding0[hal_rx_desc_padding0_bytes]; + __le32 mpdu_start_tag; + struct rx_mpdu_start_qcn9074 mpdu_start; + __le32 mpdu_end_tag; + struct rx_mpdu_end mpdu_end; + u8 rx_padding1[hal_rx_desc_padding1_bytes]; + __le32 hdr_status_tag; + __le32 phy_ppdu_id; + u8 hdr_status[hal_rx_desc_hdr_status_len]; + u8 msdu_payload[0]; +} __packed; + +struct hal_rx_desc { + union { + struct hal_rx_desc_ipq8074 ipq8074; + struct hal_rx_desc_qcn9074 qcn9074; + } u; +} __packed; +
|
Networking
|
e678fbd401b9bdca9d1bd64065abfcc87ae66b94
|
Karthikeyan Periyasamy
|
drivers
|
net
|
ath, ath11k, wireless
|
ath11k: add ce interrupt support for qcn9074
|
define the host ce configuration for qcn9074, whose maximum ce count is six. only five msi interrupts are available, so the ce_id cannot be mapped directly to the msi_data_idx. added a get_ce_msi_idx op in ath11k_hif_ops to obtain the ce msi index, which is used to initialize the ce ring.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for upcoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multishot mode; and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for qcn9074
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['ath11k ']
|
['h', 'c']
| 7
| 294
| 8
|
--- diff --git a/drivers/net/wireless/ath/ath11k/ce.c b/drivers/net/wireless/ath/ath11k/ce.c --- a/drivers/net/wireless/ath/ath11k/ce.c +++ b/drivers/net/wireless/ath/ath11k/ce.c +const struct ce_attr ath11k_host_ce_config_qcn9074[] = { + /* ce0: host->target htc control and raw streams */ + { + .flags = ce_attr_flags, + .src_nentries = 16, + .src_sz_max = 2048, + .dest_nentries = 0, + }, + + /* ce1: target->host htt + htc control */ + { + .flags = ce_attr_flags, + .src_nentries = 0, + .src_sz_max = 2048, + .dest_nentries = 512, + .recv_cb = ath11k_htc_rx_completion_handler, + }, + + /* ce2: target->host wmi */ + { + .flags = ce_attr_flags, + .src_nentries = 0, + .src_sz_max = 2048, + .dest_nentries = 32, + .recv_cb = ath11k_htc_rx_completion_handler, + }, + + /* ce3: host->target wmi (mac0) */ + { + .flags = ce_attr_flags, + .src_nentries = 32, + .src_sz_max = 2048, + .dest_nentries = 0, + }, + + /* ce4: host->target htt */ + { + .flags = ce_attr_flags | ce_attr_dis_intr, + .src_nentries = 2048, + .src_sz_max = 256, + .dest_nentries = 0, + }, + + /* ce5: target->host pktlog */ + { + .flags = ce_attr_flags, + .src_nentries = 0, + .src_sz_max = 2048, + .dest_nentries = 512, + .recv_cb = ath11k_dp_htt_htc_t2h_msg_handler, + }, +}; + - u32 msi_data_count; + u32 msi_data_count, msi_data_idx; + ath11k_get_ce_msi_idx(ab, ce_id, &msi_data_idx); - ring_params->msi_data = (ce_id % msi_data_count) + msi_data_start; + ring_params->msi_data = (msi_data_idx % msi_data_count) + msi_data_start; diff --git a/drivers/net/wireless/ath/ath11k/ce.h b/drivers/net/wireless/ath/ath11k/ce.h --- a/drivers/net/wireless/ath/ath11k/ce.h +++ b/drivers/net/wireless/ath/ath11k/ce.h +extern const struct ce_attr ath11k_host_ce_config_qcn9074[]; diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c --- a/drivers/net/wireless/ath/ath11k/core.c +++ b/drivers/net/wireless/ath/ath11k/core.c + .host_ce_config = ath11k_host_ce_config_qcn9074, + .ce_count = 6, + 
.target_ce_config = ath11k_target_ce_config_wlan_qcn9074, + .target_ce_count = 9, + .svc_to_ce_map = ath11k_target_service_to_ce_map_wlan_qcn9074, + .svc_to_ce_map_len = 18, diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h --- a/drivers/net/wireless/ath/ath11k/core.h +++ b/drivers/net/wireless/ath/ath11k/core.h +extern const struct ce_pipe_config ath11k_target_ce_config_wlan_qcn9074[]; +extern const struct service_to_pipe ath11k_target_service_to_ce_map_wlan_qcn9074[]; diff --git a/drivers/net/wireless/ath/ath11k/hif.h b/drivers/net/wireless/ath/ath11k/hif.h --- a/drivers/net/wireless/ath/ath11k/hif.h +++ b/drivers/net/wireless/ath/ath11k/hif.h + void (*get_ce_msi_idx)(struct ath11k_base *ab, u32 ce_id, u32 *msi_idx); + +static inline void ath11k_get_ce_msi_idx(struct ath11k_base *ab, u32 ce_id, + u32 *msi_data_idx) +{ + if (ab->hif.ops->get_ce_msi_idx) + ab->hif.ops->get_ce_msi_idx(ab, ce_id, msi_data_idx); + else + *msi_data_idx = ce_id; +} diff --git a/drivers/net/wireless/ath/ath11k/hw.c b/drivers/net/wireless/ath/ath11k/hw.c --- a/drivers/net/wireless/ath/ath11k/hw.c +++ b/drivers/net/wireless/ath/ath11k/hw.c +/* target firmware's copy engine configuration. 
*/ +const struct ce_pipe_config ath11k_target_ce_config_wlan_qcn9074[] = { + /* ce0: host->target htc control and raw streams */ + { + .pipenum = __cpu_to_le32(0), + .pipedir = __cpu_to_le32(pipedir_out), + .nentries = __cpu_to_le32(32), + .nbytes_max = __cpu_to_le32(2048), + .flags = __cpu_to_le32(ce_attr_flags), + .reserved = __cpu_to_le32(0), + }, + + /* ce1: target->host htt + htc control */ + { + .pipenum = __cpu_to_le32(1), + .pipedir = __cpu_to_le32(pipedir_in), + .nentries = __cpu_to_le32(32), + .nbytes_max = __cpu_to_le32(2048), + .flags = __cpu_to_le32(ce_attr_flags), + .reserved = __cpu_to_le32(0), + }, + + /* ce2: target->host wmi */ + { + .pipenum = __cpu_to_le32(2), + .pipedir = __cpu_to_le32(pipedir_in), + .nentries = __cpu_to_le32(32), + .nbytes_max = __cpu_to_le32(2048), + .flags = __cpu_to_le32(ce_attr_flags), + .reserved = __cpu_to_le32(0), + }, + + /* ce3: host->target wmi */ + { + .pipenum = __cpu_to_le32(3), + .pipedir = __cpu_to_le32(pipedir_out), + .nentries = __cpu_to_le32(32), + .nbytes_max = __cpu_to_le32(2048), + .flags = __cpu_to_le32(ce_attr_flags), + .reserved = __cpu_to_le32(0), + }, + + /* ce4: host->target htt */ + { + .pipenum = __cpu_to_le32(4), + .pipedir = __cpu_to_le32(pipedir_out), + .nentries = __cpu_to_le32(256), + .nbytes_max = __cpu_to_le32(256), + .flags = __cpu_to_le32(ce_attr_flags | ce_attr_dis_intr), + .reserved = __cpu_to_le32(0), + }, + + /* ce5: target->host pktlog */ + { + .pipenum = __cpu_to_le32(5), + .pipedir = __cpu_to_le32(pipedir_in), + .nentries = __cpu_to_le32(32), + .nbytes_max = __cpu_to_le32(2048), + .flags = __cpu_to_le32(ce_attr_flags), + .reserved = __cpu_to_le32(0), + }, + + /* ce6: reserved for target autonomous hif_memcpy */ + { + .pipenum = __cpu_to_le32(6), + .pipedir = __cpu_to_le32(pipedir_inout), + .nentries = __cpu_to_le32(32), + .nbytes_max = __cpu_to_le32(16384), + .flags = __cpu_to_le32(ce_attr_flags), + .reserved = __cpu_to_le32(0), + }, + + /* ce7 used only by host */ + { + .pipenum = 
__cpu_to_le32(7), + .pipedir = __cpu_to_le32(pipedir_inout_h2h), + .nentries = __cpu_to_le32(0), + .nbytes_max = __cpu_to_le32(0), + .flags = __cpu_to_le32(ce_attr_flags | ce_attr_dis_intr), + .reserved = __cpu_to_le32(0), + }, + + /* ce8 target->host used only by ipa */ + { + .pipenum = __cpu_to_le32(8), + .pipedir = __cpu_to_le32(pipedir_inout), + .nentries = __cpu_to_le32(32), + .nbytes_max = __cpu_to_le32(16384), + .flags = __cpu_to_le32(ce_attr_flags), + .reserved = __cpu_to_le32(0), + }, + /* ce 9, 10, 11 are used by mhi driver */ +}; + +/* map from service/endpoint to copy engine. + * this table is derived from the ce_pci table, above. + * it is passed to the target at startup for use by firmware. + */ +const struct service_to_pipe ath11k_target_service_to_ce_map_wlan_qcn9074[] = { + { + __cpu_to_le32(ath11k_htc_svc_id_wmi_data_vo), + __cpu_to_le32(pipedir_out), /* out = ul = host -> target */ + __cpu_to_le32(3), + }, + { + __cpu_to_le32(ath11k_htc_svc_id_wmi_data_vo), + __cpu_to_le32(pipedir_in), /* in = dl = target -> host */ + __cpu_to_le32(2), + }, + { + __cpu_to_le32(ath11k_htc_svc_id_wmi_data_bk), + __cpu_to_le32(pipedir_out), /* out = ul = host -> target */ + __cpu_to_le32(3), + }, + { + __cpu_to_le32(ath11k_htc_svc_id_wmi_data_bk), + __cpu_to_le32(pipedir_in), /* in = dl = target -> host */ + __cpu_to_le32(2), + }, + { + __cpu_to_le32(ath11k_htc_svc_id_wmi_data_be), + __cpu_to_le32(pipedir_out), /* out = ul = host -> target */ + __cpu_to_le32(3), + }, + { + __cpu_to_le32(ath11k_htc_svc_id_wmi_data_be), + __cpu_to_le32(pipedir_in), /* in = dl = target -> host */ + __cpu_to_le32(2), + }, + { + __cpu_to_le32(ath11k_htc_svc_id_wmi_data_vi), + __cpu_to_le32(pipedir_out), /* out = ul = host -> target */ + __cpu_to_le32(3), + }, + { + __cpu_to_le32(ath11k_htc_svc_id_wmi_data_vi), + __cpu_to_le32(pipedir_in), /* in = dl = target -> host */ + __cpu_to_le32(2), + }, + { + __cpu_to_le32(ath11k_htc_svc_id_wmi_control), + __cpu_to_le32(pipedir_out), /* out = ul = 
host -> target */ + __cpu_to_le32(3), + }, + { + __cpu_to_le32(ath11k_htc_svc_id_wmi_control), + __cpu_to_le32(pipedir_in), /* in = dl = target -> host */ + __cpu_to_le32(2), + }, + { + __cpu_to_le32(ath11k_htc_svc_id_rsvd_ctrl), + __cpu_to_le32(pipedir_out), /* out = ul = host -> target */ + __cpu_to_le32(0), + }, + { + __cpu_to_le32(ath11k_htc_svc_id_rsvd_ctrl), + __cpu_to_le32(pipedir_in), /* in = dl = target -> host */ + __cpu_to_le32(1), + }, + { + __cpu_to_le32(ath11k_htc_svc_id_test_raw_streams), + __cpu_to_le32(pipedir_out), /* out = ul = host -> target */ + __cpu_to_le32(0), + }, + { + __cpu_to_le32(ath11k_htc_svc_id_test_raw_streams), + __cpu_to_le32(pipedir_in), /* in = dl = target -> host */ + __cpu_to_le32(1), + }, + { + __cpu_to_le32(ath11k_htc_svc_id_htt_data_msg), + __cpu_to_le32(pipedir_out), /* out = ul = host -> target */ + __cpu_to_le32(4), + }, + { + __cpu_to_le32(ath11k_htc_svc_id_htt_data_msg), + __cpu_to_le32(pipedir_in), /* in = dl = target -> host */ + __cpu_to_le32(1), + }, + { + __cpu_to_le32(ath11k_htc_svc_id_pkt_log), + __cpu_to_le32(pipedir_in), /* in = dl = target -> host */ + __cpu_to_le32(5), + }, + + /* (additions here) */ + + { /* must be last */ + __cpu_to_le32(0), + __cpu_to_le32(0), + __cpu_to_le32(0), + }, +}; + diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c --- a/drivers/net/wireless/ath/ath11k/pci.c +++ b/drivers/net/wireless/ath/ath11k/pci.c +static void ath11k_pci_get_ce_msi_idx(struct ath11k_base *ab, u32 ce_id, + u32 *msi_idx) +{ + u32 i, msi_data_idx; + + for (i = 0, msi_data_idx = 0; i < ab->hw_params.ce_count; i++) { + if (ath11k_ce_get_attr_flags(ab, i) & ce_attr_dis_intr) + continue; + + if (ce_id == i) + break; + + msi_data_idx++; + } + *msi_idx = msi_data_idx; +} + - u32 msi_data_count; + u32 msi_data_count, msi_data_idx; - for (i = 0; i < ab->hw_params.ce_count; i++) { - msi_data = (i % msi_data_count) + msi_irq_start; - irq = ath11k_pci_get_msi_irq(ab->dev, msi_data); 
- ce_pipe = &ab->ce.ce_pipe[i]; - + for (i = 0, msi_data_idx = 0; i < ab->hw_params.ce_count; i++) { + msi_data = (msi_data_idx % msi_data_count) + msi_irq_start; + irq = ath11k_pci_get_msi_irq(ab->dev, msi_data); + ce_pipe = &ab->ce.ce_pipe[i]; + + msi_data_idx++; + .get_ce_msi_idx = ath11k_pci_get_ce_msi_idx,
|
Networking
|
6289ac2b7182d418ee68e5c0f3f83d383d7a72ed
|
karthikeyan periyasamy
|
drivers
|
net
|
ath, ath11k, wireless
|
ath11k: add extended interrupt support for qcn9074
|
update the specific hw ring mask for qcn9074. update the timestamp information while processing dp and ce interrupts.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for qcn9074
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['ath11k ']
|
['h', 'c']
| 5
| 56
| 4
|
--- diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c --- a/drivers/net/wireless/ath/ath11k/core.c +++ b/drivers/net/wireless/ath/ath11k/core.c + .ring_mask = &ath11k_hw_ring_mask_qcn9074, diff --git a/drivers/net/wireless/ath/ath11k/dp_tx.c b/drivers/net/wireless/ath/ath11k/dp_tx.c --- a/drivers/net/wireless/ath/ath11k/dp_tx.c +++ b/drivers/net/wireless/ath/ath11k/dp_tx.c - cmd->ring_msi_addr_lo = params.msi_addr & 0xffffffff; - cmd->ring_msi_addr_hi = ((uint64_t)(params.msi_addr) >> 32) & 0xffffffff; + cmd->ring_msi_addr_lo = lower_32_bits(params.msi_addr); + cmd->ring_msi_addr_hi = upper_32_bits(params.msi_addr); diff --git a/drivers/net/wireless/ath/ath11k/hw.c b/drivers/net/wireless/ath/ath11k/hw.c --- a/drivers/net/wireless/ath/ath11k/hw.c +++ b/drivers/net/wireless/ath/ath11k/hw.c +const struct ath11k_hw_ring_mask ath11k_hw_ring_mask_qcn9074 = { + .tx = { + ath11k_tx_ring_mask_0, + ath11k_tx_ring_mask_1, + ath11k_tx_ring_mask_2, + }, + .rx_mon_status = { + 0, 0, 0, + ath11k_rx_mon_status_ring_mask_0, + ath11k_rx_mon_status_ring_mask_1, + ath11k_rx_mon_status_ring_mask_2, + }, + .rx = { + 0, 0, 0, 0, + ath11k_rx_ring_mask_0, + ath11k_rx_ring_mask_1, + ath11k_rx_ring_mask_2, + ath11k_rx_ring_mask_3, + }, + .rx_err = { + 0, 0, 0, + ath11k_rx_err_ring_mask_0, + }, + .rx_wbm_rel = { + 0, 0, 0, + ath11k_rx_wbm_rel_ring_mask_0, + }, + .reo_status = { + 0, 0, 0, + ath11k_reo_status_ring_mask_0, + }, + .rxdma2host = { + 0, 0, 0, + ath11k_rxdma2host_ring_mask_0, + }, + .host2rxdma = { + 0, 0, 0, + ath11k_host2rxdma_ring_mask_0, + }, +}; + diff --git a/drivers/net/wireless/ath/ath11k/hw.h b/drivers/net/wireless/ath/ath11k/hw.h --- a/drivers/net/wireless/ath/ath11k/hw.h +++ b/drivers/net/wireless/ath/ath11k/hw.h +extern const struct ath11k_hw_ring_mask ath11k_hw_ring_mask_qcn9074; diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c --- a/drivers/net/wireless/ath/ath11k/pci.c +++ 
b/drivers/net/wireless/ath/ath11k/pci.c + /* last interrupt received for this ce */ + ce_pipe->timestamp = jiffies; + + /* last interrupt received for this group */ + irq_grp->timestamp = jiffies; + - u32 user_base_data = 0, base_vector = 0; + u32 user_base_data = 0, base_vector = 0, base_idx; + base_idx = ath11k_pci_irq_ce0_offset + ce_count_max; - irq_grp->irqs[0] = base_vector + i; + irq_grp->irqs[0] = base_idx + i; + + irq_set_status_flags(irq, irq_disable_unlazy);
|
Networking
|
7dc67af063e3f0237c864504bb2188ada753b804
|
karthikeyan periyasamy
|
drivers
|
net
|
ath, ath11k, wireless
|
ath11k: add qcn9074 pci device support
|
qcn9074 is a pci based 11ax radio. - has 2g/5g/6g variants. - has nss 2x2 and 4x4 variants.
|
this release includes the landlock security module, which aims to make it easier to sandbox applications; support for clang control flow integrity, which aims to abort the program upon detecting certain forms of undefined behavior; support for randomising the stack address offset in each syscall; support for concurrent tlb flushing; preparatory apple m1 support; support for incoming amd and intel graphics chips; bpf support for calling kernel functions directly; a virtio sound driver for improved sound experience on virtualized guests; io_uring support for multi shot mode and a misc cgroup for miscellaneous resources. as always, there are many other features, new drivers, improvements and fixes.
|
add support for qcn9074
|
['core (various)', 'file systems', 'memory management', 'block layer', 'tracing, perf and bpf', 'virtualization', 'security', 'networking', 'architectures arm x86 mips powerpc riscv s390 ia64 xtensa']
|
['graphics', 'power management', 'storage', 'drivers in the staging area', 'networking', 'audio', 'tablets, touch screens, keyboards, mouses', 'tv tuners, webcams, video capturers', 'universal serial bus', 'serial peripheral interface (spi)', 'watchdog', 'serial', 'cpu frequency scaling', 'voltage, current regulators, power capping, power supply', 'real time clock (rtc)', 'pin controllers (pinctrl)', 'multi media card (mmc)', 'memory technology devices (mtd)', 'industrial i/o (iio)', 'multi function devices (mfd)', 'pulse-width modulation (pwm)', 'inter-integrated circuit (i2c + i3c)', 'hardware monitoring (hwmon)', 'general purpose i/o (gpio)', 'leds', 'dma engines', 'cryptography hardware acceleration', 'pci', 'remote processors', 'clock', 'phy ("physical layer" framework)', 'various']
|
['ath11k ']
|
['c']
| 1
| 17
| 1
|
- has 2g/5g/6g variants. - has nss 2x2 and 4x4 variants. --- diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c --- a/drivers/net/wireless/ath/ath11k/pci.c +++ b/drivers/net/wireless/ath/ath11k/pci.c +#define qcn9074_device_id 0x1104 + /* todo: add qcn9074_device_id) once firmware issues are resolved */ + { + .total_vectors = 16, + .total_users = 3, + .users = (struct ath11k_msi_user[]) { + { .name = "mhi", .num_vectors = 3, .base_vector = 0 }, + { .name = "ce", .num_vectors = 5, .base_vector = 3 }, + { .name = "dp", .num_vectors = 8, .base_vector = 8 }, + }, + }, + ab_pci->msi_config = &ath11k_msi_config[0]; + break; + case qcn9074_device_id: + ab_pci->msi_config = &ath11k_msi_config[1]; + ab->bus_params.static_window_map = true; + ab->hw_rev = ath11k_hw_qcn9074_hw10; - ab_pci->msi_config = &ath11k_msi_config[0];
|
Networking
|
4e80946197a83a6115e308334618449b77696d6a
|
anilkumar kolli
|
drivers
|
net
|
ath, ath11k, wireless
|