idx int64 0 25.4k | project stringclasses 707 values | project_url stringclasses 735 values | filepath stringlengths 4 100 | commit_id stringlengths 7 40 | commit_message stringlengths 0 18.3k ⌀ | is_vulnerable bool 2 classes | hash stringlengths 32 32 | func_name stringlengths 3 112 | func_body stringlengths 23 235k | changed_lines stringlengths 2 27.6k | changed_statements stringlengths 2 161k | cve_list listlengths 1 19 | cwe_list listlengths 1 6 | fixed_func_idx int64 1 25.4k ⌀ | context dict |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | linux | https://github.com/torvalds/linux | drivers/media/usb/dvb-usb-v2/dvb_usb_core.c | 005145378c9ad7575a01b6ce1ba118fb427f583a | [media] dvb-usb-v2: avoid use-after-free
I ran into a stack frame size warning because of the on-stack copy of
the USB device structure:
drivers/media/usb/dvb-usb-v2/dvb_usb_core.c: In function 'dvb_usbv2_disconnect':
drivers/media/usb/dvb-usb-v2/dvb_usb_core.c:1029:1: error: the frame size of 1104 bytes is larger than 1024 bytes [-Werror=frame-larger-than=]
Copying a device structure like this is wrong for a number of reasons
beyond the possible stack overflow. One of them is that the
dev_info() call will print the name of the device later, but AFAICT
we have only copied a pointer to the name earlier and the actual name
has been freed by the time it gets printed.
This removes the on-stack copy of the device and instead copies the
device name using kstrdup(). I'm ignoring the possible failure here
as both printk() and kfree() are able to deal with NULL pointers.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com> | true | 5eed13a1f22c5a961ab0b4dcab91a1a7 | dvb_usbv2_disconnect | void dvb_usbv2_disconnect(struct usb_interface *intf)
{
struct dvb_usb_device *d = usb_get_intfdata(intf);
const char *name = d->name;
struct device dev = d->udev->dev;
dev_dbg(&d->udev->dev, "%s: bInterfaceNumber=%d\n", __func__,
intf->cur_altsetting->desc.bInterfaceNumber);
if (d->props->exit)
d->props->exit(d);
dvb_usbv2_exit(d);
dev_info(&dev, "%s: '%s' successfully deinitialized and disconnected\n",
KBUILD_MODNAME, name);
}
| [[1015, "\tconst char *name = d->name;\n"], [1016, "\tstruct device dev = d->udev->dev;\n"], [1026, "\tdev_info(&dev, \"%s: '%s' successfully deinitialized and disconnected\\n\",\n"], [1027, "\t\t\tKBUILD_MODNAME, name);\n"]] | [[1015, "const char *name = d->name;"], [1016, "struct device dev = d->udev->dev;"], [1026, "dev_info(&dev, \"%s: '%s' successfully deinitialized and disconnected\\n\",\n\t\t\tKBUILD_MODNAME, name);"]] | [
"CVE-2017-8064"
] | [
"CWE-119"
] | 1 | {
"Execution Environment": [
"CONFIG_VMAP_STACK"
],
"Explanation": null,
"External Function": [
"usb_get_intfdata",
"dev_info"
],
"Function Argument": [
"intf"
],
"Globals": [
"KBUILD_MODNAME"
],
"Type Execution Declaration": [
"struct dvb_usb_device",
"struct device"
]
} |
1 | linux | https://github.com/torvalds/linux | drivers/media/usb/dvb-usb-v2/dvb_usb_core.c | 005145378c9ad7575a01b6ce1ba118fb427f583a | [media] dvb-usb-v2: avoid use-after-free
I ran into a stack frame size warning because of the on-stack copy of
the USB device structure:
drivers/media/usb/dvb-usb-v2/dvb_usb_core.c: In function 'dvb_usbv2_disconnect':
drivers/media/usb/dvb-usb-v2/dvb_usb_core.c:1029:1: error: the frame size of 1104 bytes is larger than 1024 bytes [-Werror=frame-larger-than=]
Copying a device structure like this is wrong for a number of reasons
beyond the possible stack overflow. One of them is that the
dev_info() call will print the name of the device later, but AFAICT
we have only copied a pointer to the name earlier and the actual name
has been freed by the time it gets printed.
This removes the on-stack copy of the device and instead copies the
device name using kstrdup(). I'm ignoring the possible failure here
as both printk() and kfree() are able to deal with NULL pointers.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com> | false | 1adb6cd316c6ed00994b824a788e5c69 | dvb_usbv2_disconnect | void dvb_usbv2_disconnect(struct usb_interface *intf)
{
struct dvb_usb_device *d = usb_get_intfdata(intf);
const char *devname = kstrdup(dev_name(&d->udev->dev), GFP_KERNEL);
const char *drvname = d->name;
dev_dbg(&d->udev->dev, "%s: bInterfaceNumber=%d\n", __func__,
intf->cur_altsetting->desc.bInterfaceNumber);
if (d->props->exit)
d->props->exit(d);
dvb_usbv2_exit(d);
pr_info("%s: '%s:%s' successfully deinitialized and disconnected\n",
KBUILD_MODNAME, drvname, devname);
kfree(devname);
}
| [[1015, "\tconst char *devname = kstrdup(dev_name(&d->udev->dev), GFP_KERNEL);\n"], [1016, "\tconst char *drvname = d->name;\n"], [1026, "\tpr_info(\"%s: '%s:%s' successfully deinitialized and disconnected\\n\",\n"], [1027, "\t\tKBUILD_MODNAME, drvname, devname);\n"], [1028, "\tkfree(devname);\n"]] | [[1015, "const char *devname = kstrdup(dev_name(&d->udev->dev), GFP_KERNEL);"], [1016, "const char *drvname = d->name;"], [1026, "pr_info(\"%s: '%s:%s' successfully deinitialized and disconnected\\n\",\n\t\tKBUILD_MODNAME, drvname, devname);"], [1028, "kfree(devname);"]] | [
"CVE-2017-8064"
] | [
"CWE-119"
] | 1 | {
"Execution Environment": [
"CONFIG_VMAP_STACK"
],
"Explanation": null,
"External Function": [
"usb_get_intfdata",
"dev_info"
],
"Function Argument": [
"intf"
],
"Globals": [
"KBUILD_MODNAME"
],
"Type Execution Declaration": [
"struct dvb_usb_device",
"struct device"
]
} |
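The pair of rows above (idx 0/1) records CVE-2017-8064: the vulnerable dvb_usbv2_disconnect() kept a pointer into memory freed by dvb_usbv2_exit(), and the fix duplicates the name with kstrdup() before teardown. Below is a minimal userspace sketch of the same pattern, not kernel code; struct device_like, teardown() and the demo string are invented for illustration, with strdup()/free() standing in for kstrdup()/kfree().

```c
/* Minimal userspace sketch (NOT the kernel code): duplicate a string
 * before freeing the structure that owns it, so the final log line
 * cannot read freed memory. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct device_like {
    char *name;                       /* owned by the structure */
};

static void teardown(struct device_like *d)
{
    char *name = strdup(d->name);     /* copy first, like kstrdup() */
    free(d->name);                    /* from here on, d->name dangles */
    free(d);
    /* printk() tolerates NULL, userspace printf() does not, so guard. */
    printf("'%s' successfully deinitialized\n", name ? name : "(null)");
    free(name);                       /* free(NULL) is a no-op, like kfree() */
}

int main(void)
{
    struct device_like *d = malloc(sizeof(*d));
    if (!d)
        return 1;
    d->name = strdup("demo-device");
    if (!d->name) {
        free(d);
        return 1;
    }
    teardown(d);
    return 0;
}
```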
2 | linux | https://github.com/torvalds/linux | drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c | 02e1a114fdb71e59ee6770294166c30d437bf86a | nfp: fix use-after-free in area_cache_get()
area_cache_get() is used to distribute cache->area and set cache->id,
and if cache->id is not 0 and cache->area->kref refcount is 0, it will
release the cache->area by nfp_cpp_area_release(). area_cache_get()
sets cache->id before cpp->op->area_init() and nfp_cpp_area_acquire().
But if area_init() or nfp_cpp_area_acquire() fails, the cache->id is
already set but the refcount is not increased as expected. At this
time, calling the nfp_cpp_area_release() will cause use-after-free.
To avoid the use-after-free, set cache->id after area_init() and
nfp_cpp_area_acquire() complete successfully.
Note: This vulnerability is triggerable by providing an emulated device
equipped with a specific configuration.
BUG: KASAN: use-after-free in nfp6000_area_init (drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c:760)
Write of size 4 at addr ffff888005b7f4a0 by task swapper/0/1
Call Trace:
<TASK>
nfp6000_area_init (drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c:760)
area_cache_get.constprop.8 (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:884)
Allocated by task 1:
nfp_cpp_area_alloc_with_name (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:303)
nfp_cpp_area_cache_add (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:802)
nfp6000_init (drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c:1230)
nfp_cpp_from_operations (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:1215)
nfp_pci_probe (drivers/net/ethernet/netronome/nfp/nfp_main.c:744)
Freed by task 1:
kfree (mm/slub.c:4562)
area_cache_get.constprop.8 (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:873)
nfp_cpp_read (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:924 drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:973)
nfp_cpp_readl (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cpplib.c:48)
Signed-off-by: Jialiang Wang <wangjialiang0806@163.com>
Reviewed-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Acked-by: Simon Horman <simon.horman@corigine.com>
Link: https://lore.kernel.org/r/20220810073057.4032-1-wangjialiang0806@163.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org> | true | b09c47a4db6314a9ae2201fbee26ee43 | area_cache_get | static struct nfp_cpp_area_cache *
area_cache_get(struct nfp_cpp *cpp, u32 id,
u64 addr, unsigned long *offset, size_t length)
{
struct nfp_cpp_area_cache *cache;
int err;
/* Early exit when length == 0, which prevents
* the need for special case code below when
* checking against available cache size.
*/
if (length == 0 || id == 0)
return NULL;
/* Remap from cpp_island to cpp_target */
err = nfp_target_cpp(id, addr, &id, &addr, cpp->imb_cat_table);
if (err < 0)
return NULL;
mutex_lock(&cpp->area_cache_mutex);
if (list_empty(&cpp->area_cache_list)) {
mutex_unlock(&cpp->area_cache_mutex);
return NULL;
}
addr += *offset;
/* See if we have a match */
list_for_each_entry(cache, &cpp->area_cache_list, entry) {
if (id == cache->id &&
addr >= cache->addr &&
addr + length <= cache->addr + cache->size)
goto exit;
}
/* No matches - inspect the tail of the LRU */
cache = list_entry(cpp->area_cache_list.prev,
struct nfp_cpp_area_cache, entry);
/* Can we fit in the cache entry? */
if (round_down(addr + length - 1, cache->size) !=
round_down(addr, cache->size)) {
mutex_unlock(&cpp->area_cache_mutex);
return NULL;
}
/* If id != 0, we will need to release it */
if (cache->id) {
nfp_cpp_area_release(cache->area);
cache->id = 0;
cache->addr = 0;
}
/* Adjust the start address to be cache size aligned */
cache->id = id;
cache->addr = addr & ~(u64)(cache->size - 1);
/* Re-init to the new ID and address */
if (cpp->op->area_init) {
err = cpp->op->area_init(cache->area,
id, cache->addr, cache->size);
if (err < 0) {
mutex_unlock(&cpp->area_cache_mutex);
return NULL;
}
}
/* Attempt to acquire */
err = nfp_cpp_area_acquire(cache->area);
if (err < 0) {
mutex_unlock(&cpp->area_cache_mutex);
return NULL;
}
exit:
/* Adjust offset */
*offset = addr - cache->addr;
return cache;
}
| [[877, "\tcache->id = id;\n"]] | [[877, "cache->id = id;"]] | [
"CVE-2022-3545"
] | [
"CWE-119"
] | 3 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"nfp_cpp_area_release",
"cpp->op->area_init",
"nfp_cpp_area_acquire"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
3 | linux | https://github.com/torvalds/linux | drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c | 02e1a114fdb71e59ee6770294166c30d437bf86a | nfp: fix use-after-free in area_cache_get()
area_cache_get() is used to distribute cache->area and set cache->id,
and if cache->id is not 0 and cache->area->kref refcount is 0, it will
release the cache->area by nfp_cpp_area_release(). area_cache_get()
sets cache->id before cpp->op->area_init() and nfp_cpp_area_acquire().
But if area_init() or nfp_cpp_area_acquire() fails, the cache->id is
already set but the refcount is not increased as expected. At this
time, calling the nfp_cpp_area_release() will cause use-after-free.
To avoid the use-after-free, set cache->id after area_init() and
nfp_cpp_area_acquire() complete successfully.
Note: This vulnerability is triggerable by providing an emulated device
equipped with a specific configuration.
BUG: KASAN: use-after-free in nfp6000_area_init (drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c:760)
Write of size 4 at addr ffff888005b7f4a0 by task swapper/0/1
Call Trace:
<TASK>
nfp6000_area_init (drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c:760)
area_cache_get.constprop.8 (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:884)
Allocated by task 1:
nfp_cpp_area_alloc_with_name (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:303)
nfp_cpp_area_cache_add (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:802)
nfp6000_init (drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c:1230)
nfp_cpp_from_operations (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:1215)
nfp_pci_probe (drivers/net/ethernet/netronome/nfp/nfp_main.c:744)
Freed by task 1:
kfree (mm/slub.c:4562)
area_cache_get.constprop.8 (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:873)
nfp_cpp_read (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:924 drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:973)
nfp_cpp_readl (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cpplib.c:48)
Signed-off-by: Jialiang Wang <wangjialiang0806@163.com>
Reviewed-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Acked-by: Simon Horman <simon.horman@corigine.com>
Link: https://lore.kernel.org/r/20220810073057.4032-1-wangjialiang0806@163.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org> | false | 6eb6e5761715e630d11c438f0e0f15e5 | area_cache_get | static struct nfp_cpp_area_cache *
area_cache_get(struct nfp_cpp *cpp, u32 id,
u64 addr, unsigned long *offset, size_t length)
{
struct nfp_cpp_area_cache *cache;
int err;
/* Early exit when length == 0, which prevents
* the need for special case code below when
* checking against available cache size.
*/
if (length == 0 || id == 0)
return NULL;
/* Remap from cpp_island to cpp_target */
err = nfp_target_cpp(id, addr, &id, &addr, cpp->imb_cat_table);
if (err < 0)
return NULL;
mutex_lock(&cpp->area_cache_mutex);
if (list_empty(&cpp->area_cache_list)) {
mutex_unlock(&cpp->area_cache_mutex);
return NULL;
}
addr += *offset;
/* See if we have a match */
list_for_each_entry(cache, &cpp->area_cache_list, entry) {
if (id == cache->id &&
addr >= cache->addr &&
addr + length <= cache->addr + cache->size)
goto exit;
}
/* No matches - inspect the tail of the LRU */
cache = list_entry(cpp->area_cache_list.prev,
struct nfp_cpp_area_cache, entry);
/* Can we fit in the cache entry? */
if (round_down(addr + length - 1, cache->size) !=
round_down(addr, cache->size)) {
mutex_unlock(&cpp->area_cache_mutex);
return NULL;
}
/* If id != 0, we will need to release it */
if (cache->id) {
nfp_cpp_area_release(cache->area);
cache->id = 0;
cache->addr = 0;
}
/* Adjust the start address to be cache size aligned */
cache->addr = addr & ~(u64)(cache->size - 1);
/* Re-init to the new ID and address */
if (cpp->op->area_init) {
err = cpp->op->area_init(cache->area,
id, cache->addr, cache->size);
if (err < 0) {
mutex_unlock(&cpp->area_cache_mutex);
return NULL;
}
}
/* Attempt to acquire */
err = nfp_cpp_area_acquire(cache->area);
if (err < 0) {
mutex_unlock(&cpp->area_cache_mutex);
return NULL;
}
cache->id = id;
exit:
/* Adjust offset */
*offset = addr - cache->addr;
return cache;
}
| [[896, "\tcache->id = id;\n"], [897, "\n"]] | [[896, "cache->id = id;"], [897, "\n"]] | [
"CVE-2022-3545"
] | [
"CWE-119"
] | 3 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"nfp_cpp_area_release",
"cpp->op->area_init",
"nfp_cpp_area_acquire"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
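Rows idx 2/3 show the CVE-2022-3545 fix as a pure ordering change: cache->id is assigned only after area_init() and nfp_cpp_area_acquire() both succeed, so an error path can no longer leave an id published without a reference held. A minimal compilable sketch of that ordering follows; the helpers and struct cache_entry are invented stand-ins for the nfp ones.

```c
/* Compilable sketch of the ordering fix; area_release()/area_init()/
 * area_acquire() are invented stand-ins for the nfp_cpp_* helpers. */
#include <stdio.h>

static int refcount;
static void area_release(void)         { refcount--; }
static int  area_init(unsigned int id) { return id == 0xbad ? -1 : 0; }
static int  area_acquire(void)         { refcount++; return 0; }

struct cache_entry { unsigned int id; };  /* id == 0: nothing to release */

static int cache_rebind(struct cache_entry *c, unsigned int id)
{
    if (c->id) {              /* drop the reference held for the old id */
        area_release();
        c->id = 0;
    }
    if (area_init(id) < 0)    /* on failure c->id is still 0, so a later */
        return -1;            /* "if (c->id) area_release();" is a no-op */
    if (area_acquire() < 0)
        return -1;
    c->id = id;               /* publish the id only after acquire succeeds */
    return 0;
}

int main(void)
{
    struct cache_entry c = { 0 };
    cache_rebind(&c, 0xbad);  /* init fails; id stays 0, refcount stays 0 */
    printf("id=%u refcount=%d\n", c.id, refcount);
    cache_rebind(&c, 42);     /* succeeds; id published, reference taken */
    printf("id=%u refcount=%d\n", c.id, refcount);
    return 0;
}
```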
4 | linux | https://github.com/torvalds/linux | net/netfilter/ipvs/ip_vs_ctl.c | 04bcef2a83f40c6db24222b27a52892cba39dffb | ipvs: Add boundary check on ioctl arguments
The ipvs code has a nifty system for determining the size of ioctl command
copies; it defines an array with values into which it indexes the cmd
to find the right length.
Unfortunately, the ipvs code forgot to check if the cmd was in the
range that the array provides, allowing for an index outside of the
array, which then yields a "garbage" length, which
then gets used for copying into a stack buffer.
Fix this by adding sanity checks on these as well as the copy size.
[ horms@verge.net.au: adjusted limit to IP_VS_SO_GET_MAX ]
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Patrick McHardy <kaber@trash.net> | true | 4142c8c9b956090ae535f2e0dfa2b660 | do_ip_vs_get_ctl | static int
do_ip_vs_get_ctl(struct sock *sk, int cmd, void __user *user, int *len)
{
unsigned char arg[128];
int ret = 0;
if (!capable(CAP_NET_ADMIN))
return -EPERM;
if (*len < get_arglen[GET_CMDID(cmd)]) {
pr_err("get_ctl: len %u < %u\n",
*len, get_arglen[GET_CMDID(cmd)]);
return -EINVAL;
}
if (copy_from_user(arg, user, get_arglen[GET_CMDID(cmd)]) != 0)
return -EFAULT;
if (mutex_lock_interruptible(&__ip_vs_mutex))
return -ERESTARTSYS;
switch (cmd) {
case IP_VS_SO_GET_VERSION:
{
char buf[64];
sprintf(buf, "IP Virtual Server version %d.%d.%d (size=%d)",
NVERSION(IP_VS_VERSION_CODE), IP_VS_CONN_TAB_SIZE);
if (copy_to_user(user, buf, strlen(buf)+1) != 0) {
ret = -EFAULT;
goto out;
}
*len = strlen(buf)+1;
}
break;
case IP_VS_SO_GET_INFO:
{
struct ip_vs_getinfo info;
info.version = IP_VS_VERSION_CODE;
info.size = IP_VS_CONN_TAB_SIZE;
info.num_services = ip_vs_num_services;
if (copy_to_user(user, &info, sizeof(info)) != 0)
ret = -EFAULT;
}
break;
case IP_VS_SO_GET_SERVICES:
{
struct ip_vs_get_services *get;
int size;
get = (struct ip_vs_get_services *)arg;
size = sizeof(*get) +
sizeof(struct ip_vs_service_entry) * get->num_services;
if (*len != size) {
pr_err("length: %u != %u\n", *len, size);
ret = -EINVAL;
goto out;
}
ret = __ip_vs_get_service_entries(get, user);
}
break;
case IP_VS_SO_GET_SERVICE:
{
struct ip_vs_service_entry *entry;
struct ip_vs_service *svc;
union nf_inet_addr addr;
entry = (struct ip_vs_service_entry *)arg;
addr.ip = entry->addr;
if (entry->fwmark)
svc = __ip_vs_svc_fwm_get(AF_INET, entry->fwmark);
else
svc = __ip_vs_service_get(AF_INET, entry->protocol,
&addr, entry->port);
if (svc) {
ip_vs_copy_service(entry, svc);
if (copy_to_user(user, entry, sizeof(*entry)) != 0)
ret = -EFAULT;
ip_vs_service_put(svc);
} else
ret = -ESRCH;
}
break;
case IP_VS_SO_GET_DESTS:
{
struct ip_vs_get_dests *get;
int size;
get = (struct ip_vs_get_dests *)arg;
size = sizeof(*get) +
sizeof(struct ip_vs_dest_entry) * get->num_dests;
if (*len != size) {
pr_err("length: %u != %u\n", *len, size);
ret = -EINVAL;
goto out;
}
ret = __ip_vs_get_dest_entries(get, user);
}
break;
case IP_VS_SO_GET_TIMEOUT:
{
struct ip_vs_timeout_user t;
__ip_vs_get_timeouts(&t);
if (copy_to_user(user, &t, sizeof(t)) != 0)
ret = -EFAULT;
}
break;
case IP_VS_SO_GET_DAEMON:
{
struct ip_vs_daemon_user d[2];
memset(&d, 0, sizeof(d));
if (ip_vs_sync_state & IP_VS_STATE_MASTER) {
d[0].state = IP_VS_STATE_MASTER;
strlcpy(d[0].mcast_ifn, ip_vs_master_mcast_ifn, sizeof(d[0].mcast_ifn));
d[0].syncid = ip_vs_master_syncid;
}
if (ip_vs_sync_state & IP_VS_STATE_BACKUP) {
d[1].state = IP_VS_STATE_BACKUP;
strlcpy(d[1].mcast_ifn, ip_vs_backup_mcast_ifn, sizeof(d[1].mcast_ifn));
d[1].syncid = ip_vs_backup_syncid;
}
if (copy_to_user(user, &d, sizeof(d)) != 0)
ret = -EFAULT;
}
break;
default:
ret = -EINVAL;
}
out:
mutex_unlock(&__ip_vs_mutex);
return ret;
}
| [[2365, "\tif (copy_from_user(arg, user, get_arglen[GET_CMDID(cmd)]) != 0)\n"]] | [[2365, "if (copy_from_user(arg, user, get_arglen[GET_CMDID(cmd)]) != 0)"]] | [
"CVE-2013-4588"
] | [
"CWE-119"
] | 6 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"GET_CMDID"
],
"Function Argument": [
"cmd"
],
"Globals": [
"get_arglen"
],
"Type Execution Declaration": []
} |
5 | linux | https://github.com/torvalds/linux | net/netfilter/ipvs/ip_vs_ctl.c | 04bcef2a83f40c6db24222b27a52892cba39dffb | ipvs: Add boundary check on ioctl arguments
The ipvs code has a nifty system for determining the size of ioctl command
copies; it defines an array with values into which it indexes the cmd
to find the right length.
Unfortunately, the ipvs code forgot to check if the cmd was in the
range that the array provides, allowing for an index outside of the
array, which then yields a "garbage" length, which
then gets used for copying into a stack buffer.
Fix this by adding sanity checks on these as well as the copy size.
[ horms@verge.net.au: adjusted limit to IP_VS_SO_GET_MAX ]
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Patrick McHardy <kaber@trash.net> | false | 3952069a1f9b3d417354575f7e77abec | do_ip_vs_set_ctl | static int
do_ip_vs_set_ctl(struct sock *sk, int cmd, void __user *user, unsigned int len)
{
int ret;
unsigned char arg[MAX_ARG_LEN];
struct ip_vs_service_user *usvc_compat;
struct ip_vs_service_user_kern usvc;
struct ip_vs_service *svc;
struct ip_vs_dest_user *udest_compat;
struct ip_vs_dest_user_kern udest;
if (!capable(CAP_NET_ADMIN))
return -EPERM;
if (cmd < IP_VS_BASE_CTL || cmd > IP_VS_SO_SET_MAX)
return -EINVAL;
if (len < 0 || len > MAX_ARG_LEN)
return -EINVAL;
if (len != set_arglen[SET_CMDID(cmd)]) {
pr_err("set_ctl: len %u != %u\n",
len, set_arglen[SET_CMDID(cmd)]);
return -EINVAL;
}
if (copy_from_user(arg, user, len) != 0)
return -EFAULT;
/* increase the module use count */
ip_vs_use_count_inc();
if (mutex_lock_interruptible(&__ip_vs_mutex)) {
ret = -ERESTARTSYS;
goto out_dec;
}
if (cmd == IP_VS_SO_SET_FLUSH) {
/* Flush the virtual service */
ret = ip_vs_flush();
goto out_unlock;
} else if (cmd == IP_VS_SO_SET_TIMEOUT) {
/* Set timeout values for (tcp tcpfin udp) */
ret = ip_vs_set_timeout((struct ip_vs_timeout_user *)arg);
goto out_unlock;
} else if (cmd == IP_VS_SO_SET_STARTDAEMON) {
struct ip_vs_daemon_user *dm = (struct ip_vs_daemon_user *)arg;
ret = start_sync_thread(dm->state, dm->mcast_ifn, dm->syncid);
goto out_unlock;
} else if (cmd == IP_VS_SO_SET_STOPDAEMON) {
struct ip_vs_daemon_user *dm = (struct ip_vs_daemon_user *)arg;
ret = stop_sync_thread(dm->state);
goto out_unlock;
}
usvc_compat = (struct ip_vs_service_user *)arg;
udest_compat = (struct ip_vs_dest_user *)(usvc_compat + 1);
/* We only use the new structs internally, so copy userspace compat
* structs to extended internal versions */
ip_vs_copy_usvc_compat(&usvc, usvc_compat);
ip_vs_copy_udest_compat(&udest, udest_compat);
if (cmd == IP_VS_SO_SET_ZERO) {
/* if no service address is set, zero counters in all */
if (!usvc.fwmark && !usvc.addr.ip && !usvc.port) {
ret = ip_vs_zero_all();
goto out_unlock;
}
}
/* Check for valid protocol: TCP or UDP, even for fwmark!=0 */
if (usvc.protocol != IPPROTO_TCP && usvc.protocol != IPPROTO_UDP) {
pr_err("set_ctl: invalid protocol: %d %pI4:%d %s\n",
usvc.protocol, &usvc.addr.ip,
ntohs(usvc.port), usvc.sched_name);
ret = -EFAULT;
goto out_unlock;
}
/* Lookup the exact service by <protocol, addr, port> or fwmark */
if (usvc.fwmark == 0)
svc = __ip_vs_service_get(usvc.af, usvc.protocol,
&usvc.addr, usvc.port);
else
svc = __ip_vs_svc_fwm_get(usvc.af, usvc.fwmark);
if (cmd != IP_VS_SO_SET_ADD
&& (svc == NULL || svc->protocol != usvc.protocol)) {
ret = -ESRCH;
goto out_unlock;
}
switch (cmd) {
case IP_VS_SO_SET_ADD:
if (svc != NULL)
ret = -EEXIST;
else
ret = ip_vs_add_service(&usvc, &svc);
break;
case IP_VS_SO_SET_EDIT:
ret = ip_vs_edit_service(svc, &usvc);
break;
case IP_VS_SO_SET_DEL:
ret = ip_vs_del_service(svc);
if (!ret)
goto out_unlock;
break;
case IP_VS_SO_SET_ZERO:
ret = ip_vs_zero_service(svc);
break;
case IP_VS_SO_SET_ADDDEST:
ret = ip_vs_add_dest(svc, &udest);
break;
case IP_VS_SO_SET_EDITDEST:
ret = ip_vs_edit_dest(svc, &udest);
break;
case IP_VS_SO_SET_DELDEST:
ret = ip_vs_del_dest(svc, &udest);
break;
default:
ret = -EINVAL;
}
if (svc)
ip_vs_service_put(svc);
out_unlock:
mutex_unlock(&__ip_vs_mutex);
out_dec:
/* decrease the module use count */
ip_vs_use_count_dec();
return ret;
}
| [[2080, "\tif (cmd < IP_VS_BASE_CTL || cmd > IP_VS_SO_SET_MAX)\n"], [2081, "\t\treturn -EINVAL;\n"], [2082, "\tif (len < 0 || len > MAX_ARG_LEN)\n"], [2083, "\t\treturn -EINVAL;\n"]] | [[2080, "if (cmd < IP_VS_BASE_CTL || cmd > IP_VS_SO_SET_MAX)"], [2081, "return -EINVAL;"], [2082, "if (len < 0 || len > MAX_ARG_LEN)"], [2083, "return -EINVAL;"]] | [
"CVE-2013-4588"
] | [
"CWE-119"
] | null | {
"Execution Environment": null,
"Explanation": null,
"External Function": null,
"Function Argument": null,
"Globals": null,
"Type Execution Declaration": null
} |
6 | linux | https://github.com/torvalds/linux | net/netfilter/ipvs/ip_vs_ctl.c | 04bcef2a83f40c6db24222b27a52892cba39dffb | ipvs: Add boundary check on ioctl arguments
The ipvs code has a nifty system for determining the size of ioctl command
copies; it defines an array with values into which it indexes the cmd
to find the right length.
Unfortunately, the ipvs code forgot to check if the cmd was in the
range that the array provides, allowing for an index outside of the
array, which then yields a "garbage" length, which
then gets used for copying into a stack buffer.
Fix this by adding sanity checks on these as well as the copy size.
[ horms@verge.net.au: adjusted limit to IP_VS_SO_GET_MAX ]
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Patrick McHardy <kaber@trash.net> | false | cb91c5bff7f5d1bc520dfb0a8a207efb | do_ip_vs_get_ctl | static int
do_ip_vs_get_ctl(struct sock *sk, int cmd, void __user *user, int *len)
{
unsigned char arg[128];
int ret = 0;
unsigned int copylen;
if (!capable(CAP_NET_ADMIN))
return -EPERM;
if (cmd < IP_VS_BASE_CTL || cmd > IP_VS_SO_GET_MAX)
return -EINVAL;
if (*len < get_arglen[GET_CMDID(cmd)]) {
pr_err("get_ctl: len %u < %u\n",
*len, get_arglen[GET_CMDID(cmd)]);
return -EINVAL;
}
copylen = get_arglen[GET_CMDID(cmd)];
if (copylen > 128)
return -EINVAL;
if (copy_from_user(arg, user, copylen) != 0)
return -EFAULT;
if (mutex_lock_interruptible(&__ip_vs_mutex))
return -ERESTARTSYS;
switch (cmd) {
case IP_VS_SO_GET_VERSION:
{
char buf[64];
sprintf(buf, "IP Virtual Server version %d.%d.%d (size=%d)",
NVERSION(IP_VS_VERSION_CODE), IP_VS_CONN_TAB_SIZE);
if (copy_to_user(user, buf, strlen(buf)+1) != 0) {
ret = -EFAULT;
goto out;
}
*len = strlen(buf)+1;
}
break;
case IP_VS_SO_GET_INFO:
{
struct ip_vs_getinfo info;
info.version = IP_VS_VERSION_CODE;
info.size = IP_VS_CONN_TAB_SIZE;
info.num_services = ip_vs_num_services;
if (copy_to_user(user, &info, sizeof(info)) != 0)
ret = -EFAULT;
}
break;
case IP_VS_SO_GET_SERVICES:
{
struct ip_vs_get_services *get;
int size;
get = (struct ip_vs_get_services *)arg;
size = sizeof(*get) +
sizeof(struct ip_vs_service_entry) * get->num_services;
if (*len != size) {
pr_err("length: %u != %u\n", *len, size);
ret = -EINVAL;
goto out;
}
ret = __ip_vs_get_service_entries(get, user);
}
break;
case IP_VS_SO_GET_SERVICE:
{
struct ip_vs_service_entry *entry;
struct ip_vs_service *svc;
union nf_inet_addr addr;
entry = (struct ip_vs_service_entry *)arg;
addr.ip = entry->addr;
if (entry->fwmark)
svc = __ip_vs_svc_fwm_get(AF_INET, entry->fwmark);
else
svc = __ip_vs_service_get(AF_INET, entry->protocol,
&addr, entry->port);
if (svc) {
ip_vs_copy_service(entry, svc);
if (copy_to_user(user, entry, sizeof(*entry)) != 0)
ret = -EFAULT;
ip_vs_service_put(svc);
} else
ret = -ESRCH;
}
break;
case IP_VS_SO_GET_DESTS:
{
struct ip_vs_get_dests *get;
int size;
get = (struct ip_vs_get_dests *)arg;
size = sizeof(*get) +
sizeof(struct ip_vs_dest_entry) * get->num_dests;
if (*len != size) {
pr_err("length: %u != %u\n", *len, size);
ret = -EINVAL;
goto out;
}
ret = __ip_vs_get_dest_entries(get, user);
}
break;
case IP_VS_SO_GET_TIMEOUT:
{
struct ip_vs_timeout_user t;
__ip_vs_get_timeouts(&t);
if (copy_to_user(user, &t, sizeof(t)) != 0)
ret = -EFAULT;
}
break;
case IP_VS_SO_GET_DAEMON:
{
struct ip_vs_daemon_user d[2];
memset(&d, 0, sizeof(d));
if (ip_vs_sync_state & IP_VS_STATE_MASTER) {
d[0].state = IP_VS_STATE_MASTER;
strlcpy(d[0].mcast_ifn, ip_vs_master_mcast_ifn, sizeof(d[0].mcast_ifn));
d[0].syncid = ip_vs_master_syncid;
}
if (ip_vs_sync_state & IP_VS_STATE_BACKUP) {
d[1].state = IP_VS_STATE_BACKUP;
strlcpy(d[1].mcast_ifn, ip_vs_backup_mcast_ifn, sizeof(d[1].mcast_ifn));
d[1].syncid = ip_vs_backup_syncid;
}
if (copy_to_user(user, &d, sizeof(d)) != 0)
ret = -EFAULT;
}
break;
default:
ret = -EINVAL;
}
out:
mutex_unlock(&__ip_vs_mutex);
return ret;
}
| [[2359, "\tunsigned int copylen;\n"], [2364, "\tif (cmd < IP_VS_BASE_CTL || cmd > IP_VS_SO_GET_MAX)\n"], [2365, "\t\treturn -EINVAL;\n"], [2366, "\n"], [2373, "\tcopylen = get_arglen[GET_CMDID(cmd)];\n"], [2374, "\tif (copylen > 128)\n"], [2375, "\t\treturn -EINVAL;\n"], [2376, "\n"], [2377, "\tif (copy_from_user(arg, user, copylen) != 0)\n"]] | [[2359, "unsigned int copylen;"], [2364, "if (cmd < IP_VS_BASE_CTL || cmd > IP_VS_SO_GET_MAX)"], [2365, "return -EINVAL;"], [2366, "\n"], [2373, "copylen = get_arglen[GET_CMDID(cmd)];"], [2374, "if (copylen > 128)"], [2375, "return -EINVAL;"], [2376, "\n"], [2377, "if (copy_from_user(arg, user, copylen) != 0)"]] | [
"CVE-2013-4588"
] | [
"CWE-119"
] | 6 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"GET_CMDID"
],
"Function Argument": [
"cmd"
],
"Globals": [
"get_arglen"
],
"Type Execution Declaration": []
} |
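Rows idx 4-6 record CVE-2013-4588: the cmd value must be range-checked before it is used to index get_arglen[]/set_arglen[], and the resulting length must be capped at the destination buffer. A userspace sketch of the same validation follows; CMD_BASE/CMD_MAX and the arglen[] contents are invented, not the real IP_VS_* values, and memcpy() stands in for copy_from_user().

```c
/* Userspace sketch: validate the command before indexing the length
 * table, then cap the copy length at the destination buffer size. */
#include <stdio.h>
#include <string.h>

#define CMD_BASE     64                 /* stand-in for IP_VS_BASE_CTL   */
#define CMD_MAX      70                 /* stand-in for IP_VS_SO_GET_MAX */
#define CMD_ID(cmd)  ((cmd) - CMD_BASE)

static const unsigned int arglen[CMD_MAX - CMD_BASE + 1] = {
    8, 16, 24, 64, 128, 8, 16          /* illustrative lengths */
};

static int get_ctl(int cmd, const void *user, size_t len)
{
    unsigned char arg[128];
    unsigned int copylen;

    if (cmd < CMD_BASE || cmd > CMD_MAX)         /* reject before indexing */
        return -1;
    copylen = arglen[CMD_ID(cmd)];
    if (copylen > sizeof(arg) || len < copylen)  /* cap the copy length */
        return -1;
    memcpy(arg, user, copylen);                  /* like copy_from_user() */
    return 0;
}

int main(void)
{
    unsigned char buf[128] = { 0 };
    printf("valid cmd: %d\n", get_ctl(CMD_BASE + 2, buf, sizeof(buf)));
    printf("bogus cmd: %d\n", get_ctl(9999, buf, sizeof(buf)));
    return 0;
}
```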
7 | linux | https://github.com/torvalds/linux | drivers/vfio/pci/vfio_pci.c | 05692d7005a364add85c6e25a6c4447ce08f913a | vfio/pci: Fix integer overflows, bitmask check
The VFIO_DEVICE_SET_IRQS ioctl did not sufficiently sanitize
user-supplied integers, potentially allowing memory corruption. This
patch adds appropriate integer overflow checks, checks the range bounds
for VFIO_IRQ_SET_DATA_NONE, and also verifies that only a single element
in the VFIO_IRQ_SET_DATA_TYPE_MASK bitmask is set.
VFIO_IRQ_SET_ACTION_TYPE_MASK is already correctly checked later in
vfio_pci_set_irqs_ioctl().
Furthermore, a kzalloc is changed to a kcalloc because the use of a
kzalloc with an integer multiplication allowed an integer overflow
condition to be reached without this patch. kcalloc checks for overflow
and should prevent a similar occurrence.
Signed-off-by: Vlad Tsyrklevich <vlad@tsyrklevich.net>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com> | true | a049777f01d743bcf5935c70c207b35e | vfio_pci_ioctl | static long vfio_pci_ioctl(void *device_data,
unsigned int cmd, unsigned long arg)
{
struct vfio_pci_device *vdev = device_data;
unsigned long minsz;
if (cmd == VFIO_DEVICE_GET_INFO) {
struct vfio_device_info info;
minsz = offsetofend(struct vfio_device_info, num_irqs);
if (copy_from_user(&info, (void __user *)arg, minsz))
return -EFAULT;
if (info.argsz < minsz)
return -EINVAL;
info.flags = VFIO_DEVICE_FLAGS_PCI;
if (vdev->reset_works)
info.flags |= VFIO_DEVICE_FLAGS_RESET;
info.num_regions = VFIO_PCI_NUM_REGIONS + vdev->num_regions;
info.num_irqs = VFIO_PCI_NUM_IRQS;
return copy_to_user((void __user *)arg, &info, minsz) ?
-EFAULT : 0;
} else if (cmd == VFIO_DEVICE_GET_REGION_INFO) {
struct pci_dev *pdev = vdev->pdev;
struct vfio_region_info info;
struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
int i, ret;
minsz = offsetofend(struct vfio_region_info, offset);
if (copy_from_user(&info, (void __user *)arg, minsz))
return -EFAULT;
if (info.argsz < minsz)
return -EINVAL;
switch (info.index) {
case VFIO_PCI_CONFIG_REGION_INDEX:
info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
info.size = pdev->cfg_size;
info.flags = VFIO_REGION_INFO_FLAG_READ |
VFIO_REGION_INFO_FLAG_WRITE;
break;
case VFIO_PCI_BAR0_REGION_INDEX ... VFIO_PCI_BAR5_REGION_INDEX:
info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
info.size = pci_resource_len(pdev, info.index);
if (!info.size) {
info.flags = 0;
break;
}
info.flags = VFIO_REGION_INFO_FLAG_READ |
VFIO_REGION_INFO_FLAG_WRITE;
if (vdev->bar_mmap_supported[info.index]) {
info.flags |= VFIO_REGION_INFO_FLAG_MMAP;
if (info.index == vdev->msix_bar) {
ret = msix_sparse_mmap_cap(vdev, &caps);
if (ret)
return ret;
}
}
break;
case VFIO_PCI_ROM_REGION_INDEX:
{
void __iomem *io;
size_t size;
info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
info.flags = 0;
/* Report the BAR size, not the ROM size */
info.size = pci_resource_len(pdev, info.index);
if (!info.size) {
/* Shadow ROMs appear as PCI option ROMs */
if (pdev->resource[PCI_ROM_RESOURCE].flags &
IORESOURCE_ROM_SHADOW)
info.size = 0x20000;
else
break;
}
/* Is it really there? */
io = pci_map_rom(pdev, &size);
if (!io || !size) {
info.size = 0;
break;
}
pci_unmap_rom(pdev, io);
info.flags = VFIO_REGION_INFO_FLAG_READ;
break;
}
case VFIO_PCI_VGA_REGION_INDEX:
if (!vdev->has_vga)
return -EINVAL;
info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
info.size = 0xc0000;
info.flags = VFIO_REGION_INFO_FLAG_READ |
VFIO_REGION_INFO_FLAG_WRITE;
break;
default:
if (info.index >=
VFIO_PCI_NUM_REGIONS + vdev->num_regions)
return -EINVAL;
i = info.index - VFIO_PCI_NUM_REGIONS;
info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
info.size = vdev->region[i].size;
info.flags = vdev->region[i].flags;
ret = region_type_cap(vdev, &caps,
vdev->region[i].type,
vdev->region[i].subtype);
if (ret)
return ret;
}
if (caps.size) {
info.flags |= VFIO_REGION_INFO_FLAG_CAPS;
if (info.argsz < sizeof(info) + caps.size) {
info.argsz = sizeof(info) + caps.size;
info.cap_offset = 0;
} else {
vfio_info_cap_shift(&caps, sizeof(info));
if (copy_to_user((void __user *)arg +
sizeof(info), caps.buf,
caps.size)) {
kfree(caps.buf);
return -EFAULT;
}
info.cap_offset = sizeof(info);
}
kfree(caps.buf);
}
return copy_to_user((void __user *)arg, &info, minsz) ?
-EFAULT : 0;
} else if (cmd == VFIO_DEVICE_GET_IRQ_INFO) {
struct vfio_irq_info info;
minsz = offsetofend(struct vfio_irq_info, count);
if (copy_from_user(&info, (void __user *)arg, minsz))
return -EFAULT;
if (info.argsz < minsz || info.index >= VFIO_PCI_NUM_IRQS)
return -EINVAL;
switch (info.index) {
case VFIO_PCI_INTX_IRQ_INDEX ... VFIO_PCI_MSIX_IRQ_INDEX:
case VFIO_PCI_REQ_IRQ_INDEX:
break;
case VFIO_PCI_ERR_IRQ_INDEX:
if (pci_is_pcie(vdev->pdev))
break;
/* pass thru to return error */
default:
return -EINVAL;
}
info.flags = VFIO_IRQ_INFO_EVENTFD;
info.count = vfio_pci_get_irq_count(vdev, info.index);
if (info.index == VFIO_PCI_INTX_IRQ_INDEX)
info.flags |= (VFIO_IRQ_INFO_MASKABLE |
VFIO_IRQ_INFO_AUTOMASKED);
else
info.flags |= VFIO_IRQ_INFO_NORESIZE;
return copy_to_user((void __user *)arg, &info, minsz) ?
-EFAULT : 0;
} else if (cmd == VFIO_DEVICE_SET_IRQS) {
struct vfio_irq_set hdr;
u8 *data = NULL;
int ret = 0;
minsz = offsetofend(struct vfio_irq_set, count);
if (copy_from_user(&hdr, (void __user *)arg, minsz))
return -EFAULT;
if (hdr.argsz < minsz || hdr.index >= VFIO_PCI_NUM_IRQS ||
hdr.flags & ~(VFIO_IRQ_SET_DATA_TYPE_MASK |
VFIO_IRQ_SET_ACTION_TYPE_MASK))
return -EINVAL;
if (!(hdr.flags & VFIO_IRQ_SET_DATA_NONE)) {
size_t size;
int max = vfio_pci_get_irq_count(vdev, hdr.index);
if (hdr.flags & VFIO_IRQ_SET_DATA_BOOL)
size = sizeof(uint8_t);
else if (hdr.flags & VFIO_IRQ_SET_DATA_EVENTFD)
size = sizeof(int32_t);
else
return -EINVAL;
if (hdr.argsz - minsz < hdr.count * size ||
hdr.start >= max || hdr.start + hdr.count > max)
return -EINVAL;
data = memdup_user((void __user *)(arg + minsz),
hdr.count * size);
if (IS_ERR(data))
return PTR_ERR(data);
}
mutex_lock(&vdev->igate);
ret = vfio_pci_set_irqs_ioctl(vdev, hdr.flags, hdr.index,
hdr.start, hdr.count, data);
mutex_unlock(&vdev->igate);
kfree(data);
return ret;
} else if (cmd == VFIO_DEVICE_RESET) {
return vdev->reset_works ?
pci_try_reset_function(vdev->pdev) : -EINVAL;
} else if (cmd == VFIO_DEVICE_GET_PCI_HOT_RESET_INFO) {
struct vfio_pci_hot_reset_info hdr;
struct vfio_pci_fill_info fill = { 0 };
struct vfio_pci_dependent_device *devices = NULL;
bool slot = false;
int ret = 0;
minsz = offsetofend(struct vfio_pci_hot_reset_info, count);
if (copy_from_user(&hdr, (void __user *)arg, minsz))
return -EFAULT;
if (hdr.argsz < minsz)
return -EINVAL;
hdr.flags = 0;
/* Can we do a slot or bus reset or neither? */
if (!pci_probe_reset_slot(vdev->pdev->slot))
slot = true;
else if (pci_probe_reset_bus(vdev->pdev->bus))
return -ENODEV;
/* How many devices are affected? */
ret = vfio_pci_for_each_slot_or_bus(vdev->pdev,
vfio_pci_count_devs,
&fill.max, slot);
if (ret)
return ret;
WARN_ON(!fill.max); /* Should always be at least one */
/*
* If there's enough space, fill it now, otherwise return
* -ENOSPC and the number of devices affected.
*/
if (hdr.argsz < sizeof(hdr) + (fill.max * sizeof(*devices))) {
ret = -ENOSPC;
hdr.count = fill.max;
goto reset_info_exit;
}
devices = kcalloc(fill.max, sizeof(*devices), GFP_KERNEL);
if (!devices)
return -ENOMEM;
fill.devices = devices;
ret = vfio_pci_for_each_slot_or_bus(vdev->pdev,
vfio_pci_fill_devs,
&fill, slot);
/*
* If a device was removed between counting and filling,
* we may come up short of fill.max. If a device was
* added, we'll have a return of -EAGAIN above.
*/
if (!ret)
hdr.count = fill.cur;
reset_info_exit:
if (copy_to_user((void __user *)arg, &hdr, minsz))
ret = -EFAULT;
if (!ret) {
if (copy_to_user((void __user *)(arg + minsz), devices,
hdr.count * sizeof(*devices)))
ret = -EFAULT;
}
kfree(devices);
return ret;
} else if (cmd == VFIO_DEVICE_PCI_HOT_RESET) {
struct vfio_pci_hot_reset hdr;
int32_t *group_fds;
struct vfio_pci_group_entry *groups;
struct vfio_pci_group_info info;
bool slot = false;
int i, count = 0, ret = 0;
minsz = offsetofend(struct vfio_pci_hot_reset, count);
if (copy_from_user(&hdr, (void __user *)arg, minsz))
return -EFAULT;
if (hdr.argsz < minsz || hdr.flags)
return -EINVAL;
/* Can we do a slot or bus reset or neither? */
if (!pci_probe_reset_slot(vdev->pdev->slot))
slot = true;
else if (pci_probe_reset_bus(vdev->pdev->bus))
return -ENODEV;
/*
* We can't let userspace give us an arbitrarily large
* buffer to copy, so verify how many we think there
* could be. Note groups can have multiple devices so
* one group per device is the max.
*/
ret = vfio_pci_for_each_slot_or_bus(vdev->pdev,
vfio_pci_count_devs,
&count, slot);
if (ret)
return ret;
/* Somewhere between 1 and count is OK */
if (!hdr.count || hdr.count > count)
return -EINVAL;
group_fds = kcalloc(hdr.count, sizeof(*group_fds), GFP_KERNEL);
groups = kcalloc(hdr.count, sizeof(*groups), GFP_KERNEL);
if (!group_fds || !groups) {
kfree(group_fds);
kfree(groups);
return -ENOMEM;
}
if (copy_from_user(group_fds, (void __user *)(arg + minsz),
hdr.count * sizeof(*group_fds))) {
kfree(group_fds);
kfree(groups);
return -EFAULT;
}
/*
* For each group_fd, get the group through the vfio external
* user interface and store the group and iommu ID. This
* ensures the group is held across the reset.
*/
for (i = 0; i < hdr.count; i++) {
struct vfio_group *group;
struct fd f = fdget(group_fds[i]);
if (!f.file) {
ret = -EBADF;
break;
}
group = vfio_group_get_external_user(f.file);
fdput(f);
if (IS_ERR(group)) {
ret = PTR_ERR(group);
break;
}
groups[i].group = group;
groups[i].id = vfio_external_user_iommu_id(group);
}
kfree(group_fds);
/* release reference to groups on error */
if (ret)
goto hot_reset_release;
info.count = hdr.count;
info.groups = groups;
/*
* Test whether all the affected devices are contained
* by the set of groups provided by the user.
*/
ret = vfio_pci_for_each_slot_or_bus(vdev->pdev,
vfio_pci_validate_devs,
&info, slot);
if (!ret)
/* User has access, do the reset */
ret = slot ? pci_try_reset_slot(vdev->pdev->slot) :
pci_try_reset_bus(vdev->pdev->bus);
hot_reset_release:
for (i--; i >= 0; i--)
vfio_group_put_external_user(groups[i].group);
kfree(groups);
return ret;
}
return -ENOTTY;
}
| [[833, "\t\tint ret = 0;\n"], [845, "\t\tif (!(hdr.flags & VFIO_IRQ_SET_DATA_NONE)) {\n"], [846, "\t\t\tsize_t size;\n"], [847, "\t\t\tint max = vfio_pci_get_irq_count(vdev, hdr.index);\n"], [849, "\t\t\tif (hdr.flags & VFIO_IRQ_SET_DATA_BOOL)\n"], [850, "\t\t\t\tsize = sizeof(uint8_t);\n"], [851, "\t\t\telse if (hdr.flags & VFIO_IRQ_SET_DATA_EVENTFD)\n"], [852, "\t\t\t\tsize = sizeof(int32_t);\n"], [853, "\t\t\telse\n"], [854, "\t\t\t\treturn -EINVAL;\n"], [856, "\t\t\tif (hdr.argsz - minsz < hdr.count * size ||\n"], [857, "\t\t\t hdr.start >= max || hdr.start + hdr.count > max)\n"]] | [[833, "int ret = 0;"], [845, "if (!(hdr.flags & VFIO_IRQ_SET_DATA_NONE))"], [846, "size_t size;"], [847, "int max = vfio_pci_get_irq_count(vdev, hdr.index);"], [849, "if (hdr.flags & VFIO_IRQ_SET_DATA_BOOL)"], [850, "size = sizeof(uint8_t);"], [851, "else if (hdr.flags & VFIO_IRQ_SET_DATA_EVENTFD)"], [852, "size = sizeof(int32_t);"], [853, "else"], [854, "return -EINVAL;"], [856, "if (hdr.argsz - minsz < hdr.count * size ||\n\t\t\t hdr.start >= max || hdr.start + hdr.count > max)"]] | [
"CVE-2016-9083",
"CVE-2016-9084"
] | [
"CWE-190",
"CWE-119"
] | 8 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"vfio_pci_get_irq_count"
],
"Function Argument": [
"device_data",
"cmd",
"arg"
],
"Globals": [
"VFIO_IRQ_SET_DATA_NONE",
"VFIO_IRQ_SET_DATA_BOOL",
"VFIO_IRQ_SET_DATA_EVENTFD",
"VFIO_IRQ_SET_DATA_TYPE_MASK"
],
"Type Execution Declaration": []
} |
8 | linux | https://github.com/torvalds/linux | drivers/vfio/pci/vfio_pci.c | 05692d7005a364add85c6e25a6c4447ce08f913a | vfio/pci: Fix integer overflows, bitmask check
The VFIO_DEVICE_SET_IRQS ioctl did not sufficiently sanitize
user-supplied integers, potentially allowing memory corruption. This
patch adds appropriate integer overflow checks, checks the range bounds
for VFIO_IRQ_SET_DATA_NONE, and also verifies that only a single element
in the VFIO_IRQ_SET_DATA_TYPE_MASK bitmask is set.
VFIO_IRQ_SET_ACTION_TYPE_MASK is already correctly checked later in
vfio_pci_set_irqs_ioctl().
Furthermore, a kzalloc is changed to a kcalloc because the use of a
kzalloc with an integer multiplication allowed an integer overflow
condition to be reached without this patch. kcalloc checks for overflow
and should prevent a similar occurrence.
Signed-off-by: Vlad Tsyrklevich <vlad@tsyrklevich.net>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com> | false | a868d284be988ce6b14e3074a510fdcf | vfio_pci_ioctl | static long vfio_pci_ioctl(void *device_data,
unsigned int cmd, unsigned long arg)
{
struct vfio_pci_device *vdev = device_data;
unsigned long minsz;
if (cmd == VFIO_DEVICE_GET_INFO) {
struct vfio_device_info info;
minsz = offsetofend(struct vfio_device_info, num_irqs);
if (copy_from_user(&info, (void __user *)arg, minsz))
return -EFAULT;
if (info.argsz < minsz)
return -EINVAL;
info.flags = VFIO_DEVICE_FLAGS_PCI;
if (vdev->reset_works)
info.flags |= VFIO_DEVICE_FLAGS_RESET;
info.num_regions = VFIO_PCI_NUM_REGIONS + vdev->num_regions;
info.num_irqs = VFIO_PCI_NUM_IRQS;
return copy_to_user((void __user *)arg, &info, minsz) ?
-EFAULT : 0;
} else if (cmd == VFIO_DEVICE_GET_REGION_INFO) {
struct pci_dev *pdev = vdev->pdev;
struct vfio_region_info info;
struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
int i, ret;
minsz = offsetofend(struct vfio_region_info, offset);
if (copy_from_user(&info, (void __user *)arg, minsz))
return -EFAULT;
if (info.argsz < minsz)
return -EINVAL;
switch (info.index) {
case VFIO_PCI_CONFIG_REGION_INDEX:
info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
info.size = pdev->cfg_size;
info.flags = VFIO_REGION_INFO_FLAG_READ |
VFIO_REGION_INFO_FLAG_WRITE;
break;
case VFIO_PCI_BAR0_REGION_INDEX ... VFIO_PCI_BAR5_REGION_INDEX:
info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
info.size = pci_resource_len(pdev, info.index);
if (!info.size) {
info.flags = 0;
break;
}
info.flags = VFIO_REGION_INFO_FLAG_READ |
VFIO_REGION_INFO_FLAG_WRITE;
if (vdev->bar_mmap_supported[info.index]) {
info.flags |= VFIO_REGION_INFO_FLAG_MMAP;
if (info.index == vdev->msix_bar) {
ret = msix_sparse_mmap_cap(vdev, &caps);
if (ret)
return ret;
}
}
break;
case VFIO_PCI_ROM_REGION_INDEX:
{
void __iomem *io;
size_t size;
info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
info.flags = 0;
/* Report the BAR size, not the ROM size */
info.size = pci_resource_len(pdev, info.index);
if (!info.size) {
/* Shadow ROMs appear as PCI option ROMs */
if (pdev->resource[PCI_ROM_RESOURCE].flags &
IORESOURCE_ROM_SHADOW)
info.size = 0x20000;
else
break;
}
/* Is it really there? */
io = pci_map_rom(pdev, &size);
if (!io || !size) {
info.size = 0;
break;
}
pci_unmap_rom(pdev, io);
info.flags = VFIO_REGION_INFO_FLAG_READ;
break;
}
case VFIO_PCI_VGA_REGION_INDEX:
if (!vdev->has_vga)
return -EINVAL;
info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
info.size = 0xc0000;
info.flags = VFIO_REGION_INFO_FLAG_READ |
VFIO_REGION_INFO_FLAG_WRITE;
break;
default:
if (info.index >=
VFIO_PCI_NUM_REGIONS + vdev->num_regions)
return -EINVAL;
i = info.index - VFIO_PCI_NUM_REGIONS;
info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
info.size = vdev->region[i].size;
info.flags = vdev->region[i].flags;
ret = region_type_cap(vdev, &caps,
vdev->region[i].type,
vdev->region[i].subtype);
if (ret)
return ret;
}
if (caps.size) {
info.flags |= VFIO_REGION_INFO_FLAG_CAPS;
if (info.argsz < sizeof(info) + caps.size) {
info.argsz = sizeof(info) + caps.size;
info.cap_offset = 0;
} else {
vfio_info_cap_shift(&caps, sizeof(info));
if (copy_to_user((void __user *)arg +
sizeof(info), caps.buf,
caps.size)) {
kfree(caps.buf);
return -EFAULT;
}
info.cap_offset = sizeof(info);
}
kfree(caps.buf);
}
return copy_to_user((void __user *)arg, &info, minsz) ?
-EFAULT : 0;
} else if (cmd == VFIO_DEVICE_GET_IRQ_INFO) {
struct vfio_irq_info info;
minsz = offsetofend(struct vfio_irq_info, count);
if (copy_from_user(&info, (void __user *)arg, minsz))
return -EFAULT;
if (info.argsz < minsz || info.index >= VFIO_PCI_NUM_IRQS)
return -EINVAL;
switch (info.index) {
case VFIO_PCI_INTX_IRQ_INDEX ... VFIO_PCI_MSIX_IRQ_INDEX:
case VFIO_PCI_REQ_IRQ_INDEX:
break;
case VFIO_PCI_ERR_IRQ_INDEX:
if (pci_is_pcie(vdev->pdev))
break;
/* pass thru to return error */
default:
return -EINVAL;
}
info.flags = VFIO_IRQ_INFO_EVENTFD;
info.count = vfio_pci_get_irq_count(vdev, info.index);
if (info.index == VFIO_PCI_INTX_IRQ_INDEX)
info.flags |= (VFIO_IRQ_INFO_MASKABLE |
VFIO_IRQ_INFO_AUTOMASKED);
else
info.flags |= VFIO_IRQ_INFO_NORESIZE;
return copy_to_user((void __user *)arg, &info, minsz) ?
-EFAULT : 0;
} else if (cmd == VFIO_DEVICE_SET_IRQS) {
struct vfio_irq_set hdr;
size_t size;
u8 *data = NULL;
int max, ret = 0;
minsz = offsetofend(struct vfio_irq_set, count);
if (copy_from_user(&hdr, (void __user *)arg, minsz))
return -EFAULT;
if (hdr.argsz < minsz || hdr.index >= VFIO_PCI_NUM_IRQS ||
hdr.count >= (U32_MAX - hdr.start) ||
hdr.flags & ~(VFIO_IRQ_SET_DATA_TYPE_MASK |
VFIO_IRQ_SET_ACTION_TYPE_MASK))
return -EINVAL;
max = vfio_pci_get_irq_count(vdev, hdr.index);
if (hdr.start >= max || hdr.start + hdr.count > max)
return -EINVAL;
switch (hdr.flags & VFIO_IRQ_SET_DATA_TYPE_MASK) {
case VFIO_IRQ_SET_DATA_NONE:
size = 0;
break;
case VFIO_IRQ_SET_DATA_BOOL:
size = sizeof(uint8_t);
break;
case VFIO_IRQ_SET_DATA_EVENTFD:
size = sizeof(int32_t);
break;
default:
return -EINVAL;
}
if (size) {
if (hdr.argsz - minsz < hdr.count * size)
return -EINVAL;
data = memdup_user((void __user *)(arg + minsz),
hdr.count * size);
if (IS_ERR(data))
return PTR_ERR(data);
}
mutex_lock(&vdev->igate);
ret = vfio_pci_set_irqs_ioctl(vdev, hdr.flags, hdr.index,
hdr.start, hdr.count, data);
mutex_unlock(&vdev->igate);
kfree(data);
return ret;
} else if (cmd == VFIO_DEVICE_RESET) {
return vdev->reset_works ?
pci_try_reset_function(vdev->pdev) : -EINVAL;
} else if (cmd == VFIO_DEVICE_GET_PCI_HOT_RESET_INFO) {
struct vfio_pci_hot_reset_info hdr;
struct vfio_pci_fill_info fill = { 0 };
struct vfio_pci_dependent_device *devices = NULL;
bool slot = false;
int ret = 0;
minsz = offsetofend(struct vfio_pci_hot_reset_info, count);
if (copy_from_user(&hdr, (void __user *)arg, minsz))
return -EFAULT;
if (hdr.argsz < minsz)
return -EINVAL;
hdr.flags = 0;
/* Can we do a slot or bus reset or neither? */
if (!pci_probe_reset_slot(vdev->pdev->slot))
slot = true;
else if (pci_probe_reset_bus(vdev->pdev->bus))
return -ENODEV;
/* How many devices are affected? */
ret = vfio_pci_for_each_slot_or_bus(vdev->pdev,
vfio_pci_count_devs,
&fill.max, slot);
if (ret)
return ret;
WARN_ON(!fill.max); /* Should always be at least one */
/*
* If there's enough space, fill it now, otherwise return
* -ENOSPC and the number of devices affected.
*/
if (hdr.argsz < sizeof(hdr) + (fill.max * sizeof(*devices))) {
ret = -ENOSPC;
hdr.count = fill.max;
goto reset_info_exit;
}
devices = kcalloc(fill.max, sizeof(*devices), GFP_KERNEL);
if (!devices)
return -ENOMEM;
fill.devices = devices;
ret = vfio_pci_for_each_slot_or_bus(vdev->pdev,
vfio_pci_fill_devs,
&fill, slot);
/*
* If a device was removed between counting and filling,
* we may come up short of fill.max. If a device was
* added, we'll have a return of -EAGAIN above.
*/
if (!ret)
hdr.count = fill.cur;
reset_info_exit:
if (copy_to_user((void __user *)arg, &hdr, minsz))
ret = -EFAULT;
if (!ret) {
if (copy_to_user((void __user *)(arg + minsz), devices,
hdr.count * sizeof(*devices)))
ret = -EFAULT;
}
kfree(devices);
return ret;
} else if (cmd == VFIO_DEVICE_PCI_HOT_RESET) {
struct vfio_pci_hot_reset hdr;
int32_t *group_fds;
struct vfio_pci_group_entry *groups;
struct vfio_pci_group_info info;
bool slot = false;
int i, count = 0, ret = 0;
minsz = offsetofend(struct vfio_pci_hot_reset, count);
if (copy_from_user(&hdr, (void __user *)arg, minsz))
return -EFAULT;
if (hdr.argsz < minsz || hdr.flags)
return -EINVAL;
/* Can we do a slot or bus reset or neither? */
if (!pci_probe_reset_slot(vdev->pdev->slot))
slot = true;
else if (pci_probe_reset_bus(vdev->pdev->bus))
return -ENODEV;
/*
* We can't let userspace give us an arbitrarily large
* buffer to copy, so verify how many we think there
* could be. Note groups can have multiple devices so
* one group per device is the max.
*/
ret = vfio_pci_for_each_slot_or_bus(vdev->pdev,
vfio_pci_count_devs,
&count, slot);
if (ret)
return ret;
/* Somewhere between 1 and count is OK */
if (!hdr.count || hdr.count > count)
return -EINVAL;
group_fds = kcalloc(hdr.count, sizeof(*group_fds), GFP_KERNEL);
groups = kcalloc(hdr.count, sizeof(*groups), GFP_KERNEL);
if (!group_fds || !groups) {
kfree(group_fds);
kfree(groups);
return -ENOMEM;
}
if (copy_from_user(group_fds, (void __user *)(arg + minsz),
hdr.count * sizeof(*group_fds))) {
kfree(group_fds);
kfree(groups);
return -EFAULT;
}
/*
* For each group_fd, get the group through the vfio external
* user interface and store the group and iommu ID. This
* ensures the group is held across the reset.
*/
for (i = 0; i < hdr.count; i++) {
struct vfio_group *group;
struct fd f = fdget(group_fds[i]);
if (!f.file) {
ret = -EBADF;
break;
}
group = vfio_group_get_external_user(f.file);
fdput(f);
if (IS_ERR(group)) {
ret = PTR_ERR(group);
break;
}
groups[i].group = group;
groups[i].id = vfio_external_user_iommu_id(group);
}
kfree(group_fds);
/* release reference to groups on error */
if (ret)
goto hot_reset_release;
info.count = hdr.count;
info.groups = groups;
/*
* Test whether all the affected devices are contained
* by the set of groups provided by the user.
*/
ret = vfio_pci_for_each_slot_or_bus(vdev->pdev,
vfio_pci_validate_devs,
&info, slot);
if (!ret)
/* User has access, do the reset */
ret = slot ? pci_try_reset_slot(vdev->pdev->slot) :
pci_try_reset_bus(vdev->pdev->bus);
hot_reset_release:
for (i--; i >= 0; i--)
vfio_group_put_external_user(groups[i].group);
kfree(groups);
return ret;
}
return -ENOTTY;
}
| [[832, "\t\tsize_t size;\n"], [834, "\t\tint max, ret = 0;\n"], [842, "\t\t hdr.count >= (U32_MAX - hdr.start) ||\n"], [847, "\t\tmax = vfio_pci_get_irq_count(vdev, hdr.index);\n"], [848, "\t\tif (hdr.start >= max || hdr.start + hdr.count > max)\n"], [849, "\t\t\treturn -EINVAL;\n"], [851, "\t\tswitch (hdr.flags & VFIO_IRQ_SET_DATA_TYPE_MASK) {\n"], [852, "\t\tcase VFIO_IRQ_SET_DATA_NONE:\n"], [853, "\t\t\tsize = 0;\n"], [854, "\t\t\tbreak;\n"], [855, "\t\tcase VFIO_IRQ_SET_DATA_BOOL:\n"], [856, "\t\t\tsize = sizeof(uint8_t);\n"], [857, "\t\t\tbreak;\n"], [858, "\t\tcase VFIO_IRQ_SET_DATA_EVENTFD:\n"], [859, "\t\t\tsize = sizeof(int32_t);\n"], [860, "\t\t\tbreak;\n"], [861, "\t\tdefault:\n"], [862, "\t\t\treturn -EINVAL;\n"], [863, "\t\t}\n"], [865, "\t\tif (size) {\n"], [866, "\t\t\tif (hdr.argsz - minsz < hdr.count * size)\n"]] | [[832, "size_t size;"], [834, "int max, ret = 0;"], [841, "if (hdr.argsz < minsz || hdr.index >= VFIO_PCI_NUM_IRQS ||\n\t\t hdr.count >= (U32_MAX - hdr.start) ||\n\t\t hdr.flags & ~(VFIO_IRQ_SET_DATA_TYPE_MASK |\n\t\t\t\t VFIO_IRQ_SET_ACTION_TYPE_MASK))"], [847, "max = vfio_pci_get_irq_count(vdev, hdr.index);"], [848, "if (hdr.start >= max || hdr.start + hdr.count > max)"], [849, "return -EINVAL;"], [851, "switch (hdr.flags & VFIO_IRQ_SET_DATA_TYPE_MASK) {"], [852, "case VFIO_IRQ_SET_DATA_NONE:"], [853, "size = 0;"], [854, "break;"], [855, "case VFIO_IRQ_SET_DATA_BOOL:"], [856, "size = sizeof(uint8_t);"], [857, "break;"], [858, "case VFIO_IRQ_SET_DATA_EVENTFD:"], [859, "size = sizeof(int32_t);"], [860, "break;"], [861, "default:"], [862, "return -EINVAL;"], [863, "\t\t}\n"], [865, "if (size)"], [866, "if (hdr.argsz - minsz < hdr.count * size)"]] | [
"CVE-2016-9083",
"CVE-2016-9084"
] | [
"CWE-190",
"CWE-119"
] | 8 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"vfio_pci_get_irq_count"
],
"Function Argument": [
"device_data",
"cmd",
"arg"
],
"Globals": [
"VFIO_IRQ_SET_DATA_NONE",
"VFIO_IRQ_SET_DATA_BOOL",
"VFIO_IRQ_SET_DATA_EVENTFD",
"VFIO_IRQ_SET_DATA_TYPE_MASK"
],
"Type Execution Declaration": []
} |
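Rows idx 7/8 show the VFIO_DEVICE_SET_IRQS part of the fix: because hdr.start and hdr.count are untrusted 32-bit values, hdr.start + hdr.count can wrap before the comparison against the vector count, so the fixed code rejects hdr.count >= (U32_MAX - hdr.start) first. A small standalone illustration of why the extra check matters; range_ok() and the values are invented.

```c
/* With only the naive test, start=1, count=UINT32_MAX would pass:
 * start + count wraps to 0, which is not > max. */
#include <stdint.h>
#include <stdio.h>

static int range_ok(uint32_t start, uint32_t count, uint32_t max)
{
    if (count >= UINT32_MAX - start)      /* sum would wrap: reject first */
        return 0;
    if (start >= max || start + count > max)
        return 0;
    return 1;
}

int main(void)
{
    printf("%d\n", range_ok(10, 5, 16));          /* 1: fits          */
    printf("%d\n", range_ok(1, UINT32_MAX, 16));  /* 0: wrap rejected */
    return 0;
}
```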
9 | linux | https://github.com/torvalds/linux | drivers/vfio/pci/vfio_pci_intrs.c | 05692d7005a364add85c6e25a6c4447ce08f913a | vfio/pci: Fix integer overflows, bitmask check
The VFIO_DEVICE_SET_IRQS ioctl did not sufficiently sanitize
user-supplied integers, potentially allowing memory corruption. This
patch adds appropriate integer overflow checks, checks the range bounds
for VFIO_IRQ_SET_DATA_NONE, and also verifies that only a single element
in the VFIO_IRQ_SET_DATA_TYPE_MASK bitmask is set.
VFIO_IRQ_SET_ACTION_TYPE_MASK is already correctly checked later in
vfio_pci_set_irqs_ioctl().
Furthermore, a kzalloc is changed to a kcalloc because the use of a
kzalloc with an integer multiplication allowed an integer overflow
condition to be reached without this patch. kcalloc checks for overflow
and should prevent a similar occurrence.
Signed-off-by: Vlad Tsyrklevich <vlad@tsyrklevich.net>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com> | true | 4e082208eaeda7cf04e95dca1377d293 | vfio_msi_enable | static int vfio_msi_enable(struct vfio_pci_device *vdev, int nvec, bool msix)
{
struct pci_dev *pdev = vdev->pdev;
unsigned int flag = msix ? PCI_IRQ_MSIX : PCI_IRQ_MSI;
int ret;
if (!is_irq_none(vdev))
return -EINVAL;
vdev->ctx = kzalloc(nvec * sizeof(struct vfio_pci_irq_ctx), GFP_KERNEL);
if (!vdev->ctx)
return -ENOMEM;
/* return the number of supported vectors if we can't get all: */
ret = pci_alloc_irq_vectors(pdev, 1, nvec, flag);
if (ret < nvec) {
if (ret > 0)
pci_free_irq_vectors(pdev);
kfree(vdev->ctx);
return ret;
}
vdev->num_ctx = nvec;
vdev->irq_type = msix ? VFIO_PCI_MSIX_IRQ_INDEX :
VFIO_PCI_MSI_IRQ_INDEX;
if (!msix) {
/*
* Compute the virtual hardware field for max msi vectors -
* it is the log base 2 of the number of vectors.
*/
vdev->msi_qmax = fls(nvec * 2 - 1) - 1;
}
return 0;
}
| [[259, "\tvdev->ctx = kzalloc(nvec * sizeof(struct vfio_pci_irq_ctx), GFP_KERNEL);\n"]] | [[259, "vdev->ctx = kzalloc(nvec * sizeof(struct vfio_pci_irq_ctx), GFP_KERNEL);"]] | [
"CVE-2016-9083",
"CVE-2016-9084"
] | [
"CWE-190",
"CWE-119"
] | 10 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [
"nvec"
],
"Globals": [],
"Type Execution Declaration": []
} |
10 | linux | https://github.com/torvalds/linux | drivers/vfio/pci/vfio_pci_intrs.c | 05692d7005a364add85c6e25a6c4447ce08f913a | vfio/pci: Fix integer overflows, bitmask check
The VFIO_DEVICE_SET_IRQS ioctl did not sufficiently sanitize
user-supplied integers, potentially allowing memory corruption. This
patch adds appropriate integer overflow checks, checks the range bounds
for VFIO_IRQ_SET_DATA_NONE, and also verifies that only a single element
in the VFIO_IRQ_SET_DATA_TYPE_MASK bitmask is set.
VFIO_IRQ_SET_ACTION_TYPE_MASK is already correctly checked later in
vfio_pci_set_irqs_ioctl().
Furthermore, a kzalloc is changed to a kcalloc because the use of a
kzalloc with an integer multiplication allowed an integer overflow
condition to be reached without this patch. kcalloc checks for overflow
and should prevent a similar occurrence.
Signed-off-by: Vlad Tsyrklevich <vlad@tsyrklevich.net>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com> | false | 7e49e89feb2e97fdd27fd1dc9883d202 | vfio_msi_enable | static int vfio_msi_enable(struct vfio_pci_device *vdev, int nvec, bool msix)
{
struct pci_dev *pdev = vdev->pdev;
unsigned int flag = msix ? PCI_IRQ_MSIX : PCI_IRQ_MSI;
int ret;
if (!is_irq_none(vdev))
return -EINVAL;
vdev->ctx = kcalloc(nvec, sizeof(struct vfio_pci_irq_ctx), GFP_KERNEL);
if (!vdev->ctx)
return -ENOMEM;
/* return the number of supported vectors if we can't get all: */
ret = pci_alloc_irq_vectors(pdev, 1, nvec, flag);
if (ret < nvec) {
if (ret > 0)
pci_free_irq_vectors(pdev);
kfree(vdev->ctx);
return ret;
}
vdev->num_ctx = nvec;
vdev->irq_type = msix ? VFIO_PCI_MSIX_IRQ_INDEX :
VFIO_PCI_MSI_IRQ_INDEX;
if (!msix) {
/*
* Compute the virtual hardware field for max msi vectors -
* it is the log base 2 of the number of vectors.
*/
vdev->msi_qmax = fls(nvec * 2 - 1) - 1;
}
return 0;
}
| [[259, "\tvdev->ctx = kcalloc(nvec, sizeof(struct vfio_pci_irq_ctx), GFP_KERNEL);\n"]] | [[259, "vdev->ctx = kcalloc(nvec, sizeof(struct vfio_pci_irq_ctx), GFP_KERNEL);"]] | [
"CVE-2016-9083",
"CVE-2016-9084"
] | [
"CWE-190",
"CWE-119"
] | 10 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [
"nvec"
],
"Globals": [],
"Type Execution Declaration": []
} |
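The pair of records above differs only in kzalloc(nvec * size) versus kcalloc(nvec, size). A minimal, runnable userspace sketch of the arithmetic being guarded against, assuming a hypothetical 40-byte element and a 32-bit size type for illustration (none of the names below are the kernel's):

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

struct irq_ctx { char pad[40]; };	/* hypothetical 40-byte element */

/* The same overflow test kcalloc/calloc perform before multiplying. */
static void *checked_array_alloc(uint32_t n, uint32_t size)
{
	if (size != 0 && n > UINT32_MAX / size)
		return NULL;
	return calloc(n, size);
}

int main(void)
{
	uint32_t nvec = 0x40000001u;	/* attacker-chosen vector count */
	uint32_t raw = nvec * (uint32_t)sizeof(struct irq_ctx);

	/* The unchecked product wraps to 40, so a kzalloc(raw)-style call
	 * would hand back a 40-byte buffer later indexed nvec times. */
	printf("raw product wraps to %u bytes\n", raw);

	if (!checked_array_alloc(nvec, sizeof(struct irq_ctx)))
		puts("checked allocator rejects the overflowing request");
	return 0;
}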
11 | linux | https://github.com/torvalds/linux | drivers/infiniband/hw/mlx5/qp.c | 0625b4ba1a5d4703c7fb01c497bd6c156908af00 | IB/mlx5: Fix leaking stack memory to userspace
mlx5_ib_create_qp_resp was never initialized and only the first 4 bytes
were written.
Fixes: 41d902cb7c32 ("RDMA/mlx5: Fix definition of mlx5_ib_create_qp_resp")
Cc: <stable@vger.kernel.org>
Acked-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> | true | 4f14403d1b00e2825b924f7bae6f799d | create_qp_common | static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
struct ib_qp_init_attr *init_attr,
struct ib_udata *udata, struct mlx5_ib_qp *qp)
{
struct mlx5_ib_resources *devr = &dev->devr;
int inlen = MLX5_ST_SZ_BYTES(create_qp_in);
struct mlx5_core_dev *mdev = dev->mdev;
struct mlx5_ib_create_qp_resp resp;
struct mlx5_ib_cq *send_cq;
struct mlx5_ib_cq *recv_cq;
unsigned long flags;
u32 uidx = MLX5_IB_DEFAULT_UIDX;
struct mlx5_ib_create_qp ucmd;
struct mlx5_ib_qp_base *base;
int mlx5_st;
void *qpc;
u32 *in;
int err;
mutex_init(&qp->mutex);
spin_lock_init(&qp->sq.lock);
spin_lock_init(&qp->rq.lock);
mlx5_st = to_mlx5_st(init_attr->qp_type);
if (mlx5_st < 0)
return -EINVAL;
if (init_attr->rwq_ind_tbl) {
if (!udata)
return -ENOSYS;
err = create_rss_raw_qp_tir(dev, qp, pd, init_attr, udata);
return err;
}
if (init_attr->create_flags & IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK) {
if (!MLX5_CAP_GEN(mdev, block_lb_mc)) {
mlx5_ib_dbg(dev, "block multicast loopback isn't supported\n");
return -EINVAL;
} else {
qp->flags |= MLX5_IB_QP_BLOCK_MULTICAST_LOOPBACK;
}
}
if (init_attr->create_flags &
(IB_QP_CREATE_CROSS_CHANNEL |
IB_QP_CREATE_MANAGED_SEND |
IB_QP_CREATE_MANAGED_RECV)) {
if (!MLX5_CAP_GEN(mdev, cd)) {
mlx5_ib_dbg(dev, "cross-channel isn't supported\n");
return -EINVAL;
}
if (init_attr->create_flags & IB_QP_CREATE_CROSS_CHANNEL)
qp->flags |= MLX5_IB_QP_CROSS_CHANNEL;
if (init_attr->create_flags & IB_QP_CREATE_MANAGED_SEND)
qp->flags |= MLX5_IB_QP_MANAGED_SEND;
if (init_attr->create_flags & IB_QP_CREATE_MANAGED_RECV)
qp->flags |= MLX5_IB_QP_MANAGED_RECV;
}
if (init_attr->qp_type == IB_QPT_UD &&
(init_attr->create_flags & IB_QP_CREATE_IPOIB_UD_LSO))
if (!MLX5_CAP_GEN(mdev, ipoib_basic_offloads)) {
mlx5_ib_dbg(dev, "ipoib UD lso qp isn't supported\n");
return -EOPNOTSUPP;
}
if (init_attr->create_flags & IB_QP_CREATE_SCATTER_FCS) {
if (init_attr->qp_type != IB_QPT_RAW_PACKET) {
mlx5_ib_dbg(dev, "Scatter FCS is supported only for Raw Packet QPs");
return -EOPNOTSUPP;
}
if (!MLX5_CAP_GEN(dev->mdev, eth_net_offloads) ||
!MLX5_CAP_ETH(dev->mdev, scatter_fcs)) {
mlx5_ib_dbg(dev, "Scatter FCS isn't supported\n");
return -EOPNOTSUPP;
}
qp->flags |= MLX5_IB_QP_CAP_SCATTER_FCS;
}
if (init_attr->sq_sig_type == IB_SIGNAL_ALL_WR)
qp->sq_signal_bits = MLX5_WQE_CTRL_CQ_UPDATE;
if (init_attr->create_flags & IB_QP_CREATE_CVLAN_STRIPPING) {
if (!(MLX5_CAP_GEN(dev->mdev, eth_net_offloads) &&
MLX5_CAP_ETH(dev->mdev, vlan_cap)) ||
(init_attr->qp_type != IB_QPT_RAW_PACKET))
return -EOPNOTSUPP;
qp->flags |= MLX5_IB_QP_CVLAN_STRIPPING;
}
if (pd && pd->uobject) {
if (ib_copy_from_udata(&ucmd, udata, sizeof(ucmd))) {
mlx5_ib_dbg(dev, "copy failed\n");
return -EFAULT;
}
err = get_qp_user_index(to_mucontext(pd->uobject->context),
&ucmd, udata->inlen, &uidx);
if (err)
return err;
qp->wq_sig = !!(ucmd.flags & MLX5_QP_FLAG_SIGNATURE);
qp->scat_cqe = !!(ucmd.flags & MLX5_QP_FLAG_SCATTER_CQE);
if (ucmd.flags & MLX5_QP_FLAG_TUNNEL_OFFLOADS) {
if (init_attr->qp_type != IB_QPT_RAW_PACKET ||
!tunnel_offload_supported(mdev)) {
mlx5_ib_dbg(dev, "Tunnel offload isn't supported\n");
return -EOPNOTSUPP;
}
qp->tunnel_offload_en = true;
}
if (init_attr->create_flags & IB_QP_CREATE_SOURCE_QPN) {
if (init_attr->qp_type != IB_QPT_UD ||
(MLX5_CAP_GEN(dev->mdev, port_type) !=
MLX5_CAP_PORT_TYPE_IB) ||
!mlx5_get_flow_namespace(dev->mdev, MLX5_FLOW_NAMESPACE_BYPASS)) {
mlx5_ib_dbg(dev, "Source QP option isn't supported\n");
return -EOPNOTSUPP;
}
qp->flags |= MLX5_IB_QP_UNDERLAY;
qp->underlay_qpn = init_attr->source_qpn;
}
} else {
qp->wq_sig = !!wq_signature;
}
base = (init_attr->qp_type == IB_QPT_RAW_PACKET ||
qp->flags & MLX5_IB_QP_UNDERLAY) ?
&qp->raw_packet_qp.rq.base :
&qp->trans_qp.base;
qp->has_rq = qp_has_rq(init_attr);
err = set_rq_size(dev, &init_attr->cap, qp->has_rq,
qp, (pd && pd->uobject) ? &ucmd : NULL);
if (err) {
mlx5_ib_dbg(dev, "err %d\n", err);
return err;
}
if (pd) {
if (pd->uobject) {
__u32 max_wqes =
1 << MLX5_CAP_GEN(mdev, log_max_qp_sz);
mlx5_ib_dbg(dev, "requested sq_wqe_count (%d)\n", ucmd.sq_wqe_count);
if (ucmd.rq_wqe_shift != qp->rq.wqe_shift ||
ucmd.rq_wqe_count != qp->rq.wqe_cnt) {
mlx5_ib_dbg(dev, "invalid rq params\n");
return -EINVAL;
}
if (ucmd.sq_wqe_count > max_wqes) {
mlx5_ib_dbg(dev, "requested sq_wqe_count (%d) > max allowed (%d)\n",
ucmd.sq_wqe_count, max_wqes);
return -EINVAL;
}
if (init_attr->create_flags &
mlx5_ib_create_qp_sqpn_qp1()) {
mlx5_ib_dbg(dev, "user-space is not allowed to create UD QPs spoofing as QP1\n");
return -EINVAL;
}
err = create_user_qp(dev, pd, qp, udata, init_attr, &in,
&resp, &inlen, base);
if (err)
mlx5_ib_dbg(dev, "err %d\n", err);
} else {
err = create_kernel_qp(dev, init_attr, qp, &in, &inlen,
base);
if (err)
mlx5_ib_dbg(dev, "err %d\n", err);
}
if (err)
return err;
} else {
in = kvzalloc(inlen, GFP_KERNEL);
if (!in)
return -ENOMEM;
qp->create_type = MLX5_QP_EMPTY;
}
if (is_sqp(init_attr->qp_type))
qp->port = init_attr->port_num;
qpc = MLX5_ADDR_OF(create_qp_in, in, qpc);
MLX5_SET(qpc, qpc, st, mlx5_st);
MLX5_SET(qpc, qpc, pm_state, MLX5_QP_PM_MIGRATED);
if (init_attr->qp_type != MLX5_IB_QPT_REG_UMR)
MLX5_SET(qpc, qpc, pd, to_mpd(pd ? pd : devr->p0)->pdn);
else
MLX5_SET(qpc, qpc, latency_sensitive, 1);
if (qp->wq_sig)
MLX5_SET(qpc, qpc, wq_signature, 1);
if (qp->flags & MLX5_IB_QP_BLOCK_MULTICAST_LOOPBACK)
MLX5_SET(qpc, qpc, block_lb_mc, 1);
if (qp->flags & MLX5_IB_QP_CROSS_CHANNEL)
MLX5_SET(qpc, qpc, cd_master, 1);
if (qp->flags & MLX5_IB_QP_MANAGED_SEND)
MLX5_SET(qpc, qpc, cd_slave_send, 1);
if (qp->flags & MLX5_IB_QP_MANAGED_RECV)
MLX5_SET(qpc, qpc, cd_slave_receive, 1);
if (qp->scat_cqe && is_connected(init_attr->qp_type)) {
int rcqe_sz;
int scqe_sz;
rcqe_sz = mlx5_ib_get_cqe_size(dev, init_attr->recv_cq);
scqe_sz = mlx5_ib_get_cqe_size(dev, init_attr->send_cq);
if (rcqe_sz == 128)
MLX5_SET(qpc, qpc, cs_res, MLX5_RES_SCAT_DATA64_CQE);
else
MLX5_SET(qpc, qpc, cs_res, MLX5_RES_SCAT_DATA32_CQE);
if (init_attr->sq_sig_type == IB_SIGNAL_ALL_WR) {
if (scqe_sz == 128)
MLX5_SET(qpc, qpc, cs_req, MLX5_REQ_SCAT_DATA64_CQE);
else
MLX5_SET(qpc, qpc, cs_req, MLX5_REQ_SCAT_DATA32_CQE);
}
}
if (qp->rq.wqe_cnt) {
MLX5_SET(qpc, qpc, log_rq_stride, qp->rq.wqe_shift - 4);
MLX5_SET(qpc, qpc, log_rq_size, ilog2(qp->rq.wqe_cnt));
}
MLX5_SET(qpc, qpc, rq_type, get_rx_type(qp, init_attr));
if (qp->sq.wqe_cnt) {
MLX5_SET(qpc, qpc, log_sq_size, ilog2(qp->sq.wqe_cnt));
} else {
MLX5_SET(qpc, qpc, no_sq, 1);
if (init_attr->srq &&
init_attr->srq->srq_type == IB_SRQT_TM)
MLX5_SET(qpc, qpc, offload_type,
MLX5_QPC_OFFLOAD_TYPE_RNDV);
}
/* Set default resources */
switch (init_attr->qp_type) {
case IB_QPT_XRC_TGT:
MLX5_SET(qpc, qpc, cqn_rcv, to_mcq(devr->c0)->mcq.cqn);
MLX5_SET(qpc, qpc, cqn_snd, to_mcq(devr->c0)->mcq.cqn);
MLX5_SET(qpc, qpc, srqn_rmpn_xrqn, to_msrq(devr->s0)->msrq.srqn);
MLX5_SET(qpc, qpc, xrcd, to_mxrcd(init_attr->xrcd)->xrcdn);
break;
case IB_QPT_XRC_INI:
MLX5_SET(qpc, qpc, cqn_rcv, to_mcq(devr->c0)->mcq.cqn);
MLX5_SET(qpc, qpc, xrcd, to_mxrcd(devr->x1)->xrcdn);
MLX5_SET(qpc, qpc, srqn_rmpn_xrqn, to_msrq(devr->s0)->msrq.srqn);
break;
default:
if (init_attr->srq) {
MLX5_SET(qpc, qpc, xrcd, to_mxrcd(devr->x0)->xrcdn);
MLX5_SET(qpc, qpc, srqn_rmpn_xrqn, to_msrq(init_attr->srq)->msrq.srqn);
} else {
MLX5_SET(qpc, qpc, xrcd, to_mxrcd(devr->x1)->xrcdn);
MLX5_SET(qpc, qpc, srqn_rmpn_xrqn, to_msrq(devr->s1)->msrq.srqn);
}
}
if (init_attr->send_cq)
MLX5_SET(qpc, qpc, cqn_snd, to_mcq(init_attr->send_cq)->mcq.cqn);
if (init_attr->recv_cq)
MLX5_SET(qpc, qpc, cqn_rcv, to_mcq(init_attr->recv_cq)->mcq.cqn);
MLX5_SET64(qpc, qpc, dbr_addr, qp->db.dma);
/* 0xffffff means we ask to work with cqe version 0 */
if (MLX5_CAP_GEN(mdev, cqe_version) == MLX5_CQE_VERSION_V1)
MLX5_SET(qpc, qpc, user_index, uidx);
/* we use IB_QP_CREATE_IPOIB_UD_LSO to indicates ipoib qp */
if (init_attr->qp_type == IB_QPT_UD &&
(init_attr->create_flags & IB_QP_CREATE_IPOIB_UD_LSO)) {
MLX5_SET(qpc, qpc, ulp_stateless_offload_mode, 1);
qp->flags |= MLX5_IB_QP_LSO;
}
if (init_attr->create_flags & IB_QP_CREATE_PCI_WRITE_END_PADDING) {
if (!MLX5_CAP_GEN(dev->mdev, end_pad)) {
mlx5_ib_dbg(dev, "scatter end padding is not supported\n");
err = -EOPNOTSUPP;
goto err;
} else if (init_attr->qp_type != IB_QPT_RAW_PACKET) {
MLX5_SET(qpc, qpc, end_padding_mode,
MLX5_WQ_END_PAD_MODE_ALIGN);
} else {
qp->flags |= MLX5_IB_QP_PCI_WRITE_END_PADDING;
}
}
if (inlen < 0) {
err = -EINVAL;
goto err;
}
if (init_attr->qp_type == IB_QPT_RAW_PACKET ||
qp->flags & MLX5_IB_QP_UNDERLAY) {
qp->raw_packet_qp.sq.ubuffer.buf_addr = ucmd.sq_buf_addr;
raw_packet_qp_copy_info(qp, &qp->raw_packet_qp);
err = create_raw_packet_qp(dev, qp, in, inlen, pd);
} else {
err = mlx5_core_create_qp(dev->mdev, &base->mqp, in, inlen);
}
if (err) {
mlx5_ib_dbg(dev, "create qp failed\n");
goto err_create;
}
kvfree(in);
base->container_mibqp = qp;
base->mqp.event = mlx5_ib_qp_event;
get_cqs(init_attr->qp_type, init_attr->send_cq, init_attr->recv_cq,
&send_cq, &recv_cq);
spin_lock_irqsave(&dev->reset_flow_resource_lock, flags);
mlx5_ib_lock_cqs(send_cq, recv_cq);
/* Maintain device to QPs access, needed for further handling via reset
* flow
*/
list_add_tail(&qp->qps_list, &dev->qp_list);
/* Maintain CQ to QPs access, needed for further handling via reset flow
*/
if (send_cq)
list_add_tail(&qp->cq_send_list, &send_cq->list_send_qp);
if (recv_cq)
list_add_tail(&qp->cq_recv_list, &recv_cq->list_recv_qp);
mlx5_ib_unlock_cqs(send_cq, recv_cq);
spin_unlock_irqrestore(&dev->reset_flow_resource_lock, flags);
return 0;
err_create:
if (qp->create_type == MLX5_QP_USER)
destroy_qp_user(dev, pd, qp, base);
else if (qp->create_type == MLX5_QP_KERNEL)
destroy_qp_kernel(dev, qp);
err:
kvfree(in);
return err;
}
| [[1610, "\tstruct mlx5_ib_create_qp_resp resp;\n"]] | [[1610, "struct mlx5_ib_create_qp_resp resp;"]] | [
"CVE-2018-20855"
] | [
"CWE-119"
] | 12 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": [
"struct mlx5_ib_create_qp_resp"
]
} |
12 | linux | https://github.com/torvalds/linux | drivers/infiniband/hw/mlx5/qp.c | 0625b4ba1a5d4703c7fb01c497bd6c156908af00 | IB/mlx5: Fix leaking stack memory to userspace
mlx5_ib_create_qp_resp was never initialized and only the first 4 bytes
were written.
Fixes: 41d902cb7c32 ("RDMA/mlx5: Fix definition of mlx5_ib_create_qp_resp")
Cc: <stable@vger.kernel.org>
Acked-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> | false | 0e311cfeb93cc5603699cbf680d9734f | create_qp_common | static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
struct ib_qp_init_attr *init_attr,
struct ib_udata *udata, struct mlx5_ib_qp *qp)
{
struct mlx5_ib_resources *devr = &dev->devr;
int inlen = MLX5_ST_SZ_BYTES(create_qp_in);
struct mlx5_core_dev *mdev = dev->mdev;
struct mlx5_ib_create_qp_resp resp = {};
struct mlx5_ib_cq *send_cq;
struct mlx5_ib_cq *recv_cq;
unsigned long flags;
u32 uidx = MLX5_IB_DEFAULT_UIDX;
struct mlx5_ib_create_qp ucmd;
struct mlx5_ib_qp_base *base;
int mlx5_st;
void *qpc;
u32 *in;
int err;
mutex_init(&qp->mutex);
spin_lock_init(&qp->sq.lock);
spin_lock_init(&qp->rq.lock);
mlx5_st = to_mlx5_st(init_attr->qp_type);
if (mlx5_st < 0)
return -EINVAL;
if (init_attr->rwq_ind_tbl) {
if (!udata)
return -ENOSYS;
err = create_rss_raw_qp_tir(dev, qp, pd, init_attr, udata);
return err;
}
if (init_attr->create_flags & IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK) {
if (!MLX5_CAP_GEN(mdev, block_lb_mc)) {
mlx5_ib_dbg(dev, "block multicast loopback isn't supported\n");
return -EINVAL;
} else {
qp->flags |= MLX5_IB_QP_BLOCK_MULTICAST_LOOPBACK;
}
}
if (init_attr->create_flags &
(IB_QP_CREATE_CROSS_CHANNEL |
IB_QP_CREATE_MANAGED_SEND |
IB_QP_CREATE_MANAGED_RECV)) {
if (!MLX5_CAP_GEN(mdev, cd)) {
mlx5_ib_dbg(dev, "cross-channel isn't supported\n");
return -EINVAL;
}
if (init_attr->create_flags & IB_QP_CREATE_CROSS_CHANNEL)
qp->flags |= MLX5_IB_QP_CROSS_CHANNEL;
if (init_attr->create_flags & IB_QP_CREATE_MANAGED_SEND)
qp->flags |= MLX5_IB_QP_MANAGED_SEND;
if (init_attr->create_flags & IB_QP_CREATE_MANAGED_RECV)
qp->flags |= MLX5_IB_QP_MANAGED_RECV;
}
if (init_attr->qp_type == IB_QPT_UD &&
(init_attr->create_flags & IB_QP_CREATE_IPOIB_UD_LSO))
if (!MLX5_CAP_GEN(mdev, ipoib_basic_offloads)) {
mlx5_ib_dbg(dev, "ipoib UD lso qp isn't supported\n");
return -EOPNOTSUPP;
}
if (init_attr->create_flags & IB_QP_CREATE_SCATTER_FCS) {
if (init_attr->qp_type != IB_QPT_RAW_PACKET) {
mlx5_ib_dbg(dev, "Scatter FCS is supported only for Raw Packet QPs");
return -EOPNOTSUPP;
}
if (!MLX5_CAP_GEN(dev->mdev, eth_net_offloads) ||
!MLX5_CAP_ETH(dev->mdev, scatter_fcs)) {
mlx5_ib_dbg(dev, "Scatter FCS isn't supported\n");
return -EOPNOTSUPP;
}
qp->flags |= MLX5_IB_QP_CAP_SCATTER_FCS;
}
if (init_attr->sq_sig_type == IB_SIGNAL_ALL_WR)
qp->sq_signal_bits = MLX5_WQE_CTRL_CQ_UPDATE;
if (init_attr->create_flags & IB_QP_CREATE_CVLAN_STRIPPING) {
if (!(MLX5_CAP_GEN(dev->mdev, eth_net_offloads) &&
MLX5_CAP_ETH(dev->mdev, vlan_cap)) ||
(init_attr->qp_type != IB_QPT_RAW_PACKET))
return -EOPNOTSUPP;
qp->flags |= MLX5_IB_QP_CVLAN_STRIPPING;
}
if (pd && pd->uobject) {
if (ib_copy_from_udata(&ucmd, udata, sizeof(ucmd))) {
mlx5_ib_dbg(dev, "copy failed\n");
return -EFAULT;
}
err = get_qp_user_index(to_mucontext(pd->uobject->context),
&ucmd, udata->inlen, &uidx);
if (err)
return err;
qp->wq_sig = !!(ucmd.flags & MLX5_QP_FLAG_SIGNATURE);
qp->scat_cqe = !!(ucmd.flags & MLX5_QP_FLAG_SCATTER_CQE);
if (ucmd.flags & MLX5_QP_FLAG_TUNNEL_OFFLOADS) {
if (init_attr->qp_type != IB_QPT_RAW_PACKET ||
!tunnel_offload_supported(mdev)) {
mlx5_ib_dbg(dev, "Tunnel offload isn't supported\n");
return -EOPNOTSUPP;
}
qp->tunnel_offload_en = true;
}
if (init_attr->create_flags & IB_QP_CREATE_SOURCE_QPN) {
if (init_attr->qp_type != IB_QPT_UD ||
(MLX5_CAP_GEN(dev->mdev, port_type) !=
MLX5_CAP_PORT_TYPE_IB) ||
!mlx5_get_flow_namespace(dev->mdev, MLX5_FLOW_NAMESPACE_BYPASS)) {
mlx5_ib_dbg(dev, "Source QP option isn't supported\n");
return -EOPNOTSUPP;
}
qp->flags |= MLX5_IB_QP_UNDERLAY;
qp->underlay_qpn = init_attr->source_qpn;
}
} else {
qp->wq_sig = !!wq_signature;
}
base = (init_attr->qp_type == IB_QPT_RAW_PACKET ||
qp->flags & MLX5_IB_QP_UNDERLAY) ?
&qp->raw_packet_qp.rq.base :
&qp->trans_qp.base;
qp->has_rq = qp_has_rq(init_attr);
err = set_rq_size(dev, &init_attr->cap, qp->has_rq,
qp, (pd && pd->uobject) ? &ucmd : NULL);
if (err) {
mlx5_ib_dbg(dev, "err %d\n", err);
return err;
}
if (pd) {
if (pd->uobject) {
__u32 max_wqes =
1 << MLX5_CAP_GEN(mdev, log_max_qp_sz);
mlx5_ib_dbg(dev, "requested sq_wqe_count (%d)\n", ucmd.sq_wqe_count);
if (ucmd.rq_wqe_shift != qp->rq.wqe_shift ||
ucmd.rq_wqe_count != qp->rq.wqe_cnt) {
mlx5_ib_dbg(dev, "invalid rq params\n");
return -EINVAL;
}
if (ucmd.sq_wqe_count > max_wqes) {
mlx5_ib_dbg(dev, "requested sq_wqe_count (%d) > max allowed (%d)\n",
ucmd.sq_wqe_count, max_wqes);
return -EINVAL;
}
if (init_attr->create_flags &
mlx5_ib_create_qp_sqpn_qp1()) {
mlx5_ib_dbg(dev, "user-space is not allowed to create UD QPs spoofing as QP1\n");
return -EINVAL;
}
err = create_user_qp(dev, pd, qp, udata, init_attr, &in,
&resp, &inlen, base);
if (err)
mlx5_ib_dbg(dev, "err %d\n", err);
} else {
err = create_kernel_qp(dev, init_attr, qp, &in, &inlen,
base);
if (err)
mlx5_ib_dbg(dev, "err %d\n", err);
}
if (err)
return err;
} else {
in = kvzalloc(inlen, GFP_KERNEL);
if (!in)
return -ENOMEM;
qp->create_type = MLX5_QP_EMPTY;
}
if (is_sqp(init_attr->qp_type))
qp->port = init_attr->port_num;
qpc = MLX5_ADDR_OF(create_qp_in, in, qpc);
MLX5_SET(qpc, qpc, st, mlx5_st);
MLX5_SET(qpc, qpc, pm_state, MLX5_QP_PM_MIGRATED);
if (init_attr->qp_type != MLX5_IB_QPT_REG_UMR)
MLX5_SET(qpc, qpc, pd, to_mpd(pd ? pd : devr->p0)->pdn);
else
MLX5_SET(qpc, qpc, latency_sensitive, 1);
if (qp->wq_sig)
MLX5_SET(qpc, qpc, wq_signature, 1);
if (qp->flags & MLX5_IB_QP_BLOCK_MULTICAST_LOOPBACK)
MLX5_SET(qpc, qpc, block_lb_mc, 1);
if (qp->flags & MLX5_IB_QP_CROSS_CHANNEL)
MLX5_SET(qpc, qpc, cd_master, 1);
if (qp->flags & MLX5_IB_QP_MANAGED_SEND)
MLX5_SET(qpc, qpc, cd_slave_send, 1);
if (qp->flags & MLX5_IB_QP_MANAGED_RECV)
MLX5_SET(qpc, qpc, cd_slave_receive, 1);
if (qp->scat_cqe && is_connected(init_attr->qp_type)) {
int rcqe_sz;
int scqe_sz;
rcqe_sz = mlx5_ib_get_cqe_size(dev, init_attr->recv_cq);
scqe_sz = mlx5_ib_get_cqe_size(dev, init_attr->send_cq);
if (rcqe_sz == 128)
MLX5_SET(qpc, qpc, cs_res, MLX5_RES_SCAT_DATA64_CQE);
else
MLX5_SET(qpc, qpc, cs_res, MLX5_RES_SCAT_DATA32_CQE);
if (init_attr->sq_sig_type == IB_SIGNAL_ALL_WR) {
if (scqe_sz == 128)
MLX5_SET(qpc, qpc, cs_req, MLX5_REQ_SCAT_DATA64_CQE);
else
MLX5_SET(qpc, qpc, cs_req, MLX5_REQ_SCAT_DATA32_CQE);
}
}
if (qp->rq.wqe_cnt) {
MLX5_SET(qpc, qpc, log_rq_stride, qp->rq.wqe_shift - 4);
MLX5_SET(qpc, qpc, log_rq_size, ilog2(qp->rq.wqe_cnt));
}
MLX5_SET(qpc, qpc, rq_type, get_rx_type(qp, init_attr));
if (qp->sq.wqe_cnt) {
MLX5_SET(qpc, qpc, log_sq_size, ilog2(qp->sq.wqe_cnt));
} else {
MLX5_SET(qpc, qpc, no_sq, 1);
if (init_attr->srq &&
init_attr->srq->srq_type == IB_SRQT_TM)
MLX5_SET(qpc, qpc, offload_type,
MLX5_QPC_OFFLOAD_TYPE_RNDV);
}
/* Set default resources */
switch (init_attr->qp_type) {
case IB_QPT_XRC_TGT:
MLX5_SET(qpc, qpc, cqn_rcv, to_mcq(devr->c0)->mcq.cqn);
MLX5_SET(qpc, qpc, cqn_snd, to_mcq(devr->c0)->mcq.cqn);
MLX5_SET(qpc, qpc, srqn_rmpn_xrqn, to_msrq(devr->s0)->msrq.srqn);
MLX5_SET(qpc, qpc, xrcd, to_mxrcd(init_attr->xrcd)->xrcdn);
break;
case IB_QPT_XRC_INI:
MLX5_SET(qpc, qpc, cqn_rcv, to_mcq(devr->c0)->mcq.cqn);
MLX5_SET(qpc, qpc, xrcd, to_mxrcd(devr->x1)->xrcdn);
MLX5_SET(qpc, qpc, srqn_rmpn_xrqn, to_msrq(devr->s0)->msrq.srqn);
break;
default:
if (init_attr->srq) {
MLX5_SET(qpc, qpc, xrcd, to_mxrcd(devr->x0)->xrcdn);
MLX5_SET(qpc, qpc, srqn_rmpn_xrqn, to_msrq(init_attr->srq)->msrq.srqn);
} else {
MLX5_SET(qpc, qpc, xrcd, to_mxrcd(devr->x1)->xrcdn);
MLX5_SET(qpc, qpc, srqn_rmpn_xrqn, to_msrq(devr->s1)->msrq.srqn);
}
}
if (init_attr->send_cq)
MLX5_SET(qpc, qpc, cqn_snd, to_mcq(init_attr->send_cq)->mcq.cqn);
if (init_attr->recv_cq)
MLX5_SET(qpc, qpc, cqn_rcv, to_mcq(init_attr->recv_cq)->mcq.cqn);
MLX5_SET64(qpc, qpc, dbr_addr, qp->db.dma);
/* 0xffffff means we ask to work with cqe version 0 */
if (MLX5_CAP_GEN(mdev, cqe_version) == MLX5_CQE_VERSION_V1)
MLX5_SET(qpc, qpc, user_index, uidx);
/* we use IB_QP_CREATE_IPOIB_UD_LSO to indicates ipoib qp */
if (init_attr->qp_type == IB_QPT_UD &&
(init_attr->create_flags & IB_QP_CREATE_IPOIB_UD_LSO)) {
MLX5_SET(qpc, qpc, ulp_stateless_offload_mode, 1);
qp->flags |= MLX5_IB_QP_LSO;
}
if (init_attr->create_flags & IB_QP_CREATE_PCI_WRITE_END_PADDING) {
if (!MLX5_CAP_GEN(dev->mdev, end_pad)) {
mlx5_ib_dbg(dev, "scatter end padding is not supported\n");
err = -EOPNOTSUPP;
goto err;
} else if (init_attr->qp_type != IB_QPT_RAW_PACKET) {
MLX5_SET(qpc, qpc, end_padding_mode,
MLX5_WQ_END_PAD_MODE_ALIGN);
} else {
qp->flags |= MLX5_IB_QP_PCI_WRITE_END_PADDING;
}
}
if (inlen < 0) {
err = -EINVAL;
goto err;
}
if (init_attr->qp_type == IB_QPT_RAW_PACKET ||
qp->flags & MLX5_IB_QP_UNDERLAY) {
qp->raw_packet_qp.sq.ubuffer.buf_addr = ucmd.sq_buf_addr;
raw_packet_qp_copy_info(qp, &qp->raw_packet_qp);
err = create_raw_packet_qp(dev, qp, in, inlen, pd);
} else {
err = mlx5_core_create_qp(dev->mdev, &base->mqp, in, inlen);
}
if (err) {
mlx5_ib_dbg(dev, "create qp failed\n");
goto err_create;
}
kvfree(in);
base->container_mibqp = qp;
base->mqp.event = mlx5_ib_qp_event;
get_cqs(init_attr->qp_type, init_attr->send_cq, init_attr->recv_cq,
&send_cq, &recv_cq);
spin_lock_irqsave(&dev->reset_flow_resource_lock, flags);
mlx5_ib_lock_cqs(send_cq, recv_cq);
/* Maintain device to QPs access, needed for further handling via reset
* flow
*/
list_add_tail(&qp->qps_list, &dev->qp_list);
/* Maintain CQ to QPs access, needed for further handling via reset flow
*/
if (send_cq)
list_add_tail(&qp->cq_send_list, &send_cq->list_send_qp);
if (recv_cq)
list_add_tail(&qp->cq_recv_list, &recv_cq->list_recv_qp);
mlx5_ib_unlock_cqs(send_cq, recv_cq);
spin_unlock_irqrestore(&dev->reset_flow_resource_lock, flags);
return 0;
err_create:
if (qp->create_type == MLX5_QP_USER)
destroy_qp_user(dev, pd, qp, base);
else if (qp->create_type == MLX5_QP_KERNEL)
destroy_qp_kernel(dev, qp);
err:
kvfree(in);
return err;
}
| [[1610, "\tstruct mlx5_ib_create_qp_resp resp = {};\n"]] | [[1610, "struct mlx5_ib_create_qp_resp resp = {};"]] | [
"CVE-2018-20855"
] | [
"CWE-119"
] | 12 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": [
"struct mlx5_ib_create_qp_resp"
]
} |
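Both mlx5 variants above differ only in the initializer on the response struct. A small userspace sketch of why that one token matters, with a hypothetical layout standing in for mlx5_ib_create_qp_resp:

#include <stdio.h>
#include <stdint.h>

struct qp_resp {
	uint32_t bfreg_index;	/* the only field the code writes */
	uint8_t reserved[12];	/* stays stale without an initializer */
};

static void fill(struct qp_resp *r) { r->bfreg_index = 7; }

int main(void)
{
	struct qp_resp leaky;		/* reserved[] holds old stack bytes */
	struct qp_resp clean = {0};	/* "= {}" in the kernel fix; {0} here */

	fill(&leaky);
	fill(&clean);

	/* Copying sizeof(leaky) bytes to an untrusted destination (the
	 * kernel's copy_to_user of the whole response) would disclose
	 * whatever stack contents sit in leaky.reserved[]. */
	printf("struct is %zu bytes, only %zu were written\n",
	       sizeof(leaky), sizeof(leaky.bfreg_index));
	return 0;
}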
13 | linux | https://github.com/torvalds/linux | fs/cifs/smbencrypt.c | 06deeec77a5a689cc94b21a8a91a76e42176685d | cifs: Fix smbencrypt() to stop pointing a scatterlist at the stack
smbencrypt() points a scatterlist to the stack, which breaks if
CONFIG_VMAP_STACK=y.
Fix it by switching to crypto_cipher_encrypt_one(). The new code
should be considerably faster as an added benefit.
This code is nearly identical to some code that Eric Biggers
suggested.
Cc: stable@vger.kernel.org # 4.9 only
Reported-by: Eric Biggers <ebiggers3@gmail.com>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Acked-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <smfrench@gmail.com> | true | a89228b9e70e6c1f4dc50bcf2932a2f1 | smbhash | static int
smbhash(unsigned char *out, const unsigned char *in, unsigned char *key)
{
int rc;
unsigned char key2[8];
struct crypto_skcipher *tfm_des;
struct scatterlist sgin, sgout;
struct skcipher_request *req;
str_to_key(key, key2);
tfm_des = crypto_alloc_skcipher("ecb(des)", 0, CRYPTO_ALG_ASYNC);
if (IS_ERR(tfm_des)) {
rc = PTR_ERR(tfm_des);
cifs_dbg(VFS, "could not allocate des crypto API\n");
goto smbhash_err;
}
req = skcipher_request_alloc(tfm_des, GFP_KERNEL);
if (!req) {
rc = -ENOMEM;
cifs_dbg(VFS, "could not allocate des crypto API\n");
goto smbhash_free_skcipher;
}
crypto_skcipher_setkey(tfm_des, key2, 8);
sg_init_one(&sgin, in, 8);
sg_init_one(&sgout, out, 8);
skcipher_request_set_callback(req, 0, NULL, NULL);
skcipher_request_set_crypt(req, &sgin, &sgout, 8, NULL);
rc = crypto_skcipher_encrypt(req);
if (rc)
cifs_dbg(VFS, "could not encrypt crypt key rc: %d\n", rc);
skcipher_request_free(req);
smbhash_free_skcipher:
crypto_free_skcipher(tfm_des);
smbhash_err:
return rc;
}
| [[72, "\tint rc;\n"], [74, "\tstruct crypto_skcipher *tfm_des;\n"], [75, "\tstruct scatterlist sgin, sgout;\n"], [76, "\tstruct skcipher_request *req;\n"], [80, "\ttfm_des = crypto_alloc_skcipher(\"ecb(des)\", 0, CRYPTO_ALG_ASYNC);\n"], [82, "\t\trc = PTR_ERR(tfm_des);\n"], [83, "\t\tcifs_dbg(VFS, \"could not allocate des crypto API\\n\");\n"], [84, "\t\tgoto smbhash_err;\n"], [85, "\t}\n"], [86, "\n"], [87, "\treq = skcipher_request_alloc(tfm_des, GFP_KERNEL);\n"], [88, "\tif (!req) {\n"], [89, "\t\trc = -ENOMEM;\n"], [91, "\t\tgoto smbhash_free_skcipher;\n"], [94, "\tcrypto_skcipher_setkey(tfm_des, key2, 8);\n"], [95, "\n"], [96, "\tsg_init_one(&sgin, in, 8);\n"], [97, "\tsg_init_one(&sgout, out, 8);\n"], [99, "\tskcipher_request_set_callback(req, 0, NULL, NULL);\n"], [100, "\tskcipher_request_set_crypt(req, &sgin, &sgout, 8, NULL);\n"], [101, "\n"], [102, "\trc = crypto_skcipher_encrypt(req);\n"], [103, "\tif (rc)\n"], [104, "\t\tcifs_dbg(VFS, \"could not encrypt crypt key rc: %d\\n\", rc);\n"], [105, "\n"], [106, "\tskcipher_request_free(req);\n"], [107, "\n"], [108, "smbhash_free_skcipher:\n"], [109, "\tcrypto_free_skcipher(tfm_des);\n"], [110, "smbhash_err:\n"], [111, "\treturn rc;\n"]] | [[72, "int rc;"], [74, "struct crypto_skcipher *tfm_des;"], [75, "struct scatterlist sgin, sgout;"], [76, "struct skcipher_request *req;"], [80, "tfm_des = crypto_alloc_skcipher(\"ecb(des)\", 0, CRYPTO_ALG_ASYNC);"], [82, "rc = PTR_ERR(tfm_des);"], [83, "cifs_dbg(VFS, \"could not allocate des crypto API\\n\");"], [84, "goto smbhash_err;"], [85, "\t}\n"], [86, "\n"], [87, "req = skcipher_request_alloc(tfm_des, GFP_KERNEL);"], [88, "if (!req)"], [89, "rc = -ENOMEM;"], [91, "goto smbhash_free_skcipher;"], [94, "crypto_skcipher_setkey(tfm_des, key2, 8);"], [95, "\n"], [96, "sg_init_one(&sgin, in, 8);"], [97, "sg_init_one(&sgout, out, 8);"], [99, "skcipher_request_set_callback(req, 0, NULL, NULL);"], [100, "skcipher_request_set_crypt(req, &sgin, &sgout, 8, NULL);"], [101, "\n"], [102, "rc = crypto_skcipher_encrypt(req);"], [103, "if (rc)"], [104, "cifs_dbg(VFS, \"could not encrypt crypt key rc: %d\\n\", rc);"], [105, "\n"], [106, "skcipher_request_free(req);"], [107, "\n"], [108, "smbhash_free_skcipher:"], [109, "crypto_free_skcipher(tfm_des);"], [110, "smbhash_err:"], [111, "return rc;"]] | [
"CVE-2016-10154"
] | [
"CWE-119"
] | 14 | {
"Execution Environment": [
"CONFIG_VMAP_STACK"
],
"Explanation": null,
"External Function": [
"sg_init_one"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
14 | linux | https://github.com/torvalds/linux | fs/cifs/smbencrypt.c | 06deeec77a5a689cc94b21a8a91a76e42176685d | cifs: Fix smbencrypt() to stop pointing a scatterlist at the stack
smbencrypt() points a scatterlist to the stack, which breaks if
CONFIG_VMAP_STACK=y.
Fix it by switching to crypto_cipher_encrypt_one(). The new code
should be considerably faster as an added benefit.
This code is nearly identical to some code that Eric Biggers
suggested.
Cc: stable@vger.kernel.org # 4.9 only
Reported-by: Eric Biggers <ebiggers3@gmail.com>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Acked-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <smfrench@gmail.com> | false | e8cc2fd6bec0fed306d2adce2c782f6f | smbhash | static int
smbhash(unsigned char *out, const unsigned char *in, unsigned char *key)
{
unsigned char key2[8];
struct crypto_cipher *tfm_des;
str_to_key(key, key2);
tfm_des = crypto_alloc_cipher("des", 0, 0);
if (IS_ERR(tfm_des)) {
cifs_dbg(VFS, "could not allocate des crypto API\n");
return PTR_ERR(tfm_des);
}
crypto_cipher_setkey(tfm_des, key2, 8);
crypto_cipher_encrypt_one(tfm_des, out, in);
crypto_free_cipher(tfm_des);
return 0;
}
| [[73, "\tstruct crypto_cipher *tfm_des;\n"], [77, "\ttfm_des = crypto_alloc_cipher(\"des\", 0, 0);\n"], [80, "\t\treturn PTR_ERR(tfm_des);\n"], [83, "\tcrypto_cipher_setkey(tfm_des, key2, 8);\n"], [84, "\tcrypto_cipher_encrypt_one(tfm_des, out, in);\n"], [85, "\tcrypto_free_cipher(tfm_des);\n"], [87, "\treturn 0;\n"]] | [[73, "struct crypto_cipher *tfm_des;"], [77, "tfm_des = crypto_alloc_cipher(\"des\", 0, 0);"], [80, "return PTR_ERR(tfm_des);"], [83, "crypto_cipher_setkey(tfm_des, key2, 8);"], [84, "crypto_cipher_encrypt_one(tfm_des, out, in);"], [85, "crypto_free_cipher(tfm_des);"], [87, "return 0;"]] | [
"CVE-2016-10154"
] | [
"CWE-119"
] | 14 | {
"Execution Environment": [
"CONFIG_VMAP_STACK"
],
"Explanation": null,
"External Function": [
"sg_init_one"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
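The smbhash pair above drops the skcipher/scatterlist path for crypto_cipher_encrypt_one(), which takes plain pointers. The underlying rule: sg_init_one() resolves its buffer through virt_to_page(), which is meaningless for a vmalloc-backed (CONFIG_VMAP_STACK) stack. A hedged, kernel-style sketch of a defensive wrapper; the helper below is hypothetical, not in-tree:

#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/sched/task_stack.h>
#include <linux/scatterlist.h>

/* Hypothetical guard: refuse buffers that have no single backing page. */
static int sg_init_one_checked(struct scatterlist *sg,
			       const void *buf, unsigned int buflen)
{
	if (is_vmalloc_addr(buf) || object_is_on_stack(buf))
		return -EINVAL;	/* stack/vmalloc memory must be bounced to the heap first */
	sg_init_one(sg, buf, buflen);
	return 0;
}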
15 | linux | https://github.com/torvalds/linux | drivers/net/mlx4/port.c | 0926f91083f34d047abc74f1ca4fa6a9c161f7db | mlx4_en: Fix out of bounds array access
When searching for a free entry in either mlx4_register_vlan() or
mlx4_register_mac(), and there is no free entry, the loop terminates without
updating the local variable free, thus causing an out-of-bounds array access. Fix
this by adding a proper check outside the loop.
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net> | true | 0bb0fd17cb0d592e6c0241114adba3b5 | mlx4_register_mac | int mlx4_register_mac(struct mlx4_dev *dev, u8 port, u64 mac, int *index)
{
struct mlx4_mac_table *table = &mlx4_priv(dev)->port[port].mac_table;
int i, err = 0;
int free = -1;
mlx4_dbg(dev, "Registering MAC: 0x%llx\n", (unsigned long long) mac);
mutex_lock(&table->mutex);
for (i = 0; i < MLX4_MAX_MAC_NUM - 1; i++) {
if (free < 0 && !table->refs[i]) {
free = i;
continue;
}
if (mac == (MLX4_MAC_MASK & be64_to_cpu(table->entries[i]))) {
/* MAC already registered, increase refernce count */
*index = i;
++table->refs[i];
goto out;
}
}
mlx4_dbg(dev, "Free MAC index is %d\n", free);
if (table->total == table->max) {
/* No free mac entries */
err = -ENOSPC;
goto out;
}
/* Register new MAC */
table->refs[free] = 1;
table->entries[free] = cpu_to_be64(mac | MLX4_MAC_VALID);
err = mlx4_set_port_mac_table(dev, port, table->entries);
if (unlikely(err)) {
mlx4_err(dev, "Failed adding MAC: 0x%llx\n", (unsigned long long) mac);
table->refs[free] = 0;
table->entries[free] = 0;
goto out;
}
*index = free;
++table->total;
out:
mutex_unlock(&table->mutex);
return err;
}
| [] | [] | [
"CVE-2010-5332"
] | [
"CWE-119"
] | 17 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"mlx4_priv"
],
"Function Argument": [
"dev",
"port"
],
"Globals": [
"MLX4_MAX_MAC_NUM"
],
"Type Execution Declaration": []
} |
16 | linux | https://github.com/torvalds/linux | drivers/net/mlx4/port.c | 0926f91083f34d047abc74f1ca4fa6a9c161f7db | mlx4_en: Fix out of bounds array access
When searching for a free entry in either mlx4_register_vlan() or
mlx4_register_mac(), and there is no free entry, the loop terminates without
updating the local variable free, thus causing an out-of-bounds array access. Fix
this by adding a proper check outside the loop.
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net> | true | b34d08283adbc51bc2a8ac9fef1d47d8 | mlx4_register_vlan | int mlx4_register_vlan(struct mlx4_dev *dev, u8 port, u16 vlan, int *index)
{
struct mlx4_vlan_table *table = &mlx4_priv(dev)->port[port].vlan_table;
int i, err = 0;
int free = -1;
mutex_lock(&table->mutex);
for (i = MLX4_VLAN_REGULAR; i < MLX4_MAX_VLAN_NUM; i++) {
if (free < 0 && (table->refs[i] == 0)) {
free = i;
continue;
}
if (table->refs[i] &&
(vlan == (MLX4_VLAN_MASK &
be32_to_cpu(table->entries[i])))) {
/* Vlan already registered, increase refernce count */
*index = i;
++table->refs[i];
goto out;
}
}
if (table->total == table->max) {
/* No free vlan entries */
err = -ENOSPC;
goto out;
}
/* Register new MAC */
table->refs[free] = 1;
table->entries[free] = cpu_to_be32(vlan | MLX4_VLAN_VALID);
err = mlx4_set_port_vlan_table(dev, port, table->entries);
if (unlikely(err)) {
mlx4_warn(dev, "Failed adding vlan: %u\n", vlan);
table->refs[free] = 0;
table->entries[free] = 0;
goto out;
}
*index = free;
++table->total;
out:
mutex_unlock(&table->mutex);
return err;
}
| [] | [] | [
"CVE-2010-5332"
] | [
"CWE-119"
] | 18 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"mlx4_priv",
"be32_to_cpu",
"cpu_to_be32",
"mlx4_set_port_vlan_table",
"mlx4_warn"
],
"Function Argument": [
"dev",
"port",
"vlan",
"index"
],
"Globals": [
"MLX4_VLAN_REGULAR",
"MLX4_MAX_VLAN_NUM",
"MLX4_VLAN_MASK",
"MLX4_VLAN_VALID",
"ENOSPC",
"ENOMEM"
],
"Type Execution Declaration": []
} |
17 | linux | https://github.com/torvalds/linux | drivers/net/mlx4/port.c | 0926f91083f34d047abc74f1ca4fa6a9c161f7db | mlx4_en: Fix out of bounds array access
When searching for a free entry in either mlx4_register_vlan() or
mlx4_register_mac(), and there is no free entry, the loop terminates without
updating the local variable free, thus causing an out-of-bounds array access. Fix
this by adding a proper check outside the loop.
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net> | false | 5e384db2b5bcf28504943915f4e0feff | mlx4_register_mac | int mlx4_register_mac(struct mlx4_dev *dev, u8 port, u64 mac, int *index)
{
struct mlx4_mac_table *table = &mlx4_priv(dev)->port[port].mac_table;
int i, err = 0;
int free = -1;
mlx4_dbg(dev, "Registering MAC: 0x%llx\n", (unsigned long long) mac);
mutex_lock(&table->mutex);
for (i = 0; i < MLX4_MAX_MAC_NUM - 1; i++) {
if (free < 0 && !table->refs[i]) {
free = i;
continue;
}
if (mac == (MLX4_MAC_MASK & be64_to_cpu(table->entries[i]))) {
/* MAC already registered, increase refernce count */
*index = i;
++table->refs[i];
goto out;
}
}
if (free < 0) {
err = -ENOMEM;
goto out;
}
mlx4_dbg(dev, "Free MAC index is %d\n", free);
if (table->total == table->max) {
/* No free mac entries */
err = -ENOSPC;
goto out;
}
/* Register new MAC */
table->refs[free] = 1;
table->entries[free] = cpu_to_be64(mac | MLX4_MAC_VALID);
err = mlx4_set_port_mac_table(dev, port, table->entries);
if (unlikely(err)) {
mlx4_err(dev, "Failed adding MAC: 0x%llx\n", (unsigned long long) mac);
table->refs[free] = 0;
table->entries[free] = 0;
goto out;
}
*index = free;
++table->total;
out:
mutex_unlock(&table->mutex);
return err;
}
| [[114, "\n"], [115, "\tif (free < 0) {\n"], [116, "\t\terr = -ENOMEM;\n"], [117, "\t\tgoto out;\n"], [118, "\t}\n"], [119, "\n"]] | [[114, "\n"], [115, "if (free < 0)"], [116, "err = -ENOMEM;"], [117, "goto out;"], [118, "\t}\n"], [119, "\n"]] | [
"CVE-2010-5332"
] | [
"CWE-119"
] | 17 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"mlx4_priv"
],
"Function Argument": [
"dev",
"port"
],
"Globals": [
"MLX4_MAX_MAC_NUM"
],
"Type Execution Declaration": []
} |
18 | linux | https://github.com/torvalds/linux | drivers/net/mlx4/port.c | 0926f91083f34d047abc74f1ca4fa6a9c161f7db | mlx4_en: Fix out of bounds array access
When searching for a free entry in either mlx4_register_vlan() or
mlx4_register_mac(), and there is no free entry, the loop terminates without
updating the local variable free, thus causing an out-of-bounds array access. Fix
this by adding a proper check outside the loop.
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net> | false | d2d908a7f0431e5675baba44caaafa0c | mlx4_register_vlan | int mlx4_register_vlan(struct mlx4_dev *dev, u8 port, u16 vlan, int *index)
{
struct mlx4_vlan_table *table = &mlx4_priv(dev)->port[port].vlan_table;
int i, err = 0;
int free = -1;
mutex_lock(&table->mutex);
for (i = MLX4_VLAN_REGULAR; i < MLX4_MAX_VLAN_NUM; i++) {
if (free < 0 && (table->refs[i] == 0)) {
free = i;
continue;
}
if (table->refs[i] &&
(vlan == (MLX4_VLAN_MASK &
be32_to_cpu(table->entries[i])))) {
/* Vlan already registered, increase refernce count */
*index = i;
++table->refs[i];
goto out;
}
}
if (free < 0) {
err = -ENOMEM;
goto out;
}
if (table->total == table->max) {
/* No free vlan entries */
err = -ENOSPC;
goto out;
}
/* Register new MAC */
table->refs[free] = 1;
table->entries[free] = cpu_to_be32(vlan | MLX4_VLAN_VALID);
err = mlx4_set_port_vlan_table(dev, port, table->entries);
if (unlikely(err)) {
mlx4_warn(dev, "Failed adding vlan: %u\n", vlan);
table->refs[free] = 0;
table->entries[free] = 0;
goto out;
}
*index = free;
++table->total;
out:
mutex_unlock(&table->mutex);
return err;
}
| [[214, "\tif (free < 0) {\n"], [215, "\t\terr = -ENOMEM;\n"], [216, "\t\tgoto out;\n"], [217, "\t}\n"], [218, "\n"]] | [[214, "if (free < 0)"], [215, "err = -ENOMEM;"], [216, "goto out;"], [217, "\t}\n"], [218, "\n"]] | [
"CVE-2010-5332"
] | [
"CWE-119"
] | 18 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"mlx4_priv",
"be32_to_cpu",
"cpu_to_be32",
"mlx4_set_port_vlan_table",
"mlx4_warn"
],
"Function Argument": [
"dev",
"port",
"vlan",
"index"
],
"Globals": [
"MLX4_VLAN_REGULAR",
"MLX4_MAX_VLAN_NUM",
"MLX4_VLAN_MASK",
"MLX4_VLAN_VALID",
"ENOSPC",
"ENOMEM"
],
"Type Execution Declaration": []
} |
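All four mlx4 records above share one defect: when every table slot is occupied, the scan loop ends with free still -1, and table->refs[free] writes one element before the array. A runnable distillation of the pattern and of the guard the fix adds:

#include <stdio.h>

#define TABLE_SIZE 8

static int refs[TABLE_SIZE] = {1, 1, 1, 1, 1, 1, 1, 1};	/* fully occupied */

static int register_entry(void)
{
	int free_slot = -1;

	for (int i = 0; i < TABLE_SIZE; i++)
		if (free_slot < 0 && refs[i] == 0)
			free_slot = i;

	if (free_slot < 0)	/* the check the fix adds (-ENOMEM upstream) */
		return -1;

	refs[free_slot]++;	/* without the check: refs[-1]++, an OOB write */
	return free_slot;
}

int main(void)
{
	printf("register -> %d\n", register_entry());	/* prints -1, no OOB */
	return 0;
}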
19 | linux | https://github.com/torvalds/linux | drivers/nvme/target/fc.c | 0c319d3a144d4b8f1ea2047fd614d2149b68f889 | nvmet-fc: ensure target queue id within range.
When searching for queue IDs, ensure they are within the expected range.
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk> | true | c05ae774ee57b0ad442f8c36f38916df | nvmet_fc_find_target_queue | static struct nvmet_fc_tgt_queue *
nvmet_fc_find_target_queue(struct nvmet_fc_tgtport *tgtport,
u64 connection_id)
{
struct nvmet_fc_tgt_assoc *assoc;
struct nvmet_fc_tgt_queue *queue;
u64 association_id = nvmet_fc_getassociationid(connection_id);
u16 qid = nvmet_fc_getqueueid(connection_id);
unsigned long flags;
spin_lock_irqsave(&tgtport->lock, flags);
list_for_each_entry(assoc, &tgtport->assoc_list, a_list) {
if (association_id == assoc->association_id) {
queue = assoc->queues[qid];
if (queue &&
(!atomic_read(&queue->connected) ||
!nvmet_fc_tgt_q_get(queue)))
queue = NULL;
spin_unlock_irqrestore(&tgtport->lock, flags);
return queue;
}
}
spin_unlock_irqrestore(&tgtport->lock, flags);
return NULL;
}
| [] | [] | [
"CVE-2017-18379"
] | [
"CWE-119"
] | 20 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"nvmet_fc_getqueueid"
],
"Function Argument": [],
"Globals": [
"NVMET_NR_QUEUES"
],
"Type Execution Declaration": [
"struct nvmet_fc_tgt_assoc",
"struct nvmet_fc_tgt_queue"
]
} |
20 | linux | https://github.com/torvalds/linux | drivers/nvme/target/fc.c | 0c319d3a144d4b8f1ea2047fd614d2149b68f889 | nvmet-fc: ensure target queue id within range.
When searching for queue IDs, ensure they are within the expected range.
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk> | false | ae5fb548fe28f4fb4cdb0151e82cf8e2 | nvmet_fc_find_target_queue | static struct nvmet_fc_tgt_queue *
nvmet_fc_find_target_queue(struct nvmet_fc_tgtport *tgtport,
u64 connection_id)
{
struct nvmet_fc_tgt_assoc *assoc;
struct nvmet_fc_tgt_queue *queue;
u64 association_id = nvmet_fc_getassociationid(connection_id);
u16 qid = nvmet_fc_getqueueid(connection_id);
unsigned long flags;
if (qid > NVMET_NR_QUEUES)
return NULL;
spin_lock_irqsave(&tgtport->lock, flags);
list_for_each_entry(assoc, &tgtport->assoc_list, a_list) {
if (association_id == assoc->association_id) {
queue = assoc->queues[qid];
if (queue &&
(!atomic_read(&queue->connected) ||
!nvmet_fc_tgt_q_get(queue)))
queue = NULL;
spin_unlock_irqrestore(&tgtport->lock, flags);
return queue;
}
}
spin_unlock_irqrestore(&tgtport->lock, flags);
return NULL;
}
| [[786, "\tif (qid > NVMET_NR_QUEUES)\n"], [787, "\t\treturn NULL;\n"], [788, "\n"]] | [[786, "if (qid > NVMET_NR_QUEUES)"], [787, "return NULL;"], [788, "\n"]] | [
"CVE-2017-18379"
] | [
"CWE-119"
] | 20 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"nvmet_fc_getqueueid"
],
"Function Argument": [],
"Globals": [
"NVMET_NR_QUEUES"
],
"Type Execution Declaration": [
"struct nvmet_fc_tgt_assoc",
"struct nvmet_fc_tgt_queue"
]
} |
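In the nvmet-fc pair above, qid is unpacked from a wire-supplied 64-bit connection_id and used directly as an index into assoc->queues[]. The in-tree guard is qid > NVMET_NR_QUEUES because that array carries one extra admin slot; the sketch below assumes a plain zero-based array, so it uses >=, and its field widths are illustrative rather than the nvmet-fc encoding:

#include <stdio.h>
#include <stdint.h>

#define NR_QUEUES 128

static struct { int connected; } queues[NR_QUEUES];

/* Hypothetical unpacking of the low bits of an untrusted handle. */
static uint16_t get_qid(uint64_t connection_id)
{
	return (uint16_t)(connection_id & 0xffff);
}

static int find_queue(uint64_t connection_id)
{
	uint16_t qid = get_qid(connection_id);

	if (qid >= NR_QUEUES)	/* validate before any array access */
		return -1;
	return queues[qid].connected;
}

int main(void)
{
	printf("%d\n", find_queue(0xdeadbeefULL));	/* qid 0xbeef rejected */
	return 0;
}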
21 | linux | https://github.com/torvalds/linux | fs/ioctl.c | 10eec60ce79187686e052092e5383c99b4420a20 | vfs: ioctl: prevent double-fetch in dedupe ioctl
This prevents a double-fetch from user space that can lead to an
undersized allocation and heap overflow.
Fixes: 54dbc1517237 ("vfs: hoist the btrfs deduplication ioctl to the vfs")
Signed-off-by: Scott Bauer <sbauer@plzdonthack.me>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> | true | 88ed0b07f2f93f508fcb10ea0454b747 | ioctl_file_dedupe_range | static long ioctl_file_dedupe_range(struct file *file, void __user *arg)
{
struct file_dedupe_range __user *argp = arg;
struct file_dedupe_range *same = NULL;
int ret;
unsigned long size;
u16 count;
if (get_user(count, &argp->dest_count)) {
ret = -EFAULT;
goto out;
}
size = offsetof(struct file_dedupe_range __user, info[count]);
same = memdup_user(argp, size);
if (IS_ERR(same)) {
ret = PTR_ERR(same);
same = NULL;
goto out;
}
ret = vfs_dedupe_file_range(file, same);
if (ret)
goto out;
ret = copy_to_user(argp, same, size);
if (ret)
ret = -EFAULT;
out:
kfree(same);
return ret;
}
| [] | [] | [
"CVE-2016-6516"
] | [
"CWE-362",
"CWE-119"
] | 22 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"get_user",
"memdup_user",
"copy_to_user"
],
"Function Argument": [
"arg"
],
"Globals": [],
"Type Execution Declaration": [
"struct file_dedupe_range __user",
"struct file_dedupe_range"
]
} |
22 | linux | https://github.com/torvalds/linux | fs/ioctl.c | 10eec60ce79187686e052092e5383c99b4420a20 | vfs: ioctl: prevent double-fetch in dedupe ioctl
This prevents a double-fetch from user space that can lead to an
undersized allocation and heap overflow.
Fixes: 54dbc1517237 ("vfs: hoist the btrfs deduplication ioctl to the vfs")
Signed-off-by: Scott Bauer <sbauer@plzdonthack.me>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> | false | ad737bed6069921506d9ce434712bad7 | ioctl_file_dedupe_range | static long ioctl_file_dedupe_range(struct file *file, void __user *arg)
{
struct file_dedupe_range __user *argp = arg;
struct file_dedupe_range *same = NULL;
int ret;
unsigned long size;
u16 count;
if (get_user(count, &argp->dest_count)) {
ret = -EFAULT;
goto out;
}
size = offsetof(struct file_dedupe_range __user, info[count]);
same = memdup_user(argp, size);
if (IS_ERR(same)) {
ret = PTR_ERR(same);
same = NULL;
goto out;
}
same->dest_count = count;
ret = vfs_dedupe_file_range(file, same);
if (ret)
goto out;
ret = copy_to_user(argp, same, size);
if (ret)
ret = -EFAULT;
out:
kfree(same);
return ret;
}
| [[593, "\tsame->dest_count = count;\n"]] | [[593, "same->dest_count = count;"]] | [
"CVE-2016-6516"
] | [
"CWE-362",
"CWE-119"
] | 22 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"get_user",
"memdup_user",
"copy_to_user"
],
"Function Argument": [
"arg"
],
"Globals": [],
"Type Execution Declaration": [
"struct file_dedupe_range __user",
"struct file_dedupe_range"
]
} |
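The dedupe pair above is a textbook double fetch: dest_count is read once with get_user() to size the allocation, then implicitly again when memdup_user() copies the whole header, and a concurrent thread can grow it between the two reads. The one-line fix re-pins the first value. A kernel-style sketch of the defense, with stand-in types rather than the real file_dedupe_range layout:

struct range_hdr {
	u16 dest_count;
	u16 pad;
	u32 info[];		/* dest_count variable-length elements */
};

static long dedupe_sketch(struct range_hdr __user *argp)
{
	struct range_hdr *same;
	size_t size;
	u16 count;

	if (get_user(count, &argp->dest_count))		/* first fetch */
		return -EFAULT;
	size = sizeof(*same) + (size_t)count * sizeof(same->info[0]);
	same = memdup_user(argp, size);		/* second fetch of dest_count */
	if (IS_ERR(same))
		return PTR_ERR(same);
	same->dest_count = count;	/* trust only the value fetched first */
	/* ... every later bound derives from same->dest_count ... */
	kfree(same);
	return 0;
}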
23 | linux | https://github.com/torvalds/linux | drivers/net/ethernet/sunplus/spl2sw_driver.c | 12aece8b01507a2d357a1861f470e83621fbb6f2 | eth: sp7021: fix use after free bug in spl2sw_nvmem_get_mac_address
This frees "mac" and tries to display its address as part of the error
message on the next line. Swap the order.
Fixes: fd3040b9394c ("net: ethernet: Add driver for Sunplus SP7021")
Signed-off-by: Zheng Wang <zyytlz.wz@163.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | true | edf98bb67e48bc2765e89283212bb2a6 | spl2sw_nvmem_get_mac_address | static int spl2sw_nvmem_get_mac_address(struct device *dev, struct device_node *np,
void *addrbuf)
{
struct nvmem_cell *cell;
ssize_t len;
u8 *mac;
/* Get nvmem cell of mac-address from dts. */
cell = of_nvmem_cell_get(np, "mac-address");
if (IS_ERR(cell))
return PTR_ERR(cell);
/* Read mac address from nvmem cell. */
mac = nvmem_cell_read(cell, &len);
nvmem_cell_put(cell);
if (IS_ERR(mac))
return PTR_ERR(mac);
if (len != ETH_ALEN) {
kfree(mac);
dev_info(dev, "Invalid length of mac address in nvmem!\n");
return -EINVAL;
}
/* Byte order of some samples are reversed.
* Convert byte order here.
*/
spl2sw_check_mac_vendor_id_and_convert(mac);
/* Check if mac address is valid */
if (!is_valid_ether_addr(mac)) {
kfree(mac);
dev_info(dev, "Invalid mac address in nvmem (%pM)!\n", mac);
return -EINVAL;
}
ether_addr_copy(addrbuf, mac);
kfree(mac);
return 0;
}
| [[251, "\t\tkfree(mac);\n"]] | [[251, "kfree(mac);"]] | [
"CVE-2022-3541"
] | [
"CWE-119"
] | 24 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"of_nvmem_cell_get",
"nvmem_cell_read",
"nvmem_cell_put",
"is_valid_ether_addr",
"dev_info"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
24 | linux | https://github.com/torvalds/linux | drivers/net/ethernet/sunplus/spl2sw_driver.c | 12aece8b01507a2d357a1861f470e83621fbb6f2 | eth: sp7021: fix use after free bug in spl2sw_nvmem_get_mac_address
This frees "mac" and tries to display its address as part of the error
message on the next line. Swap the order.
Fixes: fd3040b9394c ("net: ethernet: Add driver for Sunplus SP7021")
Signed-off-by: Zheng Wang <zyytlz.wz@163.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | false | b9422cbb5d31e8e69bf752bd7a6c6ed5 | spl2sw_nvmem_get_mac_address | static int spl2sw_nvmem_get_mac_address(struct device *dev, struct device_node *np,
void *addrbuf)
{
struct nvmem_cell *cell;
ssize_t len;
u8 *mac;
/* Get nvmem cell of mac-address from dts. */
cell = of_nvmem_cell_get(np, "mac-address");
if (IS_ERR(cell))
return PTR_ERR(cell);
/* Read mac address from nvmem cell. */
mac = nvmem_cell_read(cell, &len);
nvmem_cell_put(cell);
if (IS_ERR(mac))
return PTR_ERR(mac);
if (len != ETH_ALEN) {
kfree(mac);
dev_info(dev, "Invalid length of mac address in nvmem!\n");
return -EINVAL;
}
/* Byte order of some samples are reversed.
* Convert byte order here.
*/
spl2sw_check_mac_vendor_id_and_convert(mac);
/* Check if mac address is valid */
if (!is_valid_ether_addr(mac)) {
dev_info(dev, "Invalid mac address in nvmem (%pM)!\n", mac);
kfree(mac);
return -EINVAL;
}
ether_addr_copy(addrbuf, mac);
kfree(mac);
return 0;
}
| [[252, "\t\tkfree(mac);\n"]] | [[252, "kfree(mac);"]] | [
"CVE-2022-3541"
] | [
"CWE-119"
] | 24 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"of_nvmem_cell_get",
"nvmem_cell_read",
"nvmem_cell_put",
"is_valid_ether_addr",
"dev_info"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
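The spl2sw pair above is a pure statement-ordering bug: the buffer is freed and then passed to dev_info() via %pM on the next line. A runnable userspace reduction, where the whole fix is swapping the two statements:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int validate_mac(const unsigned char *src)
{
	unsigned char *mac = malloc(6);

	if (!mac)
		return -1;
	memcpy(mac, src, 6);

	if (mac[0] & 1) {	/* multicast bit set: not a valid unicast MAC */
		fprintf(stderr, "invalid mac %02x:%02x:%02x:...\n",
			mac[0], mac[1], mac[2]);	/* log while still live */
		free(mac);	/* free strictly after the last use */
		return -1;
	}
	free(mac);
	return 0;
}

int main(void)
{
	const unsigned char bad[6] = {0x01, 0x00, 0x5e, 0x00, 0x00, 0x01};

	return validate_mac(bad) ? 1 : 0;
}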
25 | linux | https://github.com/torvalds/linux | drivers/target/loopback/tcm_loop.c | 12f09ccb4612734a53e47ed5302e0479c10a50f8 | loopback: off by one in tcm_loop_make_naa_tpg()
This is an off-by-one 'tpgt' check in tcm_loop_make_naa_tpg() that could result
in memory corruption.
Signed-off-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org> | true | bb10218e4db9527815edd9343f459cff | tcm_loop_make_naa_tpg | struct se_portal_group *tcm_loop_make_naa_tpg(
struct se_wwn *wwn,
struct config_group *group,
const char *name)
{
struct tcm_loop_hba *tl_hba = container_of(wwn,
struct tcm_loop_hba, tl_hba_wwn);
struct tcm_loop_tpg *tl_tpg;
char *tpgt_str, *end_ptr;
int ret;
unsigned short int tpgt;
tpgt_str = strstr(name, "tpgt_");
if (!tpgt_str) {
printk(KERN_ERR "Unable to locate \"tpgt_#\" directory"
" group\n");
return ERR_PTR(-EINVAL);
}
tpgt_str += 5; /* Skip ahead of "tpgt_" */
tpgt = (unsigned short int) simple_strtoul(tpgt_str, &end_ptr, 0);
if (tpgt > TL_TPGS_PER_HBA) {
printk(KERN_ERR "Passed tpgt: %hu exceeds TL_TPGS_PER_HBA:"
" %u\n", tpgt, TL_TPGS_PER_HBA);
return ERR_PTR(-EINVAL);
}
tl_tpg = &tl_hba->tl_hba_tpgs[tpgt];
tl_tpg->tl_hba = tl_hba;
tl_tpg->tl_tpgt = tpgt;
/*
* Register the tl_tpg as a emulated SAS TCM Target Endpoint
*/
ret = core_tpg_register(&tcm_loop_fabric_configfs->tf_ops,
wwn, &tl_tpg->tl_se_tpg, tl_tpg,
TRANSPORT_TPG_TYPE_NORMAL);
if (ret < 0)
return ERR_PTR(-ENOMEM);
printk(KERN_INFO "TCM_Loop_ConfigFS: Allocated Emulated %s"
" Target Port %s,t,0x%04x\n", tcm_loop_dump_proto_id(tl_hba),
config_item_name(&wwn->wwn_group.cg_item), tpgt);
return &tl_tpg->tl_se_tpg;
}
| [[1208, "\tif (tpgt > TL_TPGS_PER_HBA) {\n"]] | [[1208, "if (tpgt > TL_TPGS_PER_HBA)"]] | [
"CVE-2011-5327"
] | [
"CWE-119"
] | 26 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [
"TL_TPGS_PER_HBA"
],
"Type Execution Declaration": [
"struct tcm_loop_hba",
"struct tcm_loop_tpg"
]
} |
26 | linux | https://github.com/torvalds/linux | drivers/target/loopback/tcm_loop.c | 12f09ccb4612734a53e47ed5302e0479c10a50f8 | loopback: off by one in tcm_loop_make_naa_tpg()
This is an off-by-one 'tpgt' check in tcm_loop_make_naa_tpg() that could result
in memory corruption.
Signed-off-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org> | false | ba9488c551927f9c924adb9bcb8fce29 | tcm_loop_make_naa_tpg | struct se_portal_group *tcm_loop_make_naa_tpg(
struct se_wwn *wwn,
struct config_group *group,
const char *name)
{
struct tcm_loop_hba *tl_hba = container_of(wwn,
struct tcm_loop_hba, tl_hba_wwn);
struct tcm_loop_tpg *tl_tpg;
char *tpgt_str, *end_ptr;
int ret;
unsigned short int tpgt;
tpgt_str = strstr(name, "tpgt_");
if (!tpgt_str) {
printk(KERN_ERR "Unable to locate \"tpgt_#\" directory"
" group\n");
return ERR_PTR(-EINVAL);
}
tpgt_str += 5; /* Skip ahead of "tpgt_" */
tpgt = (unsigned short int) simple_strtoul(tpgt_str, &end_ptr, 0);
if (tpgt >= TL_TPGS_PER_HBA) {
printk(KERN_ERR "Passed tpgt: %hu exceeds TL_TPGS_PER_HBA:"
" %u\n", tpgt, TL_TPGS_PER_HBA);
return ERR_PTR(-EINVAL);
}
tl_tpg = &tl_hba->tl_hba_tpgs[tpgt];
tl_tpg->tl_hba = tl_hba;
tl_tpg->tl_tpgt = tpgt;
/*
* Register the tl_tpg as a emulated SAS TCM Target Endpoint
*/
ret = core_tpg_register(&tcm_loop_fabric_configfs->tf_ops,
wwn, &tl_tpg->tl_se_tpg, tl_tpg,
TRANSPORT_TPG_TYPE_NORMAL);
if (ret < 0)
return ERR_PTR(-ENOMEM);
printk(KERN_INFO "TCM_Loop_ConfigFS: Allocated Emulated %s"
" Target Port %s,t,0x%04x\n", tcm_loop_dump_proto_id(tl_hba),
config_item_name(&wwn->wwn_group.cg_item), tpgt);
return &tl_tpg->tl_se_tpg;
}
| [[1208, "\tif (tpgt >= TL_TPGS_PER_HBA) {\n"]] | [[1208, "if (tpgt >= TL_TPGS_PER_HBA)"]] | [
"CVE-2011-5327"
] | [
"CWE-119"
] | 26 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [
"TL_TPGS_PER_HBA"
],
"Type Execution Declaration": [
"struct tcm_loop_hba",
"struct tcm_loop_tpg"
]
} |
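The tcm_loop pair above is the canonical fencepost: tl_hba_tpgs[] has TL_TPGS_PER_HBA elements, so the largest valid index is TL_TPGS_PER_HBA - 1 and the guard must use >=. A runnable reduction with an illustrative array size:

#include <stdio.h>

#define TPGS_PER_HBA 32		/* illustrative; TL_TPGS_PER_HBA upstream */

static int tpgs[TPGS_PER_HBA];

static int select_tpg(unsigned short tpgt)
{
	if (tpgt >= TPGS_PER_HBA)	/* ">" would admit tpgt == 32 */
		return -1;
	tpgs[tpgt] = 1;			/* index 32 would write past the end */
	return 0;
}

int main(void)
{
	printf("%d\n", select_tpg(TPGS_PER_HBA));	/* rejected: -1 */
	return 0;
}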
27 | linux | https://github.com/torvalds/linux | fs/nfsd/nfsxdr.c | 13bf9fbff0e5e099e2b6f003a0ab8ae145436309 | nfsd: stricter decoding of write-like NFSv2/v3 ops
The NFSv2/v3 code does not systematically check whether we decode past
the end of the buffer. This generally appears to be harmless, but there
are a few places where we do arithmetic on the pointers involved and
don't account for the possibility that a length could be negative. Add
checks to catch these.
Reported-by: Tuomas Haanpää <thaan@synopsys.com>
Reported-by: Ari Kauppi <ari@synopsys.com>
Reviewed-by: NeilBrown <neilb@suse.com>
Cc: stable@vger.kernel.org
Signed-off-by: J. Bruce Fields <bfields@redhat.com> | true | 312254bef4649bf3181c0060b93bc490 | nfssvc_decode_writeargs | int
nfssvc_decode_writeargs(struct svc_rqst *rqstp, __be32 *p,
struct nfsd_writeargs *args)
{
unsigned int len, hdr, dlen;
struct kvec *head = rqstp->rq_arg.head;
int v;
p = decode_fh(p, &args->fh);
if (!p)
return 0;
p++; /* beginoffset */
args->offset = ntohl(*p++); /* offset */
p++; /* totalcount */
len = args->len = ntohl(*p++);
/*
* The protocol specifies a maximum of 8192 bytes.
*/
if (len > NFSSVC_MAXBLKSIZE_V2)
return 0;
/*
* Check to make sure that we got the right number of
* bytes.
*/
hdr = (void*)p - head->iov_base;
dlen = head->iov_len + rqstp->rq_arg.page_len - hdr;
/*
* Round the length of the data which was specified up to
* the next multiple of XDR units and then compare that
* against the length which was actually received.
* Note that when RPCSEC/GSS (for example) is used, the
* data buffer can be padded so dlen might be larger
* than required. It must never be smaller.
*/
if (dlen < XDR_QUADLEN(len)*4)
return 0;
rqstp->rq_vec[0].iov_base = (void*)p;
rqstp->rq_vec[0].iov_len = head->iov_len - hdr;
v = 0;
while (len > rqstp->rq_vec[v].iov_len) {
len -= rqstp->rq_vec[v].iov_len;
v++;
rqstp->rq_vec[v].iov_base = page_address(rqstp->rq_pages[v]);
rqstp->rq_vec[v].iov_len = PAGE_SIZE;
}
rqstp->rq_vec[v].iov_len = len;
args->vlen = v + 1;
return 1;
}
| [] | [] | [
"CVE-2017-7895"
] | [
"CWE-119"
] | 28 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"decode_fh",
"page_address"
],
"Function Argument": [
"rqstp"
],
"Globals": [
"NFSSVC_MAXBLKSIZE_V2",
"PAGE_SIZE",
"XDR_QUADLEN"
],
"Type Execution Declaration": []
} |
28 | linux | https://github.com/torvalds/linux | fs/nfsd/nfsxdr.c | 13bf9fbff0e5e099e2b6f003a0ab8ae145436309 | nfsd: stricter decoding of write-like NFSv2/v3 ops
The NFSv2/v3 code does not systematically check whether we decode past
the end of the buffer. This generally appears to be harmless, but there
are a few places where we do arithmetic on the pointers involved and
don't account for the possibility that a length could be negative. Add
checks to catch these.
Reported-by: Tuomas Haanpää <thaan@synopsys.com>
Reported-by: Ari Kauppi <ari@synopsys.com>
Reviewed-by: NeilBrown <neilb@suse.com>
Cc: stable@vger.kernel.org
Signed-off-by: J. Bruce Fields <bfields@redhat.com> | false | e0616d42a55dbbc6ea9361716131ed66 | nfssvc_decode_writeargs | int
nfssvc_decode_writeargs(struct svc_rqst *rqstp, __be32 *p,
struct nfsd_writeargs *args)
{
unsigned int len, hdr, dlen;
struct kvec *head = rqstp->rq_arg.head;
int v;
p = decode_fh(p, &args->fh);
if (!p)
return 0;
p++; /* beginoffset */
args->offset = ntohl(*p++); /* offset */
p++; /* totalcount */
len = args->len = ntohl(*p++);
/*
* The protocol specifies a maximum of 8192 bytes.
*/
if (len > NFSSVC_MAXBLKSIZE_V2)
return 0;
/*
* Check to make sure that we got the right number of
* bytes.
*/
hdr = (void*)p - head->iov_base;
if (hdr > head->iov_len)
return 0;
dlen = head->iov_len + rqstp->rq_arg.page_len - hdr;
/*
* Round the length of the data which was specified up to
* the next multiple of XDR units and then compare that
* against the length which was actually received.
* Note that when RPCSEC/GSS (for example) is used, the
* data buffer can be padded so dlen might be larger
* than required. It must never be smaller.
*/
if (dlen < XDR_QUADLEN(len)*4)
return 0;
rqstp->rq_vec[0].iov_base = (void*)p;
rqstp->rq_vec[0].iov_len = head->iov_len - hdr;
v = 0;
while (len > rqstp->rq_vec[v].iov_len) {
len -= rqstp->rq_vec[v].iov_len;
v++;
rqstp->rq_vec[v].iov_base = page_address(rqstp->rq_pages[v]);
rqstp->rq_vec[v].iov_len = PAGE_SIZE;
}
rqstp->rq_vec[v].iov_len = len;
args->vlen = v + 1;
return 1;
}
| [[305, "\tif (hdr > head->iov_len)\n"], [306, "\t\treturn 0;\n"]] | [[305, "if (hdr > head->iov_len)"], [306, "return 0;"]] | [
"CVE-2017-7895"
] | [
"CWE-119"
] | 28 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"decode_fh",
"page_address"
],
"Function Argument": [
"rqstp"
],
"Globals": [
"NFSSVC_MAXBLKSIZE_V2",
"PAGE_SIZE",
"XDR_QUADLEN"
],
"Type Execution Declaration": []
} |
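In the nfsd pair above, hdr measures how far the decode pointer has advanced into the head kvec; if it ever exceeds head->iov_len, the unsigned expression head->iov_len + page_len - hdr wraps to a huge dlen and sails past the later dlen < XDR_QUADLEN(len)*4 test. A runnable reduction of the added check:

#include <stdio.h>

static unsigned int remaining_len(unsigned int iov_len,
				  unsigned int page_len, unsigned int hdr)
{
	if (hdr > iov_len)		/* the added sanity check */
		return 0;		/* short/garbled request: reject */
	return iov_len + page_len - hdr;	/* can no longer underflow */
}

int main(void)
{
	/* Without the check, 100 + 0 - 200 wraps to ~4 billion and would
	 * satisfy any later "dlen >= needed" test. */
	printf("%u\n", remaining_len(100, 0, 200));	/* prints 0 */
	return 0;
}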
29 | linux | https://github.com/torvalds/linux | fs/jbd2/transaction.c | 15291164b22a357cb211b618adfef4fa82fc0de3 | jbd2: clear BH_Delay & BH_Unwritten in journal_unmap_buffer
journal_unmap_buffer()'s zap_buffer: code clears a lot of buffer head
state a la discard_buffer(), but does not touch _Delay or _Unwritten as
discard_buffer() does.
This can be problematic in some areas of the ext4 code which assume
that if they have found a buffer marked unwritten or delay, then it's
a live one. Perhaps those spots should check whether it is mapped
as well, but if jbd2 is going to tear down a buffer, let's really
tear it down completely.
Without this I get some fsx failures on sub-page-block filesystems
up until v3.2, at which point 4e96b2dbbf1d7e81f22047a50f862555a6cb87cb
and 189e868fa8fdca702eb9db9d8afc46b5cb9144c9 make the failures go
away, because buried within that large change is some more flag
clearing. I still think it's worth doing in jbd2, since
->invalidatepage leads here directly, and it's the right place
to clear away these flags.
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: stable@vger.kernel.org | true | 1708a3552db849780aa5aa7df12f8b7a | journal_unmap_buffer | static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh)
{
transaction_t *transaction;
struct journal_head *jh;
int may_free = 1;
int ret;
BUFFER_TRACE(bh, "entry");
/*
* It is safe to proceed here without the j_list_lock because the
* buffers cannot be stolen by try_to_free_buffers as long as we are
* holding the page lock. --sct
*/
if (!buffer_jbd(bh))
goto zap_buffer_unlocked;
/* OK, we have data buffer in journaled mode */
write_lock(&journal->j_state_lock);
jbd_lock_bh_state(bh);
spin_lock(&journal->j_list_lock);
jh = jbd2_journal_grab_journal_head(bh);
if (!jh)
goto zap_buffer_no_jh;
/*
* We cannot remove the buffer from checkpoint lists until the
* transaction adding inode to orphan list (let's call it T)
* is committed. Otherwise if the transaction changing the
* buffer would be cleaned from the journal before T is
* committed, a crash will cause that the correct contents of
* the buffer will be lost. On the other hand we have to
* clear the buffer dirty bit at latest at the moment when the
* transaction marking the buffer as freed in the filesystem
* structures is committed because from that moment on the
* buffer can be reallocated and used by a different page.
* Since the block hasn't been freed yet but the inode has
* already been added to orphan list, it is safe for us to add
* the buffer to BJ_Forget list of the newest transaction.
*/
transaction = jh->b_transaction;
if (transaction == NULL) {
/* First case: not on any transaction. If it
* has no checkpoint link, then we can zap it:
* it's a writeback-mode buffer so we don't care
* if it hits disk safely. */
if (!jh->b_cp_transaction) {
JBUFFER_TRACE(jh, "not on any transaction: zap");
goto zap_buffer;
}
if (!buffer_dirty(bh)) {
/* bdflush has written it. We can drop it now */
goto zap_buffer;
}
/* OK, it must be in the journal but still not
* written fully to disk: it's metadata or
* journaled data... */
if (journal->j_running_transaction) {
/* ... and once the current transaction has
* committed, the buffer won't be needed any
* longer. */
JBUFFER_TRACE(jh, "checkpointed: add to BJ_Forget");
ret = __dispose_buffer(jh,
journal->j_running_transaction);
jbd2_journal_put_journal_head(jh);
spin_unlock(&journal->j_list_lock);
jbd_unlock_bh_state(bh);
write_unlock(&journal->j_state_lock);
return ret;
} else {
/* There is no currently-running transaction. So the
* orphan record which we wrote for this file must have
* passed into commit. We must attach this buffer to
* the committing transaction, if it exists. */
if (journal->j_committing_transaction) {
JBUFFER_TRACE(jh, "give to committing trans");
ret = __dispose_buffer(jh,
journal->j_committing_transaction);
jbd2_journal_put_journal_head(jh);
spin_unlock(&journal->j_list_lock);
jbd_unlock_bh_state(bh);
write_unlock(&journal->j_state_lock);
return ret;
} else {
/* The orphan record's transaction has
* committed. We can cleanse this buffer */
clear_buffer_jbddirty(bh);
goto zap_buffer;
}
}
} else if (transaction == journal->j_committing_transaction) {
JBUFFER_TRACE(jh, "on committing transaction");
/*
* The buffer is committing, we simply cannot touch
* it. So we just set j_next_transaction to the
* running transaction (if there is one) and mark
* buffer as freed so that commit code knows it should
* clear dirty bits when it is done with the buffer.
*/
set_buffer_freed(bh);
if (journal->j_running_transaction && buffer_jbddirty(bh))
jh->b_next_transaction = journal->j_running_transaction;
jbd2_journal_put_journal_head(jh);
spin_unlock(&journal->j_list_lock);
jbd_unlock_bh_state(bh);
write_unlock(&journal->j_state_lock);
return 0;
} else {
/* Good, the buffer belongs to the running transaction.
* We are writing our own transaction's data, not any
* previous one's, so it is safe to throw it away
* (remember that we expect the filesystem to have set
* i_size already for this truncate so recovery will not
* expose the disk blocks we are discarding here.) */
J_ASSERT_JH(jh, transaction == journal->j_running_transaction);
JBUFFER_TRACE(jh, "on running transaction");
may_free = __dispose_buffer(jh, transaction);
}
zap_buffer:
jbd2_journal_put_journal_head(jh);
zap_buffer_no_jh:
spin_unlock(&journal->j_list_lock);
jbd_unlock_bh_state(bh);
write_unlock(&journal->j_state_lock);
zap_buffer_unlocked:
clear_buffer_dirty(bh);
J_ASSERT_BH(bh, !buffer_jbddirty(bh));
clear_buffer_mapped(bh);
clear_buffer_req(bh);
clear_buffer_new(bh);
bh->b_bdev = NULL;
return may_free;
}
| [] | [] | [
"CVE-2011-4086"
] | [
"CWE-119"
] | 30 | {
"Execution Environment": [
"the ext4 filesystem must be mounted with a journal"
],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
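
The clear_buffer_delay()/clear_buffer_unwritten() helpers this commit adds (see row 30 below) are macro-generated accessors. A hedged sketch of the generator, paraphrased from memory of include/linux/buffer_head.h and trimmed to three accessors; exact details may vary by kernel version.

#define BUFFER_FNS(bit, name)						\
static inline void set_buffer_##name(struct buffer_head *bh)		\
{									\
	set_bit(BH_##bit, &(bh)->b_state);				\
}									\
static inline void clear_buffer_##name(struct buffer_head *bh)		\
{									\
	clear_bit(BH_##bit, &(bh)->b_state);				\
}									\
static inline int buffer_##name(const struct buffer_head *bh)		\
{									\
	return test_bit(BH_##bit, &(bh)->b_state);			\
}

BUFFER_FNS(Delay, delay)		/* yields clear_buffer_delay() */
BUFFER_FNS(Unwritten, unwritten)	/* yields clear_buffer_unwritten() */
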
30 | linux | https://github.com/torvalds/linux | fs/jbd2/transaction.c | 15291164b22a357cb211b618adfef4fa82fc0de3 | jbd2: clear BH_Delay & BH_Unwritten in journal_unmap_buffer
journal_unmap_buffer()'s zap_buffer: code clears a lot of buffer head
state ala discard_buffer(), but does not touch _Delay or _Unwritten as
discard_buffer() does.
This can be problematic in some areas of the ext4 code which assume
that if they have found a buffer marked unwritten or delay, then it's
a live one. Perhaps those spots should check whether it is mapped
as well, but if jbd2 is going to tear down a buffer, let's really
tear it down completely.
Without this I get some fsx failures on sub-page-block filesystems
up until v3.2, at which point 4e96b2dbbf1d7e81f22047a50f862555a6cb87cb
and 189e868fa8fdca702eb9db9d8afc46b5cb9144c9 make the failures go
away, because buried within that large change is some more flag
clearing. I still think it's worth doing in jbd2, since
->invalidatepage leads here directly, and it's the right place
to clear away these flags.
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: stable@vger.kernel.org | false | e85e99d5d65ba40f87a83bf83e2677ee | journal_unmap_buffer | static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh)
{
transaction_t *transaction;
struct journal_head *jh;
int may_free = 1;
int ret;
BUFFER_TRACE(bh, "entry");
/*
* It is safe to proceed here without the j_list_lock because the
* buffers cannot be stolen by try_to_free_buffers as long as we are
* holding the page lock. --sct
*/
if (!buffer_jbd(bh))
goto zap_buffer_unlocked;
/* OK, we have data buffer in journaled mode */
write_lock(&journal->j_state_lock);
jbd_lock_bh_state(bh);
spin_lock(&journal->j_list_lock);
jh = jbd2_journal_grab_journal_head(bh);
if (!jh)
goto zap_buffer_no_jh;
/*
* We cannot remove the buffer from checkpoint lists until the
* transaction adding inode to orphan list (let's call it T)
* is committed. Otherwise if the transaction changing the
* buffer would be cleaned from the journal before T is
* committed, a crash will cause that the correct contents of
* the buffer will be lost. On the other hand we have to
* clear the buffer dirty bit at latest at the moment when the
* transaction marking the buffer as freed in the filesystem
* structures is committed because from that moment on the
* buffer can be reallocated and used by a different page.
* Since the block hasn't been freed yet but the inode has
* already been added to orphan list, it is safe for us to add
* the buffer to BJ_Forget list of the newest transaction.
*/
transaction = jh->b_transaction;
if (transaction == NULL) {
/* First case: not on any transaction. If it
* has no checkpoint link, then we can zap it:
* it's a writeback-mode buffer so we don't care
* if it hits disk safely. */
if (!jh->b_cp_transaction) {
JBUFFER_TRACE(jh, "not on any transaction: zap");
goto zap_buffer;
}
if (!buffer_dirty(bh)) {
/* bdflush has written it. We can drop it now */
goto zap_buffer;
}
/* OK, it must be in the journal but still not
* written fully to disk: it's metadata or
* journaled data... */
if (journal->j_running_transaction) {
/* ... and once the current transaction has
* committed, the buffer won't be needed any
* longer. */
JBUFFER_TRACE(jh, "checkpointed: add to BJ_Forget");
ret = __dispose_buffer(jh,
journal->j_running_transaction);
jbd2_journal_put_journal_head(jh);
spin_unlock(&journal->j_list_lock);
jbd_unlock_bh_state(bh);
write_unlock(&journal->j_state_lock);
return ret;
} else {
/* There is no currently-running transaction. So the
* orphan record which we wrote for this file must have
* passed into commit. We must attach this buffer to
* the committing transaction, if it exists. */
if (journal->j_committing_transaction) {
JBUFFER_TRACE(jh, "give to committing trans");
ret = __dispose_buffer(jh,
journal->j_committing_transaction);
jbd2_journal_put_journal_head(jh);
spin_unlock(&journal->j_list_lock);
jbd_unlock_bh_state(bh);
write_unlock(&journal->j_state_lock);
return ret;
} else {
/* The orphan record's transaction has
* committed. We can cleanse this buffer */
clear_buffer_jbddirty(bh);
goto zap_buffer;
}
}
} else if (transaction == journal->j_committing_transaction) {
JBUFFER_TRACE(jh, "on committing transaction");
/*
* The buffer is committing, we simply cannot touch
* it. So we just set j_next_transaction to the
* running transaction (if there is one) and mark
* buffer as freed so that commit code knows it should
* clear dirty bits when it is done with the buffer.
*/
set_buffer_freed(bh);
if (journal->j_running_transaction && buffer_jbddirty(bh))
jh->b_next_transaction = journal->j_running_transaction;
jbd2_journal_put_journal_head(jh);
spin_unlock(&journal->j_list_lock);
jbd_unlock_bh_state(bh);
write_unlock(&journal->j_state_lock);
return 0;
} else {
/* Good, the buffer belongs to the running transaction.
* We are writing our own transaction's data, not any
* previous one's, so it is safe to throw it away
* (remember that we expect the filesystem to have set
* i_size already for this truncate so recovery will not
* expose the disk blocks we are discarding here.) */
J_ASSERT_JH(jh, transaction == journal->j_running_transaction);
JBUFFER_TRACE(jh, "on running transaction");
may_free = __dispose_buffer(jh, transaction);
}
zap_buffer:
jbd2_journal_put_journal_head(jh);
zap_buffer_no_jh:
spin_unlock(&journal->j_list_lock);
jbd_unlock_bh_state(bh);
write_unlock(&journal->j_state_lock);
zap_buffer_unlocked:
clear_buffer_dirty(bh);
J_ASSERT_BH(bh, !buffer_jbddirty(bh));
clear_buffer_mapped(bh);
clear_buffer_req(bh);
clear_buffer_new(bh);
clear_buffer_delay(bh);
clear_buffer_unwritten(bh);
bh->b_bdev = NULL;
return may_free;
}
| [[1952, "\tclear_buffer_delay(bh);\n"], [1953, "\tclear_buffer_unwritten(bh);\n"]] | [[1952, "clear_buffer_delay(bh);"], [1953, "clear_buffer_unwritten(bh);"]] | [
"CVE-2011-4086"
] | [
"CWE-119"
] | 30 | {
"Execution Environment": [
"the ext4 filesystem must be mounted with a journal"
],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
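
For comparison with the zap_buffer path above: a from-memory sketch of discard_buffer() from fs/buffer.c, which the commit message cites as the model the jbd2 path should match. Locking details may differ between kernel versions.

static void discard_buffer(struct buffer_head *bh)
{
	lock_buffer(bh);
	clear_buffer_dirty(bh);
	bh->b_bdev = NULL;
	clear_buffer_mapped(bh);
	clear_buffer_req(bh);
	clear_buffer_new(bh);
	clear_buffer_delay(bh);	/* the two flags the jbd2 fix adds */
	clear_buffer_unwritten(bh);
	unlock_buffer(bh);
}
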
31 | linux | https://github.com/torvalds/linux | mm/memory.c | 16ce101db85db694a91380aa4c89b25530871d33 | mm/memory.c: fix race when faulting a device private page
Patch series "Fix several device private page reference counting issues",
v2
This series aims to fix a number of page reference counting issues in
drivers dealing with device private ZONE_DEVICE pages. These result in
use-after-free type bugs, either from accessing a struct page which no
longer exists because it has been removed or accessing fields within the
struct page which are no longer valid because the page has been freed.
During normal usage it is unlikely these will cause any problems. However
without these fixes it is possible to crash the kernel from userspace.
These crashes can be triggered either by unloading the kernel module or
unbinding the device from the driver prior to a userspace task exiting.
In modules such as Nouveau it is also possible to trigger some of these
issues by explicitly closing the device file-descriptor prior to the task
exiting and then accessing device private memory.
This involves some minor changes to both PowerPC and AMD GPU code.
Unfortunately I lack hardware to test either of those so any help there
would be appreciated. The changes mimic what is done for both Nouveau
would be appreciated. The changes mimic what is done for both Nouveau
and hmm-tests though so I doubt they will cause problems.
This patch (of 8):
When the CPU tries to access a device private page the migrate_to_ram()
callback associated with the pgmap for the page is called. However no
reference is taken on the faulting page. Therefore a concurrent migration
of the device private page can free the page and possibly the underlying
pgmap. This results in a race which can crash the kernel due to the
migrate_to_ram() function pointer becoming invalid. It also means drivers
can't reliably read the zone_device_data field because the page may have
been freed with memunmap_pages().
Close the race by getting a reference on the page while holding the ptl to
ensure it has not been freed. Unfortunately the elevated reference count
will cause the migration required to handle the fault to fail. To avoid
this failure pass the faulting page into the migrate_vma functions so that
if an elevated reference count is found it can be checked to see if it's
expected or not.
[mpe@ellerman.id.au: fix build]
Link: https://lkml.kernel.org/r/87fsgbf3gh.fsf@mpe.ellerman.id.au
Link: https://lkml.kernel.org/r/cover.60659b549d8509ddecafad4f498ee7f03bb23c69.1664366292.git-series.apopple@nvidia.com
Link: https://lkml.kernel.org/r/d3e813178a59e565e8d78d9b9a4e2562f6494f90.1664366292.git-series.apopple@nvidia.com
Signed-off-by: Alistair Popple <apopple@nvidia.com>
Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alex Sierra <alex.sierra@amd.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org> | true | 6c3324ccd98e2b9019df7554fdfc4e67 | do_swap_page | vm_fault_t do_swap_page(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct folio *swapcache, *folio = NULL;
struct page *page;
struct swap_info_struct *si = NULL;
rmap_t rmap_flags = RMAP_NONE;
bool exclusive = false;
swp_entry_t entry;
pte_t pte;
int locked;
vm_fault_t ret = 0;
void *shadow = NULL;
if (!pte_unmap_same(vmf))
goto out;
entry = pte_to_swp_entry(vmf->orig_pte);
if (unlikely(non_swap_entry(entry))) {
if (is_migration_entry(entry)) {
migration_entry_wait(vma->vm_mm, vmf->pmd,
vmf->address);
} else if (is_device_exclusive_entry(entry)) {
vmf->page = pfn_swap_entry_to_page(entry);
ret = remove_device_exclusive_entry(vmf);
} else if (is_device_private_entry(entry)) {
vmf->page = pfn_swap_entry_to_page(entry);
ret = vmf->page->pgmap->ops->migrate_to_ram(vmf);
} else if (is_hwpoison_entry(entry)) {
ret = VM_FAULT_HWPOISON;
} else if (is_swapin_error_entry(entry)) {
ret = VM_FAULT_SIGBUS;
} else if (is_pte_marker_entry(entry)) {
ret = handle_pte_marker(vmf);
} else {
print_bad_pte(vma, vmf->address, vmf->orig_pte, NULL);
ret = VM_FAULT_SIGBUS;
}
goto out;
}
/* Prevent swapoff from happening to us. */
si = get_swap_device(entry);
if (unlikely(!si))
goto out;
folio = swap_cache_get_folio(entry, vma, vmf->address);
if (folio)
page = folio_file_page(folio, swp_offset(entry));
swapcache = folio;
if (!folio) {
if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
__swap_count(entry) == 1) {
/* skip swapcache */
folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
vma, vmf->address, false);
page = &folio->page;
if (folio) {
__folio_set_locked(folio);
__folio_set_swapbacked(folio);
if (mem_cgroup_swapin_charge_folio(folio,
vma->vm_mm, GFP_KERNEL,
entry)) {
ret = VM_FAULT_OOM;
goto out_page;
}
mem_cgroup_swapin_uncharge_swap(entry);
shadow = get_shadow_from_swap_cache(entry);
if (shadow)
workingset_refault(folio, shadow);
folio_add_lru(folio);
/* To provide entry to swap_readpage() */
folio_set_swap_entry(folio, entry);
swap_readpage(page, true, NULL);
folio->private = NULL;
}
} else {
page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
vmf);
if (page)
folio = page_folio(page);
swapcache = folio;
}
if (!folio) {
/*
* Back out if somebody else faulted in this pte
* while we released the pte lock.
*/
vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
vmf->address, &vmf->ptl);
if (likely(pte_same(*vmf->pte, vmf->orig_pte)))
ret = VM_FAULT_OOM;
goto unlock;
}
/* Had to read the page from swap area: Major fault */
ret = VM_FAULT_MAJOR;
count_vm_event(PGMAJFAULT);
count_memcg_event_mm(vma->vm_mm, PGMAJFAULT);
} else if (PageHWPoison(page)) {
/*
* hwpoisoned dirty swapcache pages are kept for killing
* owner processes (which may be unknown at hwpoison time)
*/
ret = VM_FAULT_HWPOISON;
goto out_release;
}
locked = folio_lock_or_retry(folio, vma->vm_mm, vmf->flags);
if (!locked) {
ret |= VM_FAULT_RETRY;
goto out_release;
}
if (swapcache) {
/*
* Make sure folio_free_swap() or swapoff did not release the
* swapcache from under us. The page pin, and pte_same test
* below, are not enough to exclude that. Even if it is still
* swapcache, we need to check that the page's swap has not
* changed.
*/
if (unlikely(!folio_test_swapcache(folio) ||
page_private(page) != entry.val))
goto out_page;
/*
* KSM sometimes has to copy on read faults, for example, if
* page->index of !PageKSM() pages would be nonlinear inside the
* anon VMA -- PageKSM() is lost on actual swapout.
*/
page = ksm_might_need_to_copy(page, vma, vmf->address);
if (unlikely(!page)) {
ret = VM_FAULT_OOM;
goto out_page;
}
folio = page_folio(page);
/*
* If we want to map a page that's in the swapcache writable, we
* have to detect via the refcount if we're really the exclusive
* owner. Try removing the extra reference from the local LRU
* pagevecs if required.
*/
if ((vmf->flags & FAULT_FLAG_WRITE) && folio == swapcache &&
!folio_test_ksm(folio) && !folio_test_lru(folio))
lru_add_drain();
}
cgroup_throttle_swaprate(page, GFP_KERNEL);
/*
* Back out if somebody else already faulted in this pte.
*/
vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
&vmf->ptl);
if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte)))
goto out_nomap;
if (unlikely(!folio_test_uptodate(folio))) {
ret = VM_FAULT_SIGBUS;
goto out_nomap;
}
/*
* PG_anon_exclusive reuses PG_mappedtodisk for anon pages. A swap pte
* must never point at an anonymous page in the swapcache that is
* PG_anon_exclusive. Sanity check that this holds and especially, that
* no filesystem set PG_mappedtodisk on a page in the swapcache. Sanity
* check after taking the PT lock and making sure that nobody
* concurrently faulted in this page and set PG_anon_exclusive.
*/
BUG_ON(!folio_test_anon(folio) && folio_test_mappedtodisk(folio));
BUG_ON(folio_test_anon(folio) && PageAnonExclusive(page));
/*
* Check under PT lock (to protect against concurrent fork() sharing
* the swap entry concurrently) for certainly exclusive pages.
*/
if (!folio_test_ksm(folio)) {
/*
* Note that pte_swp_exclusive() == false for architectures
* without __HAVE_ARCH_PTE_SWP_EXCLUSIVE.
*/
exclusive = pte_swp_exclusive(vmf->orig_pte);
if (folio != swapcache) {
/*
* We have a fresh page that is not exposed to the
* swapcache -> certainly exclusive.
*/
exclusive = true;
} else if (exclusive && folio_test_writeback(folio) &&
data_race(si->flags & SWP_STABLE_WRITES)) {
/*
* This is tricky: not all swap backends support
* concurrent page modifications while under writeback.
*
* So if we stumble over such a page in the swapcache
* we must not set the page exclusive, otherwise we can
* map it writable without further checks and modify it
* while still under writeback.
*
* For these problematic swap backends, simply drop the
* exclusive marker: this is perfectly fine as we start
* writeback only if we fully unmapped the page and
* there are no unexpected references on the page after
* unmapping succeeded. After fully unmapped, no
* further GUP references (FOLL_GET and FOLL_PIN) can
* appear, so dropping the exclusive marker and mapping
* it only R/O is fine.
*/
exclusive = false;
}
}
/*
* Remove the swap entry and conditionally try to free up the swapcache.
* We're already holding a reference on the page but haven't mapped it
* yet.
*/
swap_free(entry);
if (should_try_to_free_swap(folio, vma, vmf->flags))
folio_free_swap(folio);
inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
pte = mk_pte(page, vma->vm_page_prot);
/*
* Same logic as in do_wp_page(); however, optimize for pages that are
* certainly not shared either because we just allocated them without
* exposing them to the swapcache or because the swap entry indicates
* exclusivity.
*/
if (!folio_test_ksm(folio) &&
(exclusive || folio_ref_count(folio) == 1)) {
if (vmf->flags & FAULT_FLAG_WRITE) {
pte = maybe_mkwrite(pte_mkdirty(pte), vma);
vmf->flags &= ~FAULT_FLAG_WRITE;
ret |= VM_FAULT_WRITE;
}
rmap_flags |= RMAP_EXCLUSIVE;
}
flush_icache_page(vma, page);
if (pte_swp_soft_dirty(vmf->orig_pte))
pte = pte_mksoft_dirty(pte);
if (pte_swp_uffd_wp(vmf->orig_pte)) {
pte = pte_mkuffd_wp(pte);
pte = pte_wrprotect(pte);
}
vmf->orig_pte = pte;
/* ksm created a completely new copy */
if (unlikely(folio != swapcache && swapcache)) {
page_add_new_anon_rmap(page, vma, vmf->address);
folio_add_lru_vma(folio, vma);
} else {
page_add_anon_rmap(page, vma, vmf->address, rmap_flags);
}
VM_BUG_ON(!folio_test_anon(folio) ||
(pte_write(pte) && !PageAnonExclusive(page)));
set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
folio_unlock(folio);
if (folio != swapcache && swapcache) {
/*
* Hold the lock to avoid the swap entry to be reused
* until we take the PT lock for the pte_same() check
* (to avoid false positives from pte_same). For
* further safety release the lock after the swap_free
* so that the swap count won't change under a
* parallel locked swapcache.
*/
folio_unlock(swapcache);
folio_put(swapcache);
}
if (vmf->flags & FAULT_FLAG_WRITE) {
ret |= do_wp_page(vmf);
if (ret & VM_FAULT_ERROR)
ret &= VM_FAULT_ERROR;
goto out;
}
/* No need to invalidate - it was non-present before */
update_mmu_cache(vma, vmf->address, vmf->pte);
unlock:
pte_unmap_unlock(vmf->pte, vmf->ptl);
out:
if (si)
put_swap_device(si);
return ret;
out_nomap:
pte_unmap_unlock(vmf->pte, vmf->ptl);
out_page:
folio_unlock(folio);
out_release:
folio_put(folio);
if (folio != swapcache && swapcache) {
folio_unlock(swapcache);
folio_put(swapcache);
}
if (si)
put_swap_device(si);
return ret;
}
| [[3753, "\t\t\tret = vmf->page->pgmap->ops->migrate_to_ram(vmf);\n"]] | [[3753, "ret = vmf->page->pgmap->ops->migrate_to_ram(vmf);"]] | [
"CVE-2022-3523"
] | [
"CWE-416",
"CWE-119"
] | 32 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"vmf->page->pgmap->ops->migrate_to_ram"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": [
"struct vm_fault",
"struct page",
"struct dev_pagemap_ops"
]
} |
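
Row 31 above is the vulnerable shape: the migrate_to_ram() indirect call runs with no reference on the page, so a concurrent migration can free both the page and the pgmap under it. A self-contained C11 sketch of the general cure applied in row 32 (recheck state under the lock, pin the object, then call), with every name hypothetical:

#include <stdatomic.h>
#include <stdbool.h>

struct obj {
	atomic_int refcount;
	void (*callback)(struct obj *);
};

/* Classic get-unless-zero: never resurrect an object already dying. */
static bool obj_tryget(struct obj *o)
{
	int c = atomic_load(&o->refcount);

	while (c > 0)
		if (atomic_compare_exchange_weak(&o->refcount, &c, c + 1))
			return true;
	return false;
}

static void obj_put(struct obj *o)
{
	atomic_fetch_sub(&o->refcount, 1);	/* free at zero, elided */
}

static void call_pinned(struct obj *o)
{
	if (!obj_tryget(o))	/* already being freed: bail out */
		return;
	o->callback(o);		/* safe: our reference keeps o and its ops alive */
	obj_put(o);
}
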
32 | linux | https://github.com/torvalds/linux | mm/memory.c | 16ce101db85db694a91380aa4c89b25530871d33 | mm/memory.c: fix race when faulting a device private page
Patch series "Fix several device private page reference counting issues",
v2
This series aims to fix a number of page reference counting issues in
drivers dealing with device private ZONE_DEVICE pages. These result in
use-after-free type bugs, either from accessing a struct page which no
longer exists because it has been removed or accessing fields within the
struct page which are no longer valid because the page has been freed.
During normal usage it is unlikely these will cause any problems. However
without these fixes it is possible to crash the kernel from userspace.
These crashes can be triggered either by unloading the kernel module or
unbinding the device from the driver prior to a userspace task exiting.
In modules such as Nouveau it is also possible to trigger some of these
issues by explicitly closing the device file-descriptor prior to the task
exiting and then accessing device private memory.
This involves some minor changes to both PowerPC and AMD GPU code.
Unfortunately I lack hardware to test either of those so any help there
would be appreciated. The changes mimic what is done for both Nouveau
and hmm-tests though so I doubt they will cause problems.
This patch (of 8):
When the CPU tries to access a device private page the migrate_to_ram()
callback associated with the pgmap for the page is called. However no
reference is taken on the faulting page. Therefore a concurrent migration
of the device private page can free the page and possibly the underlying
pgmap. This results in a race which can crash the kernel due to the
migrate_to_ram() function pointer becoming invalid. It also means drivers
can't reliably read the zone_device_data field because the page may have
been freed with memunmap_pages().
Close the race by getting a reference on the page while holding the ptl to
ensure it has not been freed. Unfortunately the elevated reference count
will cause the migration required to handle the fault to fail. To avoid
this failure pass the faulting page into the migrate_vma functions so that
if an elevated reference count is found it can be checked to see if it's
expected or not.
[mpe@ellerman.id.au: fix build]
Link: https://lkml.kernel.org/r/87fsgbf3gh.fsf@mpe.ellerman.id.au
Link: https://lkml.kernel.org/r/cover.60659b549d8509ddecafad4f498ee7f03bb23c69.1664366292.git-series.apopple@nvidia.com
Link: https://lkml.kernel.org/r/d3e813178a59e565e8d78d9b9a4e2562f6494f90.1664366292.git-series.apopple@nvidia.com
Signed-off-by: Alistair Popple <apopple@nvidia.com>
Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alex Sierra <alex.sierra@amd.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org> | false | b2a6dcd0780f4ef35d80208cafe62d51 | do_swap_page | vm_fault_t do_swap_page(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct folio *swapcache, *folio = NULL;
struct page *page;
struct swap_info_struct *si = NULL;
rmap_t rmap_flags = RMAP_NONE;
bool exclusive = false;
swp_entry_t entry;
pte_t pte;
int locked;
vm_fault_t ret = 0;
void *shadow = NULL;
if (!pte_unmap_same(vmf))
goto out;
entry = pte_to_swp_entry(vmf->orig_pte);
if (unlikely(non_swap_entry(entry))) {
if (is_migration_entry(entry)) {
migration_entry_wait(vma->vm_mm, vmf->pmd,
vmf->address);
} else if (is_device_exclusive_entry(entry)) {
vmf->page = pfn_swap_entry_to_page(entry);
ret = remove_device_exclusive_entry(vmf);
} else if (is_device_private_entry(entry)) {
vmf->page = pfn_swap_entry_to_page(entry);
vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
vmf->address, &vmf->ptl);
if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {
spin_unlock(vmf->ptl);
goto out;
}
/*
* Get a page reference while we know the page can't be
* freed.
*/
get_page(vmf->page);
pte_unmap_unlock(vmf->pte, vmf->ptl);
vmf->page->pgmap->ops->migrate_to_ram(vmf);
put_page(vmf->page);
} else if (is_hwpoison_entry(entry)) {
ret = VM_FAULT_HWPOISON;
} else if (is_swapin_error_entry(entry)) {
ret = VM_FAULT_SIGBUS;
} else if (is_pte_marker_entry(entry)) {
ret = handle_pte_marker(vmf);
} else {
print_bad_pte(vma, vmf->address, vmf->orig_pte, NULL);
ret = VM_FAULT_SIGBUS;
}
goto out;
}
/* Prevent swapoff from happening to us. */
si = get_swap_device(entry);
if (unlikely(!si))
goto out;
folio = swap_cache_get_folio(entry, vma, vmf->address);
if (folio)
page = folio_file_page(folio, swp_offset(entry));
swapcache = folio;
if (!folio) {
if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
__swap_count(entry) == 1) {
/* skip swapcache */
folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
vma, vmf->address, false);
page = &folio->page;
if (folio) {
__folio_set_locked(folio);
__folio_set_swapbacked(folio);
if (mem_cgroup_swapin_charge_folio(folio,
vma->vm_mm, GFP_KERNEL,
entry)) {
ret = VM_FAULT_OOM;
goto out_page;
}
mem_cgroup_swapin_uncharge_swap(entry);
shadow = get_shadow_from_swap_cache(entry);
if (shadow)
workingset_refault(folio, shadow);
folio_add_lru(folio);
/* To provide entry to swap_readpage() */
folio_set_swap_entry(folio, entry);
swap_readpage(page, true, NULL);
folio->private = NULL;
}
} else {
page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
vmf);
if (page)
folio = page_folio(page);
swapcache = folio;
}
if (!folio) {
/*
* Back out if somebody else faulted in this pte
* while we released the pte lock.
*/
vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
vmf->address, &vmf->ptl);
if (likely(pte_same(*vmf->pte, vmf->orig_pte)))
ret = VM_FAULT_OOM;
goto unlock;
}
/* Had to read the page from swap area: Major fault */
ret = VM_FAULT_MAJOR;
count_vm_event(PGMAJFAULT);
count_memcg_event_mm(vma->vm_mm, PGMAJFAULT);
} else if (PageHWPoison(page)) {
/*
* hwpoisoned dirty swapcache pages are kept for killing
* owner processes (which may be unknown at hwpoison time)
*/
ret = VM_FAULT_HWPOISON;
goto out_release;
}
locked = folio_lock_or_retry(folio, vma->vm_mm, vmf->flags);
if (!locked) {
ret |= VM_FAULT_RETRY;
goto out_release;
}
if (swapcache) {
/*
* Make sure folio_free_swap() or swapoff did not release the
* swapcache from under us. The page pin, and pte_same test
* below, are not enough to exclude that. Even if it is still
* swapcache, we need to check that the page's swap has not
* changed.
*/
if (unlikely(!folio_test_swapcache(folio) ||
page_private(page) != entry.val))
goto out_page;
/*
* KSM sometimes has to copy on read faults, for example, if
* page->index of !PageKSM() pages would be nonlinear inside the
* anon VMA -- PageKSM() is lost on actual swapout.
*/
page = ksm_might_need_to_copy(page, vma, vmf->address);
if (unlikely(!page)) {
ret = VM_FAULT_OOM;
goto out_page;
}
folio = page_folio(page);
/*
* If we want to map a page that's in the swapcache writable, we
* have to detect via the refcount if we're really the exclusive
* owner. Try removing the extra reference from the local LRU
* pagevecs if required.
*/
if ((vmf->flags & FAULT_FLAG_WRITE) && folio == swapcache &&
!folio_test_ksm(folio) && !folio_test_lru(folio))
lru_add_drain();
}
cgroup_throttle_swaprate(page, GFP_KERNEL);
/*
* Back out if somebody else already faulted in this pte.
*/
vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
&vmf->ptl);
if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte)))
goto out_nomap;
if (unlikely(!folio_test_uptodate(folio))) {
ret = VM_FAULT_SIGBUS;
goto out_nomap;
}
/*
* PG_anon_exclusive reuses PG_mappedtodisk for anon pages. A swap pte
* must never point at an anonymous page in the swapcache that is
* PG_anon_exclusive. Sanity check that this holds and especially, that
* no filesystem set PG_mappedtodisk on a page in the swapcache. Sanity
* check after taking the PT lock and making sure that nobody
* concurrently faulted in this page and set PG_anon_exclusive.
*/
BUG_ON(!folio_test_anon(folio) && folio_test_mappedtodisk(folio));
BUG_ON(folio_test_anon(folio) && PageAnonExclusive(page));
/*
* Check under PT lock (to protect against concurrent fork() sharing
* the swap entry concurrently) for certainly exclusive pages.
*/
if (!folio_test_ksm(folio)) {
/*
* Note that pte_swp_exclusive() == false for architectures
* without __HAVE_ARCH_PTE_SWP_EXCLUSIVE.
*/
exclusive = pte_swp_exclusive(vmf->orig_pte);
if (folio != swapcache) {
/*
* We have a fresh page that is not exposed to the
* swapcache -> certainly exclusive.
*/
exclusive = true;
} else if (exclusive && folio_test_writeback(folio) &&
data_race(si->flags & SWP_STABLE_WRITES)) {
/*
* This is tricky: not all swap backends support
* concurrent page modifications while under writeback.
*
* So if we stumble over such a page in the swapcache
* we must not set the page exclusive, otherwise we can
* map it writable without further checks and modify it
* while still under writeback.
*
* For these problematic swap backends, simply drop the
* exclusive marker: this is perfectly fine as we start
* writeback only if we fully unmapped the page and
* there are no unexpected references on the page after
* unmapping succeeded. After fully unmapped, no
* further GUP references (FOLL_GET and FOLL_PIN) can
* appear, so dropping the exclusive marker and mapping
* it only R/O is fine.
*/
exclusive = false;
}
}
/*
* Remove the swap entry and conditionally try to free up the swapcache.
* We're already holding a reference on the page but haven't mapped it
* yet.
*/
swap_free(entry);
if (should_try_to_free_swap(folio, vma, vmf->flags))
folio_free_swap(folio);
inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
pte = mk_pte(page, vma->vm_page_prot);
/*
* Same logic as in do_wp_page(); however, optimize for pages that are
* certainly not shared either because we just allocated them without
* exposing them to the swapcache or because the swap entry indicates
* exclusivity.
*/
if (!folio_test_ksm(folio) &&
(exclusive || folio_ref_count(folio) == 1)) {
if (vmf->flags & FAULT_FLAG_WRITE) {
pte = maybe_mkwrite(pte_mkdirty(pte), vma);
vmf->flags &= ~FAULT_FLAG_WRITE;
ret |= VM_FAULT_WRITE;
}
rmap_flags |= RMAP_EXCLUSIVE;
}
flush_icache_page(vma, page);
if (pte_swp_soft_dirty(vmf->orig_pte))
pte = pte_mksoft_dirty(pte);
if (pte_swp_uffd_wp(vmf->orig_pte)) {
pte = pte_mkuffd_wp(pte);
pte = pte_wrprotect(pte);
}
vmf->orig_pte = pte;
/* ksm created a completely new copy */
if (unlikely(folio != swapcache && swapcache)) {
page_add_new_anon_rmap(page, vma, vmf->address);
folio_add_lru_vma(folio, vma);
} else {
page_add_anon_rmap(page, vma, vmf->address, rmap_flags);
}
VM_BUG_ON(!folio_test_anon(folio) ||
(pte_write(pte) && !PageAnonExclusive(page)));
set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
folio_unlock(folio);
if (folio != swapcache && swapcache) {
/*
* Hold the lock to avoid the swap entry to be reused
* until we take the PT lock for the pte_same() check
* (to avoid false positives from pte_same). For
* further safety release the lock after the swap_free
* so that the swap count won't change under a
* parallel locked swapcache.
*/
folio_unlock(swapcache);
folio_put(swapcache);
}
if (vmf->flags & FAULT_FLAG_WRITE) {
ret |= do_wp_page(vmf);
if (ret & VM_FAULT_ERROR)
ret &= VM_FAULT_ERROR;
goto out;
}
/* No need to invalidate - it was non-present before */
update_mmu_cache(vma, vmf->address, vmf->pte);
unlock:
pte_unmap_unlock(vmf->pte, vmf->ptl);
out:
if (si)
put_swap_device(si);
return ret;
out_nomap:
pte_unmap_unlock(vmf->pte, vmf->ptl);
out_page:
folio_unlock(folio);
out_release:
folio_put(folio);
if (folio != swapcache && swapcache) {
folio_unlock(swapcache);
folio_put(swapcache);
}
if (si)
put_swap_device(si);
return ret;
}
| [[3753, "\t\t\tvmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,\n"], [3754, "\t\t\t\t\tvmf->address, &vmf->ptl);\n"], [3755, "\t\t\tif (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {\n"], [3756, "\t\t\t\tspin_unlock(vmf->ptl);\n"], [3757, "\t\t\t\tgoto out;\n"], [3758, "\t\t\t}\n"], [3759, "\n"], [3760, "\t\t\t/*\n"], [3761, "\t\t\t * Get a page reference while we know the page can't be\n"], [3762, "\t\t\t * freed.\n"], [3763, "\t\t\t */\n"], [3764, "\t\t\tget_page(vmf->page);\n"], [3765, "\t\t\tpte_unmap_unlock(vmf->pte, vmf->ptl);\n"], [3766, "\t\t\tvmf->page->pgmap->ops->migrate_to_ram(vmf);\n"], [3767, "\t\t\tput_page(vmf->page);\n"]] | [[3753, "vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,\n\t\t\t\t\tvmf->address, &vmf->ptl);"], [3755, "if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte)))"], [3756, "spin_unlock(vmf->ptl);"], [3757, "goto out;"], [3758, "\t\t\t}\n"], [3759, "\n"], [3760, "/*\n\t\t\t * Get a page reference while we know the page can't be\n\t\t\t * freed.\n\t\t\t */"], [3764, "get_page(vmf->page);"], [3765, "pte_unmap_unlock(vmf->pte, vmf->ptl);"], [3766, "vmf->page->pgmap->ops->migrate_to_ram(vmf);"], [3767, "put_page(vmf->page);"]] | [
"CVE-2022-3523"
] | [
"CWE-416",
"CWE-119"
] | 32 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"vmf->page->pgmap->ops->migrate_to_ram"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": [
"struct vm_fault",
"struct page",
"struct dev_pagemap_ops"
]
} |
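
The pin taken in row 32 would normally make the subsequent migration's refcount check fail; per the commit message, the rest of the series threads the faulting page through the migrate_vma API so the extra count is treated as expected. A sketch of the shape of that change; the field name follows the upstream series, the rest is illustrative.

struct migrate_vma {
	/* ... existing fields ... */

	/*
	 * Set to the page the CPU fault path pinned, so its elevated
	 * refcount reads as expected rather than as "page in use,
	 * migration must fail".
	 */
	struct page *fault_page;
};
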
33 | linux | https://github.com/torvalds/linux | kernel/bpf/verifier.c | 179d1c5602997fef5a940c6ddcf31212cbfebd14 | bpf: don't prune branches when a scalar is replaced with a pointer
This could be made safe by passing through a reference to env and checking
for env->allow_ptr_leaks, but it would only work one way and is probably
not worth the hassle - not doing it will not directly lead to program
rejection.
Fixes: f1174f77b50c ("bpf/verifier: rework value tracking")
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> | true | efa1225f12ed754b1dcae072082ed9ca | regsafe | static bool regsafe(struct bpf_reg_state *rold, struct bpf_reg_state *rcur,
struct idpair *idmap)
{
if (!(rold->live & REG_LIVE_READ))
/* explored state didn't use this */
return true;
if (memcmp(rold, rcur, offsetof(struct bpf_reg_state, live)) == 0)
return true;
if (rold->type == NOT_INIT)
/* explored state can't have used this */
return true;
if (rcur->type == NOT_INIT)
return false;
switch (rold->type) {
case SCALAR_VALUE:
if (rcur->type == SCALAR_VALUE) {
/* new val must satisfy old val knowledge */
return range_within(rold, rcur) &&
tnum_in(rold->var_off, rcur->var_off);
} else {
/* if we knew anything about the old value, we're not
* equal, because we can't know anything about the
* scalar value of the pointer in the new value.
*/
return rold->umin_value == 0 &&
rold->umax_value == U64_MAX &&
rold->smin_value == S64_MIN &&
rold->smax_value == S64_MAX &&
tnum_is_unknown(rold->var_off);
}
case PTR_TO_MAP_VALUE:
/* If the new min/max/var_off satisfy the old ones and
* everything else matches, we are OK.
* We don't care about the 'id' value, because nothing
* uses it for PTR_TO_MAP_VALUE (only for ..._OR_NULL)
*/
return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
range_within(rold, rcur) &&
tnum_in(rold->var_off, rcur->var_off);
case PTR_TO_MAP_VALUE_OR_NULL:
/* a PTR_TO_MAP_VALUE could be safe to use as a
* PTR_TO_MAP_VALUE_OR_NULL into the same map.
* However, if the old PTR_TO_MAP_VALUE_OR_NULL then got NULL-
* checked, doing so could have affected others with the same
* id, and we can't check for that because we lost the id when
* we converted to a PTR_TO_MAP_VALUE.
*/
if (rcur->type != PTR_TO_MAP_VALUE_OR_NULL)
return false;
if (memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)))
return false;
/* Check our ids match any regs they're supposed to */
return check_ids(rold->id, rcur->id, idmap);
case PTR_TO_PACKET_META:
case PTR_TO_PACKET:
if (rcur->type != rold->type)
return false;
/* We must have at least as much range as the old ptr
* did, so that any accesses which were safe before are
* still safe. This is true even if old range < old off,
* since someone could have accessed through (ptr - k), or
* even done ptr -= k in a register, to get a safe access.
*/
if (rold->range > rcur->range)
return false;
/* If the offsets don't match, we can't trust our alignment;
* nor can we be sure that we won't fall out of range.
*/
if (rold->off != rcur->off)
return false;
/* id relations must be preserved */
if (rold->id && !check_ids(rold->id, rcur->id, idmap))
return false;
/* new val must satisfy old val knowledge */
return range_within(rold, rcur) &&
tnum_in(rold->var_off, rcur->var_off);
case PTR_TO_CTX:
case CONST_PTR_TO_MAP:
case PTR_TO_STACK:
case PTR_TO_PACKET_END:
/* Only valid matches are exact, which memcmp() above
* would have accepted
*/
default:
/* Don't know what's going on, just say it's not safe */
return false;
}
/* Shouldn't get here; if we do, say it's not safe */
WARN_ON_ONCE(1);
return false;
}
| [[3470, "\t\t\t/* if we knew anything about the old value, we're not\n"], [3471, "\t\t\t * equal, because we can't know anything about the\n"], [3472, "\t\t\t * scalar value of the pointer in the new value.\n"], [3474, "\t\t\treturn rold->umin_value == 0 &&\n"], [3475, "\t\t\t rold->umax_value == U64_MAX &&\n"], [3476, "\t\t\t rold->smin_value == S64_MIN &&\n"], [3477, "\t\t\t rold->smax_value == S64_MAX &&\n"], [3478, "\t\t\t tnum_is_unknown(rold->var_off);\n"]] | [[3470, "/* if we knew anything about the old value, we're not\n\t\t\t * equal, because we can't know anything about the\n\t\t\t * scalar value of the pointer in the new value.\n\t\t\t */"], [3474, "return rold->umin_value == 0 &&\n\t\t\t rold->umax_value == U64_MAX &&\n\t\t\t rold->smin_value == S64_MIN &&\n\t\t\t rold->smax_value == S64_MAX &&\n\t\t\t tnum_is_unknown(rold->var_off);"]] | [
"CVE-2017-17855"
] | [
"CWE-119"
] | 34 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"tnum_is_unknown"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": [
"U64_MAX",
"S64_MIN"
]
} |
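
The scalar comparison in row 33 leans on the verifier's tnum (tracked-number) representation. A hedged sketch of the two helpers it uses, paraphrased from memory of kernel/bpf/tnum.c; the definitions may differ slightly by version.

#include <stdbool.h>

typedef unsigned long long u64;

struct tnum {
	u64 value;	/* the bits we know */
	u64 mask;	/* set bits are unknown */
};

/* Completely unknown: every bit could be anything. */
static bool tnum_is_unknown(struct tnum a)
{
	return a.mask == ~0ULL;
}

/* Does @a cover every concrete value @b could take? */
static bool tnum_in(struct tnum a, struct tnum b)
{
	if (b.mask & ~a.mask)	/* b unknown where a claims knowledge */
		return false;
	b.value &= ~a.mask;
	return a.value == b.value;
}
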
34 | linux | https://github.com/torvalds/linux | kernel/bpf/verifier.c | 179d1c5602997fef5a940c6ddcf31212cbfebd14 | bpf: don't prune branches when a scalar is replaced with a pointer
This could be made safe by passing through a reference to env and checking
for env->allow_ptr_leaks, but it would only work one way and is probably
not worth the hassle - not doing it will not directly lead to program
rejection.
Fixes: f1174f77b50c ("bpf/verifier: rework value tracking")
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> | false | 5f6d745208838c9654712c9620472a78 | regsafe | static bool regsafe(struct bpf_reg_state *rold, struct bpf_reg_state *rcur,
struct idpair *idmap)
{
if (!(rold->live & REG_LIVE_READ))
/* explored state didn't use this */
return true;
if (memcmp(rold, rcur, offsetof(struct bpf_reg_state, live)) == 0)
return true;
if (rold->type == NOT_INIT)
/* explored state can't have used this */
return true;
if (rcur->type == NOT_INIT)
return false;
switch (rold->type) {
case SCALAR_VALUE:
if (rcur->type == SCALAR_VALUE) {
/* new val must satisfy old val knowledge */
return range_within(rold, rcur) &&
tnum_in(rold->var_off, rcur->var_off);
} else {
/* We're trying to use a pointer in place of a scalar.
* Even if the scalar was unbounded, this could lead to
* pointer leaks because scalars are allowed to leak
* while pointers are not. We could make this safe in
* special cases if root is calling us, but it's
* probably not worth the hassle.
*/
return false;
}
case PTR_TO_MAP_VALUE:
/* If the new min/max/var_off satisfy the old ones and
* everything else matches, we are OK.
* We don't care about the 'id' value, because nothing
* uses it for PTR_TO_MAP_VALUE (only for ..._OR_NULL)
*/
return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
range_within(rold, rcur) &&
tnum_in(rold->var_off, rcur->var_off);
case PTR_TO_MAP_VALUE_OR_NULL:
/* a PTR_TO_MAP_VALUE could be safe to use as a
* PTR_TO_MAP_VALUE_OR_NULL into the same map.
* However, if the old PTR_TO_MAP_VALUE_OR_NULL then got NULL-
* checked, doing so could have affected others with the same
* id, and we can't check for that because we lost the id when
* we converted to a PTR_TO_MAP_VALUE.
*/
if (rcur->type != PTR_TO_MAP_VALUE_OR_NULL)
return false;
if (memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)))
return false;
/* Check our ids match any regs they're supposed to */
return check_ids(rold->id, rcur->id, idmap);
case PTR_TO_PACKET_META:
case PTR_TO_PACKET:
if (rcur->type != rold->type)
return false;
/* We must have at least as much range as the old ptr
* did, so that any accesses which were safe before are
* still safe. This is true even if old range < old off,
* since someone could have accessed through (ptr - k), or
* even done ptr -= k in a register, to get a safe access.
*/
if (rold->range > rcur->range)
return false;
/* If the offsets don't match, we can't trust our alignment;
* nor can we be sure that we won't fall out of range.
*/
if (rold->off != rcur->off)
return false;
/* id relations must be preserved */
if (rold->id && !check_ids(rold->id, rcur->id, idmap))
return false;
/* new val must satisfy old val knowledge */
return range_within(rold, rcur) &&
tnum_in(rold->var_off, rcur->var_off);
case PTR_TO_CTX:
case CONST_PTR_TO_MAP:
case PTR_TO_STACK:
case PTR_TO_PACKET_END:
/* Only valid matches are exact, which memcmp() above
* would have accepted
*/
default:
/* Don't know what's going on, just say it's not safe */
return false;
}
/* Shouldn't get here; if we do, say it's not safe */
WARN_ON_ONCE(1);
return false;
}
| [[3470, "\t\t\t/* We're trying to use a pointer in place of a scalar.\n"], [3471, "\t\t\t * Even if the scalar was unbounded, this could lead to\n"], [3472, "\t\t\t * pointer leaks because scalars are allowed to leak\n"], [3473, "\t\t\t * while pointers are not. We could make this safe in\n"], [3474, "\t\t\t * special cases if root is calling us, but it's\n"], [3475, "\t\t\t * probably not worth the hassle.\n"], [3477, "\t\t\treturn false;\n"]] | [[3470, "/* We're trying to use a pointer in place of a scalar.\n\t\t\t * Even if the scalar was unbounded, this could lead to\n\t\t\t * pointer leaks because scalars are allowed to leak\n\t\t\t * while pointers are not. We could make this safe in\n\t\t\t * special cases if root is calling us, but it's\n\t\t\t * probably not worth the hassle.\n\t\t\t */"], [3477, "return false;"]] | [
"CVE-2017-17855"
] | [
"CWE-119"
] | 34 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"tnum_is_unknown"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": [
"U64_MAX",
"S64_MIN"
]
} |
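
Distilling the SCALAR_VALUE arm of the fixed row into one predicate makes the policy plain. This is a restatement, not kernel code, and the allow_ptr_leaks parameter exists only to mirror the aside in the commit message.

static bool scalar_slot_safe(const struct bpf_reg_state *rold,
			     const struct bpf_reg_state *rcur,
			     bool allow_ptr_leaks)
{
	if (rcur->type == SCALAR_VALUE)
		return range_within(rold, rcur) &&
		       tnum_in(rold->var_off, rcur->var_off);

	/*
	 * A pointer where a scalar used to be: pruning here could let a
	 * later leak of the "scalar" leak the pointer. Per the commit
	 * message this could be relaxed when allow_ptr_leaks is set,
	 * but that is deemed not worth the hassle.
	 */
	(void)allow_ptr_leaks;
	return false;
}
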
35 | linux | https://github.com/torvalds/linux | drivers/net/ethernet/mediatek/mtk_ppe.c | 17a5f6a78dc7b8db385de346092d7d9f9dc24df6 | net: ethernet: mtk_eth_soc: use after free in __mtk_ppe_check_skb()
The __mtk_foe_entry_clear() function frees "entry" so we have to use
the _safe() version of hlist_for_each_entry() to prevent a use after
free.
Fixes: 33fc42de3327 ("net: ethernet: mtk_eth_soc: support creating mac address based offload entries")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | true | 854ecd98b68ec4fcd7d7f1340b7fa1f0 | __mtk_ppe_check_skb | void __mtk_ppe_check_skb(struct mtk_ppe *ppe, struct sk_buff *skb, u16 hash)
{
struct hlist_head *head = &ppe->foe_flow[hash / 2];
struct mtk_foe_entry *hwe = &ppe->foe_table[hash];
struct mtk_flow_entry *entry;
struct mtk_foe_bridge key = {};
struct ethhdr *eh;
bool found = false;
u8 *tag;
spin_lock_bh(&ppe_lock);
if (FIELD_GET(MTK_FOE_IB1_STATE, hwe->ib1) == MTK_FOE_STATE_BIND)
goto out;
hlist_for_each_entry(entry, head, list) {
if (entry->type == MTK_FLOW_TYPE_L2_SUBFLOW) {
if (unlikely(FIELD_GET(MTK_FOE_IB1_STATE, hwe->ib1) ==
MTK_FOE_STATE_BIND))
continue;
entry->hash = 0xffff;
__mtk_foe_entry_clear(ppe, entry);
continue;
}
if (found || !mtk_flow_entry_match(entry, hwe)) {
if (entry->hash != 0xffff)
entry->hash = 0xffff;
continue;
}
entry->hash = hash;
__mtk_foe_entry_commit(ppe, &entry->data, hash);
found = true;
}
if (found)
goto out;
eh = eth_hdr(skb);
ether_addr_copy(key.dest_mac, eh->h_dest);
ether_addr_copy(key.src_mac, eh->h_source);
tag = skb->data - 2;
key.vlan = 0;
switch (skb->protocol) {
#if IS_ENABLED(CONFIG_NET_DSA)
case htons(ETH_P_XDSA):
if (!netdev_uses_dsa(skb->dev) ||
skb->dev->dsa_ptr->tag_ops->proto != DSA_TAG_PROTO_MTK)
goto out;
tag += 4;
if (get_unaligned_be16(tag) != ETH_P_8021Q)
break;
fallthrough;
#endif
case htons(ETH_P_8021Q):
key.vlan = get_unaligned_be16(tag + 2) & VLAN_VID_MASK;
break;
default:
break;
}
entry = rhashtable_lookup_fast(&ppe->l2_flows, &key, mtk_flow_l2_ht_params);
if (!entry)
goto out;
mtk_foe_entry_commit_subflow(ppe, entry, hash);
out:
spin_unlock_bh(&ppe_lock);
}
| [[612, "\thlist_for_each_entry(entry, head, list) {\n"]] | [[612, "\thlist_for_each_entry(entry, head, list) {\n"]] | [
"CVE-2022-3636"
] | [
"CWE-416",
"CWE-119"
] | 36 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"__mtk_foe_entry_clear"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
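
Why the plain iterator in row 35 is fatal here: it re-reads pos->member.next after the loop body has run, so freeing entry in the body (as the commit message says __mtk_foe_entry_clear() does) turns the next step into a use-after-free. A hedged sketch of both macros, paraphrased from include/linux/list.h:

#define hlist_entry_safe(ptr, type, member)				      \
	({ typeof(ptr) ____ptr = (ptr);					      \
	   ____ptr ? hlist_entry(____ptr, type, member) : NULL;	      \
	})

#define hlist_for_each_entry(pos, head, member)			      \
	for (pos = hlist_entry_safe((head)->first, typeof(*(pos)), member);  \
	     pos;							      \
	     pos = hlist_entry_safe((pos)->member.next, typeof(*(pos)), member))

/* The _safe variant caches the next node in @n before the body runs,
 * so the body may unlink or free @pos. */
#define hlist_for_each_entry_safe(pos, n, head, member)		      \
	for (pos = hlist_entry_safe((head)->first, typeof(*(pos)), member);  \
	     pos && ({ n = (pos)->member.next; 1; });			      \
	     pos = hlist_entry_safe(n, typeof(*(pos)), member))
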
36 | linux | https://github.com/torvalds/linux | drivers/net/ethernet/mediatek/mtk_ppe.c | 17a5f6a78dc7b8db385de346092d7d9f9dc24df6 | net: ethernet: mtk_eth_soc: use after free in __mtk_ppe_check_skb()
The __mtk_foe_entry_clear() function frees "entry" so we have to use
the _safe() version of hlist_for_each_entry() to prevent a use after
free.
Fixes: 33fc42de3327 ("net: ethernet: mtk_eth_soc: support creating mac address based offload entries")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | false | 0eab16ac49e9ae77017812443475b94a | __mtk_ppe_check_skb | void __mtk_ppe_check_skb(struct mtk_ppe *ppe, struct sk_buff *skb, u16 hash)
{
struct hlist_head *head = &ppe->foe_flow[hash / 2];
struct mtk_foe_entry *hwe = &ppe->foe_table[hash];
struct mtk_flow_entry *entry;
struct mtk_foe_bridge key = {};
struct hlist_node *n;
struct ethhdr *eh;
bool found = false;
u8 *tag;
spin_lock_bh(&ppe_lock);
if (FIELD_GET(MTK_FOE_IB1_STATE, hwe->ib1) == MTK_FOE_STATE_BIND)
goto out;
hlist_for_each_entry_safe(entry, n, head, list) {
if (entry->type == MTK_FLOW_TYPE_L2_SUBFLOW) {
if (unlikely(FIELD_GET(MTK_FOE_IB1_STATE, hwe->ib1) ==
MTK_FOE_STATE_BIND))
continue;
entry->hash = 0xffff;
__mtk_foe_entry_clear(ppe, entry);
continue;
}
if (found || !mtk_flow_entry_match(entry, hwe)) {
if (entry->hash != 0xffff)
entry->hash = 0xffff;
continue;
}
entry->hash = hash;
__mtk_foe_entry_commit(ppe, &entry->data, hash);
found = true;
}
if (found)
goto out;
eh = eth_hdr(skb);
ether_addr_copy(key.dest_mac, eh->h_dest);
ether_addr_copy(key.src_mac, eh->h_source);
tag = skb->data - 2;
key.vlan = 0;
switch (skb->protocol) {
#if IS_ENABLED(CONFIG_NET_DSA)
case htons(ETH_P_XDSA):
if (!netdev_uses_dsa(skb->dev) ||
skb->dev->dsa_ptr->tag_ops->proto != DSA_TAG_PROTO_MTK)
goto out;
tag += 4;
if (get_unaligned_be16(tag) != ETH_P_8021Q)
break;
fallthrough;
#endif
case htons(ETH_P_8021Q):
key.vlan = get_unaligned_be16(tag + 2) & VLAN_VID_MASK;
break;
default:
break;
}
entry = rhashtable_lookup_fast(&ppe->l2_flows, &key, mtk_flow_l2_ht_params);
if (!entry)
goto out;
mtk_foe_entry_commit_subflow(ppe, entry, hash);
out:
spin_unlock_bh(&ppe_lock);
}
| [[603, "\tstruct hlist_node *n;\n"], [613, "\thlist_for_each_entry_safe(entry, n, head, list) {\n"]] | [[603, "struct hlist_node *n;"], [613, "\thlist_for_each_entry_safe(entry, n, head, list) {\n"]] | [
"CVE-2022-3636"
] | [
"CWE-416",
"CWE-119"
] | 36 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"__mtk_foe_entry_clear"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
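
The same idea in a self-contained userspace form, using a plain singly linked list; all names here are hypothetical.

#include <stdlib.h>

struct node { struct node *next; int stale; };

static void prune(struct node **head)
{
	struct node **link = head;
	struct node *n, *next;

	for (n = *head; n; n = next) {
		next = n->next;		/* cache before n may be freed */
		if (n->stale) {
			*link = next;
			free(n);	/* safe: the loop never touches n again */
		} else {
			link = &n->next;
		}
	}
}
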
37 | linux | https://github.com/torvalds/linux | drivers/mtd/spi-nor/cadence-quadspi.c | 193e87143c290ec16838f5368adc0e0bc94eb931 | mtd: spi-nor: Off by one in cqspi_setup_flash()
There are CQSPI_MAX_CHIPSELECT elements in the ->f_pdata array so the >
should be >=.
Fixes: 140623410536 ('mtd: spi-nor: Add driver for Cadence Quad SPI Flash Controller')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Marek Vasut <marex@denx.de>
Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com> | true | 0b477c06c2026d64a183015547046464 | cqspi_setup_flash | static int cqspi_setup_flash(struct cqspi_st *cqspi, struct device_node *np)
{
struct platform_device *pdev = cqspi->pdev;
struct device *dev = &pdev->dev;
struct cqspi_flash_pdata *f_pdata;
struct spi_nor *nor;
struct mtd_info *mtd;
unsigned int cs;
int i, ret;
/* Get flash device data */
for_each_available_child_of_node(dev->of_node, np) {
if (of_property_read_u32(np, "reg", &cs)) {
dev_err(dev, "Couldn't determine chip select.\n");
goto err;
}
if (cs > CQSPI_MAX_CHIPSELECT) {
dev_err(dev, "Chip select %d out of range.\n", cs);
goto err;
}
f_pdata = &cqspi->f_pdata[cs];
f_pdata->cqspi = cqspi;
f_pdata->cs = cs;
ret = cqspi_of_get_flash_pdata(pdev, f_pdata, np);
if (ret)
goto err;
nor = &f_pdata->nor;
mtd = &nor->mtd;
mtd->priv = nor;
nor->dev = dev;
spi_nor_set_flash_node(nor, np);
nor->priv = f_pdata;
nor->read_reg = cqspi_read_reg;
nor->write_reg = cqspi_write_reg;
nor->read = cqspi_read;
nor->write = cqspi_write;
nor->erase = cqspi_erase;
nor->prepare = cqspi_prep;
nor->unprepare = cqspi_unprep;
mtd->name = devm_kasprintf(dev, GFP_KERNEL, "%s.%d",
dev_name(dev), cs);
if (!mtd->name) {
ret = -ENOMEM;
goto err;
}
ret = spi_nor_scan(nor, NULL, SPI_NOR_QUAD);
if (ret)
goto err;
ret = mtd_device_register(mtd, NULL, 0);
if (ret)
goto err;
f_pdata->registered = true;
}
return 0;
err:
for (i = 0; i < CQSPI_MAX_CHIPSELECT; i++)
if (cqspi->f_pdata[i].registered)
mtd_device_unregister(&cqspi->f_pdata[i].nor.mtd);
return ret;
}
| [[1085, "\t\tif (cs > CQSPI_MAX_CHIPSELECT) {\n"]] | [[1085, "if (cs > CQSPI_MAX_CHIPSELECT)"]] | [
"CVE-2016-10764"
] | [
"CWE-119"
] | 38 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [
"CQSPI_MAX_CHIPSELECT"
],
"Type Execution Declaration": [
"struct cqspi_st"
]
} |
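
The CVE-2016-10764 bug in miniature: an N-element array has valid indices 0..N-1, so the rejection test must be >=, not >. The harness below is hypothetical and the bound is an illustrative stand-in for CQSPI_MAX_CHIPSELECT.

#include <stdbool.h>

#define MAX_CS 16			/* illustrative stand-in */

static int f_pdata[MAX_CS];		/* valid indices: 0..15 */

static bool cs_valid(unsigned int cs)
{
	/* The buggy row used cs > MAX_CS, which accepts cs == MAX_CS
	 * and indexes one element past the end of f_pdata. */
	return cs < MAX_CS;
}
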
38 | linux | https://github.com/torvalds/linux | drivers/mtd/spi-nor/cadence-quadspi.c | 193e87143c290ec16838f5368adc0e0bc94eb931 | mtd: spi-nor: Off by one in cqspi_setup_flash()
There are CQSPI_MAX_CHIPSELECT elements in the ->f_pdata array so the >
should be >=.
Fixes: 140623410536 ('mtd: spi-nor: Add driver for Cadence Quad SPI Flash Controller')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Marek Vasut <marex@denx.de>
Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com> | false | 189ced77266d4818ff184322a3931409 | cqspi_setup_flash | static int cqspi_setup_flash(struct cqspi_st *cqspi, struct device_node *np)
{
struct platform_device *pdev = cqspi->pdev;
struct device *dev = &pdev->dev;
struct cqspi_flash_pdata *f_pdata;
struct spi_nor *nor;
struct mtd_info *mtd;
unsigned int cs;
int i, ret;
/* Get flash device data */
for_each_available_child_of_node(dev->of_node, np) {
if (of_property_read_u32(np, "reg", &cs)) {
dev_err(dev, "Couldn't determine chip select.\n");
goto err;
}
if (cs >= CQSPI_MAX_CHIPSELECT) {
dev_err(dev, "Chip select %d out of range.\n", cs);
goto err;
}
f_pdata = &cqspi->f_pdata[cs];
f_pdata->cqspi = cqspi;
f_pdata->cs = cs;
ret = cqspi_of_get_flash_pdata(pdev, f_pdata, np);
if (ret)
goto err;
nor = &f_pdata->nor;
mtd = &nor->mtd;
mtd->priv = nor;
nor->dev = dev;
spi_nor_set_flash_node(nor, np);
nor->priv = f_pdata;
nor->read_reg = cqspi_read_reg;
nor->write_reg = cqspi_write_reg;
nor->read = cqspi_read;
nor->write = cqspi_write;
nor->erase = cqspi_erase;
nor->prepare = cqspi_prep;
nor->unprepare = cqspi_unprep;
mtd->name = devm_kasprintf(dev, GFP_KERNEL, "%s.%d",
dev_name(dev), cs);
if (!mtd->name) {
ret = -ENOMEM;
goto err;
}
ret = spi_nor_scan(nor, NULL, SPI_NOR_QUAD);
if (ret)
goto err;
ret = mtd_device_register(mtd, NULL, 0);
if (ret)
goto err;
f_pdata->registered = true;
}
return 0;
err:
for (i = 0; i < CQSPI_MAX_CHIPSELECT; i++)
if (cqspi->f_pdata[i].registered)
mtd_device_unregister(&cqspi->f_pdata[i].nor.mtd);
return ret;
}
| [[1085, "\t\tif (cs >= CQSPI_MAX_CHIPSELECT) {\n"]] | [[1085, "if (cs >= CQSPI_MAX_CHIPSELECT)"]] | [
"CVE-2016-10764"
] | [
"CWE-119"
] | 38 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [
"CQSPI_MAX_CHIPSELECT"
],
"Type Execution Declaration": [
"struct cqspi_st"
]
} |
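
A hedged hardening note on the fixed check: deriving the bound from the array itself keeps the constant and the array from drifting apart. The kernel's ARRAY_SIZE also adds a build-time type check; this is the portable core of it, with the call site shown hypothetically.

#define ARRAY_SIZE(a)	(sizeof(a) / sizeof((a)[0]))

/* hypothetically, in cqspi_setup_flash():
 *	if (cs >= ARRAY_SIZE(cqspi->f_pdata))
 *		... reject the chip select ...
 */
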
39 | linux | https://github.com/torvalds/linux | fs/udf/super.c | 1df2ae31c724e57be9d7ac00d78db8a5dabdd050 | udf: Fortify loading of sparing table
Add sanity checks when loading sparing table from disk to avoid accessing
unallocated memory or writing to it.
Signed-off-by: Jan Kara <jack@suse.cz> | true | b247c8b050141105ac9dbfe5e55b7bfd | udf_load_logicalvol | static int udf_load_logicalvol(struct super_block *sb, sector_t block,
struct kernel_lb_addr *fileset)
{
struct logicalVolDesc *lvd;
int i, j, offset;
uint8_t type;
struct udf_sb_info *sbi = UDF_SB(sb);
struct genericPartitionMap *gpm;
uint16_t ident;
struct buffer_head *bh;
unsigned int table_len;
int ret = 0;
bh = udf_read_tagged(sb, block, block, &ident);
if (!bh)
return 1;
BUG_ON(ident != TAG_IDENT_LVD);
lvd = (struct logicalVolDesc *)bh->b_data;
table_len = le32_to_cpu(lvd->mapTableLength);
if (sizeof(*lvd) + table_len > sb->s_blocksize) {
udf_err(sb, "error loading logical volume descriptor: "
"Partition table too long (%u > %lu)\n", table_len,
sb->s_blocksize - sizeof(*lvd));
goto out_bh;
}
ret = udf_sb_alloc_partition_maps(sb, le32_to_cpu(lvd->numPartitionMaps));
if (ret)
goto out_bh;
for (i = 0, offset = 0;
i < sbi->s_partitions && offset < table_len;
i++, offset += gpm->partitionMapLength) {
struct udf_part_map *map = &sbi->s_partmaps[i];
gpm = (struct genericPartitionMap *)
&(lvd->partitionMaps[offset]);
type = gpm->partitionMapType;
if (type == 1) {
struct genericPartitionMap1 *gpm1 =
(struct genericPartitionMap1 *)gpm;
map->s_partition_type = UDF_TYPE1_MAP15;
map->s_volumeseqnum = le16_to_cpu(gpm1->volSeqNum);
map->s_partition_num = le16_to_cpu(gpm1->partitionNum);
map->s_partition_func = NULL;
} else if (type == 2) {
struct udfPartitionMap2 *upm2 =
(struct udfPartitionMap2 *)gpm;
if (!strncmp(upm2->partIdent.ident, UDF_ID_VIRTUAL,
strlen(UDF_ID_VIRTUAL))) {
u16 suf =
le16_to_cpu(((__le16 *)upm2->partIdent.
identSuffix)[0]);
if (suf < 0x0200) {
map->s_partition_type =
UDF_VIRTUAL_MAP15;
map->s_partition_func =
udf_get_pblock_virt15;
} else {
map->s_partition_type =
UDF_VIRTUAL_MAP20;
map->s_partition_func =
udf_get_pblock_virt20;
}
} else if (!strncmp(upm2->partIdent.ident,
UDF_ID_SPARABLE,
strlen(UDF_ID_SPARABLE))) {
uint32_t loc;
struct sparingTable *st;
struct sparablePartitionMap *spm =
(struct sparablePartitionMap *)gpm;
map->s_partition_type = UDF_SPARABLE_MAP15;
map->s_type_specific.s_sparing.s_packet_len =
le16_to_cpu(spm->packetLength);
for (j = 0; j < spm->numSparingTables; j++) {
struct buffer_head *bh2;
loc = le32_to_cpu(
spm->locSparingTable[j]);
bh2 = udf_read_tagged(sb, loc, loc,
&ident);
map->s_type_specific.s_sparing.
s_spar_map[j] = bh2;
if (bh2 == NULL)
continue;
st = (struct sparingTable *)bh2->b_data;
if (ident != 0 || strncmp(
st->sparingIdent.ident,
UDF_ID_SPARING,
strlen(UDF_ID_SPARING))) {
brelse(bh2);
map->s_type_specific.s_sparing.
s_spar_map[j] = NULL;
}
}
map->s_partition_func = udf_get_pblock_spar15;
} else if (!strncmp(upm2->partIdent.ident,
UDF_ID_METADATA,
strlen(UDF_ID_METADATA))) {
struct udf_meta_data *mdata =
&map->s_type_specific.s_metadata;
struct metadataPartitionMap *mdm =
(struct metadataPartitionMap *)
&(lvd->partitionMaps[offset]);
udf_debug("Parsing Logical vol part %d type %d id=%s\n",
i, type, UDF_ID_METADATA);
map->s_partition_type = UDF_METADATA_MAP25;
map->s_partition_func = udf_get_pblock_meta25;
mdata->s_meta_file_loc =
le32_to_cpu(mdm->metadataFileLoc);
mdata->s_mirror_file_loc =
le32_to_cpu(mdm->metadataMirrorFileLoc);
mdata->s_bitmap_file_loc =
le32_to_cpu(mdm->metadataBitmapFileLoc);
mdata->s_alloc_unit_size =
le32_to_cpu(mdm->allocUnitSize);
mdata->s_align_unit_size =
le16_to_cpu(mdm->alignUnitSize);
if (mdm->flags & 0x01)
mdata->s_flags |= MF_DUPLICATE_MD;
udf_debug("Metadata Ident suffix=0x%x\n",
le16_to_cpu(*(__le16 *)
mdm->partIdent.identSuffix));
udf_debug("Metadata part num=%d\n",
le16_to_cpu(mdm->partitionNum));
udf_debug("Metadata part alloc unit size=%d\n",
le32_to_cpu(mdm->allocUnitSize));
udf_debug("Metadata file loc=%d\n",
le32_to_cpu(mdm->metadataFileLoc));
udf_debug("Mirror file loc=%d\n",
le32_to_cpu(mdm->metadataMirrorFileLoc));
udf_debug("Bitmap file loc=%d\n",
le32_to_cpu(mdm->metadataBitmapFileLoc));
udf_debug("Flags: %d %d\n",
mdata->s_flags, mdm->flags);
} else {
udf_debug("Unknown ident: %s\n",
upm2->partIdent.ident);
continue;
}
map->s_volumeseqnum = le16_to_cpu(upm2->volSeqNum);
map->s_partition_num = le16_to_cpu(upm2->partitionNum);
}
udf_debug("Partition (%d:%d) type %d on volume %d\n",
i, map->s_partition_num, type, map->s_volumeseqnum);
}
if (fileset) {
struct long_ad *la = (struct long_ad *)&(lvd->logicalVolContentsUse[0]);
*fileset = lelb_to_cpu(la->extLocation);
udf_debug("FileSet found in LogicalVolDesc at block=%d, partition=%d\n",
fileset->logicalBlockNum,
fileset->partitionReferenceNum);
}
if (lvd->integritySeqExt.extLength)
udf_load_logicalvolint(sb, leea_to_cpu(lvd->integritySeqExt));
out_bh:
brelse(bh);
return ret;
}
| [[1222, "\tint i, j, offset;\n"], [1284, "\t\t\t\tuint32_t loc;\n"], [1285, "\t\t\t\tstruct sparingTable *st;\n"], [1286, "\t\t\t\tstruct sparablePartitionMap *spm =\n"], [1287, "\t\t\t\t\t(struct sparablePartitionMap *)gpm;\n"], [1288, "\n"], [1289, "\t\t\t\tmap->s_partition_type = UDF_SPARABLE_MAP15;\n"], [1290, "\t\t\t\tmap->s_type_specific.s_sparing.s_packet_len =\n"], [1291, "\t\t\t\t\t\tle16_to_cpu(spm->packetLength);\n"], [1292, "\t\t\t\tfor (j = 0; j < spm->numSparingTables; j++) {\n"], [1293, "\t\t\t\t\tstruct buffer_head *bh2;\n"], [1294, "\n"], [1295, "\t\t\t\t\tloc = le32_to_cpu(\n"], [1296, "\t\t\t\t\t\tspm->locSparingTable[j]);\n"], [1297, "\t\t\t\t\tbh2 = udf_read_tagged(sb, loc, loc,\n"], [1298, "\t\t\t\t\t\t\t &ident);\n"], [1299, "\t\t\t\t\tmap->s_type_specific.s_sparing.\n"], [1300, "\t\t\t\t\t\t\ts_spar_map[j] = bh2;\n"], [1301, "\n"], [1302, "\t\t\t\t\tif (bh2 == NULL)\n"], [1303, "\t\t\t\t\t\tcontinue;\n"], [1304, "\n"], [1305, "\t\t\t\t\tst = (struct sparingTable *)bh2->b_data;\n"], [1306, "\t\t\t\t\tif (ident != 0 || strncmp(\n"], [1307, "\t\t\t\t\t\tst->sparingIdent.ident,\n"], [1308, "\t\t\t\t\t\tUDF_ID_SPARING,\n"], [1309, "\t\t\t\t\t\tstrlen(UDF_ID_SPARING))) {\n"], [1310, "\t\t\t\t\t\tbrelse(bh2);\n"], [1311, "\t\t\t\t\t\tmap->s_type_specific.s_sparing.\n"], [1312, "\t\t\t\t\t\t\ts_spar_map[j] = NULL;\n"], [1313, "\t\t\t\t\t}\n"], [1314, "\t\t\t\t}\n"], [1315, "\t\t\t\tmap->s_partition_func = udf_get_pblock_spar15;\n"]] | [[1222, "int i, j, offset;"], [1284, "uint32_t loc;"], [1285, "struct sparingTable *st;"], [1286, "struct sparablePartitionMap *spm =\n\t\t\t\t\t(struct sparablePartitionMap *)gpm;"], [1288, "\n"], [1289, "map->s_partition_type = UDF_SPARABLE_MAP15;"], [1290, "map->s_type_specific.s_sparing.s_packet_len =\n\t\t\t\t\t\tle16_to_cpu(spm->packetLength);"], [1292, "for (j = 0; j < spm->numSparingTables; j++) {"], [1293, "struct buffer_head *bh2;"], [1294, "\n"], [1295, "loc = le32_to_cpu(\n\t\t\t\t\t\tspm->locSparingTable[j]);"], [1297, "bh2 = udf_read_tagged(sb, loc, loc,\n\t\t\t\t\t\t\t &ident);"], [1299, "map->s_type_specific.s_sparing.\n\t\t\t\t\t\t\ts_spar_map[j] = bh2;"], [1301, "\n"], [1302, "if (bh2 == NULL)"], [1303, "continue;"], [1304, "\n"], [1305, "st = (struct sparingTable *)bh2->b_data;"], [1306, "if (ident != 0 || strncmp(\n\t\t\t\t\t\tst->sparingIdent.ident,\n\t\t\t\t\t\tUDF_ID_SPARING,\n\t\t\t\t\t\tstrlen(UDF_ID_SPARING)))"], [1310, "brelse(bh2);"], [1311, "map->s_type_specific.s_sparing.\n\t\t\t\t\t\t\ts_spar_map[j] = NULL;"], [1313, "\t\t\t\t\t}\n"], [1314, "\t\t\t\t}\n"], [1315, "map->s_partition_func = udf_get_pblock_spar15;"]] | [
"CVE-2012-3400"
] | [
"CWE-787"
] | 40 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"udf_read_tagged"
],
"Function Argument": [
"sb"
],
"Globals": [],
"Type Execution Declaration": []
} |
40 | linux | https://github.com/torvalds/linux | fs/udf/super.c | 1df2ae31c724e57be9d7ac00d78db8a5dabdd050 | udf: Fortify loading of sparing table
Add sanity checks when loading sparing table from disk to avoid accessing
unallocated memory or writing to it.
Signed-off-by: Jan Kara <jack@suse.cz> | false | e02f163864e3bca32c2530599656b6e9 | udf_load_logicalvol | static int udf_load_logicalvol(struct super_block *sb, sector_t block,
struct kernel_lb_addr *fileset)
{
struct logicalVolDesc *lvd;
int i, offset;
uint8_t type;
struct udf_sb_info *sbi = UDF_SB(sb);
struct genericPartitionMap *gpm;
uint16_t ident;
struct buffer_head *bh;
unsigned int table_len;
int ret = 0;
bh = udf_read_tagged(sb, block, block, &ident);
if (!bh)
return 1;
BUG_ON(ident != TAG_IDENT_LVD);
lvd = (struct logicalVolDesc *)bh->b_data;
table_len = le32_to_cpu(lvd->mapTableLength);
if (sizeof(*lvd) + table_len > sb->s_blocksize) {
udf_err(sb, "error loading logical volume descriptor: "
"Partition table too long (%u > %lu)\n", table_len,
sb->s_blocksize - sizeof(*lvd));
goto out_bh;
}
ret = udf_sb_alloc_partition_maps(sb, le32_to_cpu(lvd->numPartitionMaps));
if (ret)
goto out_bh;
for (i = 0, offset = 0;
i < sbi->s_partitions && offset < table_len;
i++, offset += gpm->partitionMapLength) {
struct udf_part_map *map = &sbi->s_partmaps[i];
gpm = (struct genericPartitionMap *)
&(lvd->partitionMaps[offset]);
type = gpm->partitionMapType;
if (type == 1) {
struct genericPartitionMap1 *gpm1 =
(struct genericPartitionMap1 *)gpm;
map->s_partition_type = UDF_TYPE1_MAP15;
map->s_volumeseqnum = le16_to_cpu(gpm1->volSeqNum);
map->s_partition_num = le16_to_cpu(gpm1->partitionNum);
map->s_partition_func = NULL;
} else if (type == 2) {
struct udfPartitionMap2 *upm2 =
(struct udfPartitionMap2 *)gpm;
if (!strncmp(upm2->partIdent.ident, UDF_ID_VIRTUAL,
strlen(UDF_ID_VIRTUAL))) {
u16 suf =
le16_to_cpu(((__le16 *)upm2->partIdent.
identSuffix)[0]);
if (suf < 0x0200) {
map->s_partition_type =
UDF_VIRTUAL_MAP15;
map->s_partition_func =
udf_get_pblock_virt15;
} else {
map->s_partition_type =
UDF_VIRTUAL_MAP20;
map->s_partition_func =
udf_get_pblock_virt20;
}
} else if (!strncmp(upm2->partIdent.ident,
UDF_ID_SPARABLE,
strlen(UDF_ID_SPARABLE))) {
if (udf_load_sparable_map(sb, map,
(struct sparablePartitionMap *)gpm) < 0)
goto out_bh;
} else if (!strncmp(upm2->partIdent.ident,
UDF_ID_METADATA,
strlen(UDF_ID_METADATA))) {
struct udf_meta_data *mdata =
&map->s_type_specific.s_metadata;
struct metadataPartitionMap *mdm =
(struct metadataPartitionMap *)
&(lvd->partitionMaps[offset]);
udf_debug("Parsing Logical vol part %d type %d id=%s\n",
i, type, UDF_ID_METADATA);
map->s_partition_type = UDF_METADATA_MAP25;
map->s_partition_func = udf_get_pblock_meta25;
mdata->s_meta_file_loc =
le32_to_cpu(mdm->metadataFileLoc);
mdata->s_mirror_file_loc =
le32_to_cpu(mdm->metadataMirrorFileLoc);
mdata->s_bitmap_file_loc =
le32_to_cpu(mdm->metadataBitmapFileLoc);
mdata->s_alloc_unit_size =
le32_to_cpu(mdm->allocUnitSize);
mdata->s_align_unit_size =
le16_to_cpu(mdm->alignUnitSize);
if (mdm->flags & 0x01)
mdata->s_flags |= MF_DUPLICATE_MD;
udf_debug("Metadata Ident suffix=0x%x\n",
le16_to_cpu(*(__le16 *)
mdm->partIdent.identSuffix));
udf_debug("Metadata part num=%d\n",
le16_to_cpu(mdm->partitionNum));
udf_debug("Metadata part alloc unit size=%d\n",
le32_to_cpu(mdm->allocUnitSize));
udf_debug("Metadata file loc=%d\n",
le32_to_cpu(mdm->metadataFileLoc));
udf_debug("Mirror file loc=%d\n",
le32_to_cpu(mdm->metadataMirrorFileLoc));
udf_debug("Bitmap file loc=%d\n",
le32_to_cpu(mdm->metadataBitmapFileLoc));
udf_debug("Flags: %d %d\n",
mdata->s_flags, mdm->flags);
} else {
udf_debug("Unknown ident: %s\n",
upm2->partIdent.ident);
continue;
}
map->s_volumeseqnum = le16_to_cpu(upm2->volSeqNum);
map->s_partition_num = le16_to_cpu(upm2->partitionNum);
}
udf_debug("Partition (%d:%d) type %d on volume %d\n",
i, map->s_partition_num, type, map->s_volumeseqnum);
}
if (fileset) {
struct long_ad *la = (struct long_ad *)&(lvd->logicalVolContentsUse[0]);
*fileset = lelb_to_cpu(la->extLocation);
udf_debug("FileSet found in LogicalVolDesc at block=%d, partition=%d\n",
fileset->logicalBlockNum,
fileset->partitionReferenceNum);
}
if (lvd->integritySeqExt.extLength)
udf_load_logicalvolint(sb, leea_to_cpu(lvd->integritySeqExt));
out_bh:
brelse(bh);
return ret;
}
| [[1271, "\tint i, offset;\n"], [1333, "\t\t\t\tif (udf_load_sparable_map(sb, map,\n"], [1334, "\t\t\t\t (struct sparablePartitionMap *)gpm) < 0)\n"], [1335, "\t\t\t\t\tgoto out_bh;\n"]] | [[1271, "int i, offset;"], [1333, "if (udf_load_sparable_map(sb, map,\n\t\t\t\t (struct sparablePartitionMap *)gpm) < 0)"], [1335, "goto out_bh;"]] | [
"CVE-2012-3400"
] | [
"CWE-787"
] | 40 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"udf_read_tagged"
],
"Function Argument": [
"sb"
],
"Globals": [],
"Type Execution Declaration": []
} |
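Note on the pair above: the fixed variant replaces the inline sparing-table loop with a call to udf_load_sparable_map(), whose body is not part of this record. As a hedged sketch of the validation such a helper needs — count and length fields read from disk must be checked against the fixed in-memory capacity before they drive any indexing (struct and field names here are illustrative, not the real on-disk layout):

#include <stdint.h>

#define SPAR_MAP_SLOTS 4	/* assumed capacity of s_spar_map[]; see the real header */

/* Illustrative shape of an on-disk sparable map header. */
struct spar_map_hdr {
	uint8_t  num_tables;	/* attacker-controlled when read from media */
	uint32_t loc[SPAR_MAP_SLOTS];
};

/* Return 0 only when every subsequent loc[j] access stays in bounds. */
static int spar_map_sane(const struct spar_map_hdr *hdr)
{
	if (hdr->num_tables > SPAR_MAP_SLOTS)
		return -1;	/* looping 0..num_tables-1 would overflow */
	return 0;
}

int main(void)
{
	struct spar_map_hdr h = { .num_tables = 9 };
	return spar_map_sane(&h) ? 1 : 0;	/* exits 1: rejected */
}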
41 | linux | https://github.com/torvalds/linux | mm/hugetlb.c | 1e3921471354244f70fe268586ff94a97a6dd4df | userfaultfd: hugetlbfs: prevent UFFDIO_COPY to fill beyond the end of i_size
This oops:
kernel BUG at fs/hugetlbfs/inode.c:484!
RIP: remove_inode_hugepages+0x3d0/0x410
Call Trace:
hugetlbfs_setattr+0xd9/0x130
notify_change+0x292/0x410
do_truncate+0x65/0xa0
do_sys_ftruncate.constprop.3+0x11a/0x180
SyS_ftruncate+0xe/0x10
tracesys+0xd9/0xde
was caused by the lack of i_size check in hugetlb_mcopy_atomic_pte.
mmap() can still succeed beyond the end of the i_size after vmtruncate
zapped vmas in those ranges, but the faults must not succeed, and that
includes UFFDIO_COPY.
We could differentiate the retval to userland to represent a SIGBUS like
a page fault would do (vs SIGSEGV), but it doesn't seem very useful and
we'd need to pick a random retval as there's no meaningful syscall
retval that would differentiate from SIGSEGV and SIGBUS, there's just
-EFAULT.
Link: http://lkml.kernel.org/r/20171016223914.2421-2-aarcange@redhat.com
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> | true | 4cfc89b5450fdc127cd532d6296ec224 | hugetlb_mcopy_atomic_pte | int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
pte_t *dst_pte,
struct vm_area_struct *dst_vma,
unsigned long dst_addr,
unsigned long src_addr,
struct page **pagep)
{
int vm_shared = dst_vma->vm_flags & VM_SHARED;
struct hstate *h = hstate_vma(dst_vma);
pte_t _dst_pte;
spinlock_t *ptl;
int ret;
struct page *page;
if (!*pagep) {
ret = -ENOMEM;
page = alloc_huge_page(dst_vma, dst_addr, 0);
if (IS_ERR(page))
goto out;
ret = copy_huge_page_from_user(page,
(const void __user *) src_addr,
pages_per_huge_page(h), false);
/* fallback to copy_from_user outside mmap_sem */
if (unlikely(ret)) {
ret = -EFAULT;
*pagep = page;
/* don't free the page */
goto out;
}
} else {
page = *pagep;
*pagep = NULL;
}
/*
* The memory barrier inside __SetPageUptodate makes sure that
* preceding stores to the page contents become visible before
* the set_pte_at() write.
*/
__SetPageUptodate(page);
set_page_huge_active(page);
/*
* If shared, add to page cache
*/
if (vm_shared) {
struct address_space *mapping = dst_vma->vm_file->f_mapping;
pgoff_t idx = vma_hugecache_offset(h, dst_vma, dst_addr);
ret = huge_add_to_page_cache(page, mapping, idx);
if (ret)
goto out_release_nounlock;
}
ptl = huge_pte_lockptr(h, dst_mm, dst_pte);
spin_lock(ptl);
ret = -EEXIST;
if (!huge_pte_none(huge_ptep_get(dst_pte)))
goto out_release_unlock;
if (vm_shared) {
page_dup_rmap(page, true);
} else {
ClearPagePrivate(page);
hugepage_add_new_anon_rmap(page, dst_vma, dst_addr);
}
_dst_pte = make_huge_pte(dst_vma, page, dst_vma->vm_flags & VM_WRITE);
if (dst_vma->vm_flags & VM_WRITE)
_dst_pte = huge_pte_mkdirty(_dst_pte);
_dst_pte = pte_mkyoung(_dst_pte);
set_huge_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
(void)huge_ptep_set_access_flags(dst_vma, dst_addr, dst_pte, _dst_pte,
dst_vma->vm_flags & VM_WRITE);
hugetlb_count_add(pages_per_huge_page(h), dst_mm);
/* No need to invalidate - it was non-present before */
update_mmu_cache(dst_vma, dst_addr, dst_pte);
spin_unlock(ptl);
if (vm_shared)
unlock_page(page);
ret = 0;
out:
return ret;
out_release_unlock:
spin_unlock(ptl);
if (vm_shared)
unlock_page(page);
out_release_nounlock:
put_page(page);
goto out;
}
| [[4028, "\t\tstruct address_space *mapping = dst_vma->vm_file->f_mapping;\n"], [4029, "\t\tpgoff_t idx = vma_hugecache_offset(h, dst_vma, dst_addr);\n"]] | [[4028, "struct address_space *mapping = dst_vma->vm_file->f_mapping;"], [4029, "pgoff_t idx = vma_hugecache_offset(h, dst_vma, dst_addr);"]] | [
"CVE-2017-15128"
] | [
"CWE-119"
] | 42 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"vma_hugecache_offset",
"i_size_read",
"huge_page_shift"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
42 | linux | https://github.com/torvalds/linux | mm/hugetlb.c | 1e3921471354244f70fe268586ff94a97a6dd4df | userfaultfd: hugetlbfs: prevent UFFDIO_COPY to fill beyond the end of i_size
This oops:
kernel BUG at fs/hugetlbfs/inode.c:484!
RIP: remove_inode_hugepages+0x3d0/0x410
Call Trace:
hugetlbfs_setattr+0xd9/0x130
notify_change+0x292/0x410
do_truncate+0x65/0xa0
do_sys_ftruncate.constprop.3+0x11a/0x180
SyS_ftruncate+0xe/0x10
tracesys+0xd9/0xde
was caused by the lack of i_size check in hugetlb_mcopy_atomic_pte.
mmap() can still succeed beyond the end of the i_size after vmtruncate
zapped vmas in those ranges, but the faults must not succeed, and that
includes UFFDIO_COPY.
We could differentiate the retval to userland to represent a SIGBUS like
a page fault would do (vs SIGSEGV), but it doesn't seem very useful and
we'd need to pick a random retval as there's no meaningful syscall
retval that would differentiate from SIGSEGV and SIGBUS, there's just
-EFAULT.
Link: http://lkml.kernel.org/r/20171016223914.2421-2-aarcange@redhat.com
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> | false | 0074ab9caf4d6acfdfd6c7aa2e999b42 | hugetlb_mcopy_atomic_pte | int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
pte_t *dst_pte,
struct vm_area_struct *dst_vma,
unsigned long dst_addr,
unsigned long src_addr,
struct page **pagep)
{
struct address_space *mapping;
pgoff_t idx;
unsigned long size;
int vm_shared = dst_vma->vm_flags & VM_SHARED;
struct hstate *h = hstate_vma(dst_vma);
pte_t _dst_pte;
spinlock_t *ptl;
int ret;
struct page *page;
if (!*pagep) {
ret = -ENOMEM;
page = alloc_huge_page(dst_vma, dst_addr, 0);
if (IS_ERR(page))
goto out;
ret = copy_huge_page_from_user(page,
(const void __user *) src_addr,
pages_per_huge_page(h), false);
/* fallback to copy_from_user outside mmap_sem */
if (unlikely(ret)) {
ret = -EFAULT;
*pagep = page;
/* don't free the page */
goto out;
}
} else {
page = *pagep;
*pagep = NULL;
}
/*
* The memory barrier inside __SetPageUptodate makes sure that
* preceding stores to the page contents become visible before
* the set_pte_at() write.
*/
__SetPageUptodate(page);
set_page_huge_active(page);
mapping = dst_vma->vm_file->f_mapping;
idx = vma_hugecache_offset(h, dst_vma, dst_addr);
/*
* If shared, add to page cache
*/
if (vm_shared) {
size = i_size_read(mapping->host) >> huge_page_shift(h);
ret = -EFAULT;
if (idx >= size)
goto out_release_nounlock;
/*
* Serialization between remove_inode_hugepages() and
* huge_add_to_page_cache() below happens through the
* hugetlb_fault_mutex_table that here must be hold by
* the caller.
*/
ret = huge_add_to_page_cache(page, mapping, idx);
if (ret)
goto out_release_nounlock;
}
ptl = huge_pte_lockptr(h, dst_mm, dst_pte);
spin_lock(ptl);
/*
* Recheck the i_size after holding PT lock to make sure not
* to leave any page mapped (as page_mapped()) beyond the end
* of the i_size (remove_inode_hugepages() is strict about
* enforcing that). If we bail out here, we'll also leave a
* page in the radix tree in the vm_shared case beyond the end
* of the i_size, but remove_inode_hugepages() will take care
* of it as soon as we drop the hugetlb_fault_mutex_table.
*/
size = i_size_read(mapping->host) >> huge_page_shift(h);
ret = -EFAULT;
if (idx >= size)
goto out_release_unlock;
ret = -EEXIST;
if (!huge_pte_none(huge_ptep_get(dst_pte)))
goto out_release_unlock;
if (vm_shared) {
page_dup_rmap(page, true);
} else {
ClearPagePrivate(page);
hugepage_add_new_anon_rmap(page, dst_vma, dst_addr);
}
_dst_pte = make_huge_pte(dst_vma, page, dst_vma->vm_flags & VM_WRITE);
if (dst_vma->vm_flags & VM_WRITE)
_dst_pte = huge_pte_mkdirty(_dst_pte);
_dst_pte = pte_mkyoung(_dst_pte);
set_huge_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
(void)huge_ptep_set_access_flags(dst_vma, dst_addr, dst_pte, _dst_pte,
dst_vma->vm_flags & VM_WRITE);
hugetlb_count_add(pages_per_huge_page(h), dst_mm);
/* No need to invalidate - it was non-present before */
update_mmu_cache(dst_vma, dst_addr, dst_pte);
spin_unlock(ptl);
if (vm_shared)
unlock_page(page);
ret = 0;
out:
return ret;
out_release_unlock:
spin_unlock(ptl);
if (vm_shared)
unlock_page(page);
out_release_nounlock:
put_page(page);
goto out;
}
| [[3987, "\tstruct address_space *mapping;\n"], [3988, "\tpgoff_t idx;\n"], [3989, "\tunsigned long size;\n"], [4027, "\tmapping = dst_vma->vm_file->f_mapping;\n"], [4028, "\tidx = vma_hugecache_offset(h, dst_vma, dst_addr);\n"], [4029, "\n"], [4034, "\t\tsize = i_size_read(mapping->host) >> huge_page_shift(h);\n"], [4035, "\t\tret = -EFAULT;\n"], [4036, "\t\tif (idx >= size)\n"], [4037, "\t\t\tgoto out_release_nounlock;\n"], [4039, "\t\t/*\n"], [4040, "\t\t * Serialization between remove_inode_hugepages() and\n"], [4041, "\t\t * huge_add_to_page_cache() below happens through the\n"], [4042, "\t\t * hugetlb_fault_mutex_table that here must be hold by\n"], [4043, "\t\t * the caller.\n"], [4044, "\t\t */\n"], [4053, "\t/*\n"], [4054, "\t * Recheck the i_size after holding PT lock to make sure not\n"], [4055, "\t * to leave any page mapped (as page_mapped()) beyond the end\n"], [4056, "\t * of the i_size (remove_inode_hugepages() is strict about\n"], [4057, "\t * enforcing that). If we bail out here, we'll also leave a\n"], [4058, "\t * page in the radix tree in the vm_shared case beyond the end\n"], [4059, "\t * of the i_size, but remove_inode_hugepages() will take care\n"], [4060, "\t * of it as soon as we drop the hugetlb_fault_mutex_table.\n"], [4061, "\t */\n"], [4062, "\tsize = i_size_read(mapping->host) >> huge_page_shift(h);\n"], [4063, "\tret = -EFAULT;\n"], [4064, "\tif (idx >= size)\n"], [4065, "\t\tgoto out_release_unlock;\n"], [4066, "\n"]] | [[3987, "struct address_space *mapping;"], [3988, "pgoff_t idx;"], [3989, "unsigned long size;"], [4027, "mapping = dst_vma->vm_file->f_mapping;"], [4028, "idx = vma_hugecache_offset(h, dst_vma, dst_addr);"], [4029, "\n"], [4034, "size = i_size_read(mapping->host) >> huge_page_shift(h);"], [4035, "ret = -EFAULT;"], [4036, "if (idx >= size)"], [4037, "goto out_release_nounlock;"], [4039, "/*\n\t\t * Serialization between remove_inode_hugepages() and\n\t\t * huge_add_to_page_cache() below happens through the\n\t\t * hugetlb_fault_mutex_table that here must be hold by\n\t\t * the caller.\n\t\t */"], [4053, "/*\n\t * Recheck the i_size after holding PT lock to make sure not\n\t * to leave any page mapped (as page_mapped()) beyond the end\n\t * of the i_size (remove_inode_hugepages() is strict about\n\t * enforcing that). If we bail out here, we'll also leave a\n\t * page in the radix tree in the vm_shared case beyond the end\n\t * of the i_size, but remove_inode_hugepages() will take care\n\t * of it as soon as we drop the hugetlb_fault_mutex_table.\n\t */"], [4062, "size = i_size_read(mapping->host) >> huge_page_shift(h);"], [4063, "ret = -EFAULT;"], [4064, "if (idx >= size)"], [4065, "goto out_release_unlock;"], [4066, "\n"]] | [
"CVE-2017-15128"
] | [
"CWE-119"
] | 42 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"vma_hugecache_offset",
"i_size_read",
"huge_page_shift"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
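Note on the pair above: the fix adds the same i_size test twice — once before the expensive page-cache work and once again under the page-table lock — because a concurrent truncate can shrink the file in between, which is exactly the race behind the oops in the commit message. A minimal userspace sketch of that check/lock/recheck shape (a pthread mutex standing in for huge_pte_lock, a plain long for i_size_read()):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t pt_lock = PTHREAD_MUTEX_INITIALIZER;
static long file_pages = 16;	/* stand-in for i_size_read() >> huge_page_shift() */

static bool map_page(long idx)
{
	if (idx >= file_pages)		/* early check: cheap fail before any work */
		return false;

	pthread_mutex_lock(&pt_lock);
	if (idx >= file_pages) {	/* recheck: a racing truncate may have run */
		pthread_mutex_unlock(&pt_lock);
		return false;
	}
	/* ... install the mapping while still holding the lock ... */
	pthread_mutex_unlock(&pt_lock);
	return true;
}

int main(void)
{
	printf("%d %d\n", map_page(3), map_page(99));	/* prints: 1 0 */
	return 0;
}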
43 | linux | https://github.com/torvalds/linux | arch/um/kernel/exitcode.c | 201f99f170df14ba52ea4c52847779042b7a623b | uml: check length in exitcode_proc_write()
We don't cap the size of buffer from the user so we could write past the
end of the array here. Only root can write to this file.
Reported-by: Nico Golde <nico@ngolde.de>
Reported-by: Fabian Yamaguchi <fabs@goesec.de>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> | true | 7328e354f5509ce7bd2b4b7485e9bd59 | exitcode_proc_write | static ssize_t exitcode_proc_write(struct file *file,
const char __user *buffer, size_t count, loff_t *pos)
{
char *end, buf[sizeof("nnnnn\0")];
int tmp;
if (copy_from_user(buf, buffer, count))
return -EFAULT;
tmp = simple_strtol(buf, &end, 0);
if ((*end != '\0') && !isspace(*end))
return -EINVAL;
uml_exitcode = tmp;
return count;
}
| [[45, "\tif (copy_from_user(buf, buffer, count))\n"]] | [[45, "if (copy_from_user(buf, buffer, count))"]] | [
"CVE-2013-4512"
] | [
"CWE-119"
] | 44 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
44 | linux | https://github.com/torvalds/linux | arch/um/kernel/exitcode.c | 201f99f170df14ba52ea4c52847779042b7a623b | uml: check length in exitcode_proc_write()
We don't cap the size of buffer from the user so we could write past the
end of the array here. Only root can write to this file.
Reported-by: Nico Golde <nico@ngolde.de>
Reported-by: Fabian Yamaguchi <fabs@goesec.de>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> | false | 6638d3a83d07900c8f306ab882a3e827 | exitcode_proc_write | static ssize_t exitcode_proc_write(struct file *file,
const char __user *buffer, size_t count, loff_t *pos)
{
char *end, buf[sizeof("nnnnn\0")];
size_t size;
int tmp;
size = min(count, sizeof(buf));
if (copy_from_user(buf, buffer, size))
return -EFAULT;
tmp = simple_strtol(buf, &end, 0);
if ((*end != '\0') && !isspace(*end))
return -EINVAL;
uml_exitcode = tmp;
return count;
}
| [[43, "\tsize_t size;\n"], [46, "\tsize = min(count, sizeof(buf));\n"], [47, "\tif (copy_from_user(buf, buffer, size))\n"]] | [[43, "size_t size;"], [46, "size = min(count, sizeof(buf));"], [47, "if (copy_from_user(buf, buffer, size))"]] | [
"CVE-2013-4512"
] | [
"CWE-119"
] | 44 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
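Note on the pair above: the fix is the min(count, sizeof(buf)) clamp before copy_from_user(), so a write of more than the buffer's seven bytes can no longer run past buf. A userspace sketch of the same clamp (memcpy standing in for copy_from_user; unlike the kernel patch, the sketch also NUL-terminates explicitly, which strtol requires here):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static long parse_exitcode(const char *user, size_t count)
{
	char buf[sizeof("nnnnn\0")];
	/* Clamp to one less than the buffer so the terminator always fits. */
	size_t size = count < sizeof(buf) - 1 ? count : sizeof(buf) - 1;

	memcpy(buf, user, size);	/* stand-in for copy_from_user */
	buf[size] = '\0';
	return strtol(buf, NULL, 0);
}

int main(void)
{
	printf("%ld\n", parse_exitcode("123456789", 9));	/* clamped: 123456 */
	return 0;
}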
45 | linux | https://github.com/torvalds/linux | kernel/bpf/verifier.c | 22c7fa171a02d310e3a3f6ed46a698ca8a0060ed | bpf: Reject variable offset alu on PTR_TO_FLOW_KEYS
For PTR_TO_FLOW_KEYS, check_flow_keys_access() only uses fixed off
for validation. However, variable offset ptr alu is not prohibited
for this ptr kind. So the variable offset is not checked.
The following prog is accepted:
func#0 @0
0: R1=ctx() R10=fp0
0: (bf) r6 = r1 ; R1=ctx() R6_w=ctx()
1: (79) r7 = *(u64 *)(r6 +144) ; R6_w=ctx() R7_w=flow_keys()
2: (b7) r8 = 1024 ; R8_w=1024
3: (37) r8 /= 1 ; R8_w=scalar()
4: (57) r8 &= 1024 ; R8_w=scalar(smin=smin32=0,
smax=umax=smax32=umax32=1024,var_off=(0x0; 0x400))
5: (0f) r7 += r8
mark_precise: frame0: last_idx 5 first_idx 0 subseq_idx -1
mark_precise: frame0: regs=r8 stack= before 4: (57) r8 &= 1024
mark_precise: frame0: regs=r8 stack= before 3: (37) r8 /= 1
mark_precise: frame0: regs=r8 stack= before 2: (b7) r8 = 1024
6: R7_w=flow_keys(smin=smin32=0,smax=umax=smax32=umax32=1024,var_off
=(0x0; 0x400)) R8_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=1024,
var_off=(0x0; 0x400))
6: (79) r0 = *(u64 *)(r7 +0) ; R0_w=scalar()
7: (95) exit
This prog loads flow_keys to r7, and adds the variable offset r8
to r7, and finally causes out-of-bounds access:
BUG: unable to handle page fault for address: ffffc90014c80038
[...]
Call Trace:
<TASK>
bpf_dispatcher_nop_func include/linux/bpf.h:1231 [inline]
__bpf_prog_run include/linux/filter.h:651 [inline]
bpf_prog_run include/linux/filter.h:658 [inline]
bpf_prog_run_pin_on_cpu include/linux/filter.h:675 [inline]
bpf_flow_dissect+0x15f/0x350 net/core/flow_dissector.c:991
bpf_prog_test_run_flow_dissector+0x39d/0x620 net/bpf/test_run.c:1359
bpf_prog_test_run kernel/bpf/syscall.c:4107 [inline]
__sys_bpf+0xf8f/0x4560 kernel/bpf/syscall.c:5475
__do_sys_bpf kernel/bpf/syscall.c:5561 [inline]
__se_sys_bpf kernel/bpf/syscall.c:5559 [inline]
__x64_sys_bpf+0x73/0xb0 kernel/bpf/syscall.c:5559
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0x3f/0x110 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x63/0x6b
Fix this by rejecting ptr alu with variable offset on flow_keys.
Applying the patch rejects the program with "R7 pointer arithmetic
on flow_keys prohibited".
Fixes: d58e468b1112 ("flow_dissector: implements flow dissector BPF hook")
Signed-off-by: Hao Sun <sunhao.th@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/bpf/20240115082028.9992-1-sunhao.th@gmail.com | true | 6baddb16d0d1405bbb7b352baeb779ce | adjust_ptr_min_max_vals | static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
struct bpf_insn *insn,
const struct bpf_reg_state *ptr_reg,
const struct bpf_reg_state *off_reg)
{
struct bpf_verifier_state *vstate = env->cur_state;
struct bpf_func_state *state = vstate->frame[vstate->curframe];
struct bpf_reg_state *regs = state->regs, *dst_reg;
bool known = tnum_is_const(off_reg->var_off);
s64 smin_val = off_reg->smin_value, smax_val = off_reg->smax_value,
smin_ptr = ptr_reg->smin_value, smax_ptr = ptr_reg->smax_value;
u64 umin_val = off_reg->umin_value, umax_val = off_reg->umax_value,
umin_ptr = ptr_reg->umin_value, umax_ptr = ptr_reg->umax_value;
struct bpf_sanitize_info info = {};
u8 opcode = BPF_OP(insn->code);
u32 dst = insn->dst_reg;
int ret;
dst_reg = &regs[dst];
if ((known && (smin_val != smax_val || umin_val != umax_val)) ||
smin_val > smax_val || umin_val > umax_val) {
/* Taint dst register if offset had invalid bounds derived from
* e.g. dead branches.
*/
__mark_reg_unknown(env, dst_reg);
return 0;
}
if (BPF_CLASS(insn->code) != BPF_ALU64) {
/* 32-bit ALU ops on pointers produce (meaningless) scalars */
if (opcode == BPF_SUB && env->allow_ptr_leaks) {
__mark_reg_unknown(env, dst_reg);
return 0;
}
verbose(env,
"R%d 32-bit pointer arithmetic prohibited\n",
dst);
return -EACCES;
}
if (ptr_reg->type & PTR_MAYBE_NULL) {
verbose(env, "R%d pointer arithmetic on %s prohibited, null-check it first\n",
dst, reg_type_str(env, ptr_reg->type));
return -EACCES;
}
switch (base_type(ptr_reg->type)) {
case CONST_PTR_TO_MAP:
/* smin_val represents the known value */
if (known && smin_val == 0 && opcode == BPF_ADD)
break;
fallthrough;
case PTR_TO_PACKET_END:
case PTR_TO_SOCKET:
case PTR_TO_SOCK_COMMON:
case PTR_TO_TCP_SOCK:
case PTR_TO_XDP_SOCK:
verbose(env, "R%d pointer arithmetic on %s prohibited\n",
dst, reg_type_str(env, ptr_reg->type));
return -EACCES;
default:
break;
}
/* In case of 'scalar += pointer', dst_reg inherits pointer type and id.
* The id may be overwritten later if we create a new variable offset.
*/
dst_reg->type = ptr_reg->type;
dst_reg->id = ptr_reg->id;
if (!check_reg_sane_offset(env, off_reg, ptr_reg->type) ||
!check_reg_sane_offset(env, ptr_reg, ptr_reg->type))
return -EINVAL;
/* pointer types do not carry 32-bit bounds at the moment. */
__mark_reg32_unbounded(dst_reg);
if (sanitize_needed(opcode)) {
ret = sanitize_ptr_alu(env, insn, ptr_reg, off_reg, dst_reg,
&info, false);
if (ret < 0)
return sanitize_err(env, insn, ret, off_reg, dst_reg);
}
switch (opcode) {
case BPF_ADD:
/* We can take a fixed offset as long as it doesn't overflow
* the s32 'off' field
*/
if (known && (ptr_reg->off + smin_val ==
(s64)(s32)(ptr_reg->off + smin_val))) {
/* pointer += K. Accumulate it into fixed offset */
dst_reg->smin_value = smin_ptr;
dst_reg->smax_value = smax_ptr;
dst_reg->umin_value = umin_ptr;
dst_reg->umax_value = umax_ptr;
dst_reg->var_off = ptr_reg->var_off;
dst_reg->off = ptr_reg->off + smin_val;
dst_reg->raw = ptr_reg->raw;
break;
}
/* A new variable offset is created. Note that off_reg->off
* == 0, since it's a scalar.
* dst_reg gets the pointer type and since some positive
* integer value was added to the pointer, give it a new 'id'
* if it's a PTR_TO_PACKET.
* this creates a new 'base' pointer, off_reg (variable) gets
* added into the variable offset, and we copy the fixed offset
* from ptr_reg.
*/
if (signed_add_overflows(smin_ptr, smin_val) ||
signed_add_overflows(smax_ptr, smax_val)) {
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
dst_reg->smin_value = smin_ptr + smin_val;
dst_reg->smax_value = smax_ptr + smax_val;
}
if (umin_ptr + umin_val < umin_ptr ||
umax_ptr + umax_val < umax_ptr) {
dst_reg->umin_value = 0;
dst_reg->umax_value = U64_MAX;
} else {
dst_reg->umin_value = umin_ptr + umin_val;
dst_reg->umax_value = umax_ptr + umax_val;
}
dst_reg->var_off = tnum_add(ptr_reg->var_off, off_reg->var_off);
dst_reg->off = ptr_reg->off;
dst_reg->raw = ptr_reg->raw;
if (reg_is_pkt_pointer(ptr_reg)) {
dst_reg->id = ++env->id_gen;
/* something was added to pkt_ptr, set range to zero */
memset(&dst_reg->raw, 0, sizeof(dst_reg->raw));
}
break;
case BPF_SUB:
if (dst_reg == off_reg) {
/* scalar -= pointer. Creates an unknown scalar */
verbose(env, "R%d tried to subtract pointer from scalar\n",
dst);
return -EACCES;
}
/* We don't allow subtraction from FP, because (according to
* test_verifier.c test "invalid fp arithmetic", JITs might not
* be able to deal with it.
*/
if (ptr_reg->type == PTR_TO_STACK) {
verbose(env, "R%d subtraction from stack pointer prohibited\n",
dst);
return -EACCES;
}
if (known && (ptr_reg->off - smin_val ==
(s64)(s32)(ptr_reg->off - smin_val))) {
/* pointer -= K. Subtract it from fixed offset */
dst_reg->smin_value = smin_ptr;
dst_reg->smax_value = smax_ptr;
dst_reg->umin_value = umin_ptr;
dst_reg->umax_value = umax_ptr;
dst_reg->var_off = ptr_reg->var_off;
dst_reg->id = ptr_reg->id;
dst_reg->off = ptr_reg->off - smin_val;
dst_reg->raw = ptr_reg->raw;
break;
}
/* A new variable offset is created. If the subtrahend is known
* nonnegative, then any reg->range we had before is still good.
*/
if (signed_sub_overflows(smin_ptr, smax_val) ||
signed_sub_overflows(smax_ptr, smin_val)) {
/* Overflow possible, we know nothing */
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
dst_reg->smin_value = smin_ptr - smax_val;
dst_reg->smax_value = smax_ptr - smin_val;
}
if (umin_ptr < umax_val) {
/* Overflow possible, we know nothing */
dst_reg->umin_value = 0;
dst_reg->umax_value = U64_MAX;
} else {
/* Cannot overflow (as long as bounds are consistent) */
dst_reg->umin_value = umin_ptr - umax_val;
dst_reg->umax_value = umax_ptr - umin_val;
}
dst_reg->var_off = tnum_sub(ptr_reg->var_off, off_reg->var_off);
dst_reg->off = ptr_reg->off;
dst_reg->raw = ptr_reg->raw;
if (reg_is_pkt_pointer(ptr_reg)) {
dst_reg->id = ++env->id_gen;
/* something was added to pkt_ptr, set range to zero */
if (smin_val < 0)
memset(&dst_reg->raw, 0, sizeof(dst_reg->raw));
}
break;
case BPF_AND:
case BPF_OR:
case BPF_XOR:
/* bitwise ops on pointers are troublesome, prohibit. */
verbose(env, "R%d bitwise operator %s on pointer prohibited\n",
dst, bpf_alu_string[opcode >> 4]);
return -EACCES;
default:
/* other operators (e.g. MUL,LSH) produce non-pointer results */
verbose(env, "R%d pointer arithmetic with %s operator prohibited\n",
dst, bpf_alu_string[opcode >> 4]);
return -EACCES;
}
if (!check_reg_sane_offset(env, dst_reg, ptr_reg->type))
return -EINVAL;
reg_bounds_sync(dst_reg);
if (sanitize_check_bounds(env, insn, dst_reg) < 0)
return -EACCES;
if (sanitize_needed(opcode)) {
ret = sanitize_ptr_alu(env, insn, dst_reg, off_reg, dst_reg,
&info, true);
if (ret < 0)
return sanitize_err(env, insn, ret, off_reg, dst_reg);
}
return 0;
}
| [] | [] | [
"CVE-2024-26589"
] | [
"CWE-119"
] | 46 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"base_type",
"check_flow_keys_access"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": [
"PTR_TO_FLOW_KEYS"
]
} |
46 | linux | https://github.com/torvalds/linux | kernel/bpf/verifier.c | 22c7fa171a02d310e3a3f6ed46a698ca8a0060ed | bpf: Reject variable offset alu on PTR_TO_FLOW_KEYS
For PTR_TO_FLOW_KEYS, check_flow_keys_access() only uses fixed off
for validation. However, variable offset ptr alu is not prohibited
for this ptr kind. So the variable offset is not checked.
The following prog is accepted:
func#0 @0
0: R1=ctx() R10=fp0
0: (bf) r6 = r1 ; R1=ctx() R6_w=ctx()
1: (79) r7 = *(u64 *)(r6 +144) ; R6_w=ctx() R7_w=flow_keys()
2: (b7) r8 = 1024 ; R8_w=1024
3: (37) r8 /= 1 ; R8_w=scalar()
4: (57) r8 &= 1024 ; R8_w=scalar(smin=smin32=0,
smax=umax=smax32=umax32=1024,var_off=(0x0; 0x400))
5: (0f) r7 += r8
mark_precise: frame0: last_idx 5 first_idx 0 subseq_idx -1
mark_precise: frame0: regs=r8 stack= before 4: (57) r8 &= 1024
mark_precise: frame0: regs=r8 stack= before 3: (37) r8 /= 1
mark_precise: frame0: regs=r8 stack= before 2: (b7) r8 = 1024
6: R7_w=flow_keys(smin=smin32=0,smax=umax=smax32=umax32=1024,var_off
=(0x0; 0x400)) R8_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=1024,
var_off=(0x0; 0x400))
6: (79) r0 = *(u64 *)(r7 +0) ; R0_w=scalar()
7: (95) exit
This prog loads flow_keys to r7, and adds the variable offset r8
to r7, and finally causes out-of-bounds access:
BUG: unable to handle page fault for address: ffffc90014c80038
[...]
Call Trace:
<TASK>
bpf_dispatcher_nop_func include/linux/bpf.h:1231 [inline]
__bpf_prog_run include/linux/filter.h:651 [inline]
bpf_prog_run include/linux/filter.h:658 [inline]
bpf_prog_run_pin_on_cpu include/linux/filter.h:675 [inline]
bpf_flow_dissect+0x15f/0x350 net/core/flow_dissector.c:991
bpf_prog_test_run_flow_dissector+0x39d/0x620 net/bpf/test_run.c:1359
bpf_prog_test_run kernel/bpf/syscall.c:4107 [inline]
__sys_bpf+0xf8f/0x4560 kernel/bpf/syscall.c:5475
__do_sys_bpf kernel/bpf/syscall.c:5561 [inline]
__se_sys_bpf kernel/bpf/syscall.c:5559 [inline]
__x64_sys_bpf+0x73/0xb0 kernel/bpf/syscall.c:5559
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0x3f/0x110 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x63/0x6b
Fix this by rejecting ptr alu with variable offset on flow_keys.
Applying the patch rejects the program with "R7 pointer arithmetic
on flow_keys prohibited".
Fixes: d58e468b1112 ("flow_dissector: implements flow dissector BPF hook")
Signed-off-by: Hao Sun <sunhao.th@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/bpf/20240115082028.9992-1-sunhao.th@gmail.com | false | cfdd482284e53513277e2607d71373e7 | adjust_ptr_min_max_vals | static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
struct bpf_insn *insn,
const struct bpf_reg_state *ptr_reg,
const struct bpf_reg_state *off_reg)
{
struct bpf_verifier_state *vstate = env->cur_state;
struct bpf_func_state *state = vstate->frame[vstate->curframe];
struct bpf_reg_state *regs = state->regs, *dst_reg;
bool known = tnum_is_const(off_reg->var_off);
s64 smin_val = off_reg->smin_value, smax_val = off_reg->smax_value,
smin_ptr = ptr_reg->smin_value, smax_ptr = ptr_reg->smax_value;
u64 umin_val = off_reg->umin_value, umax_val = off_reg->umax_value,
umin_ptr = ptr_reg->umin_value, umax_ptr = ptr_reg->umax_value;
struct bpf_sanitize_info info = {};
u8 opcode = BPF_OP(insn->code);
u32 dst = insn->dst_reg;
int ret;
dst_reg = &regs[dst];
if ((known && (smin_val != smax_val || umin_val != umax_val)) ||
smin_val > smax_val || umin_val > umax_val) {
/* Taint dst register if offset had invalid bounds derived from
* e.g. dead branches.
*/
__mark_reg_unknown(env, dst_reg);
return 0;
}
if (BPF_CLASS(insn->code) != BPF_ALU64) {
/* 32-bit ALU ops on pointers produce (meaningless) scalars */
if (opcode == BPF_SUB && env->allow_ptr_leaks) {
__mark_reg_unknown(env, dst_reg);
return 0;
}
verbose(env,
"R%d 32-bit pointer arithmetic prohibited\n",
dst);
return -EACCES;
}
if (ptr_reg->type & PTR_MAYBE_NULL) {
verbose(env, "R%d pointer arithmetic on %s prohibited, null-check it first\n",
dst, reg_type_str(env, ptr_reg->type));
return -EACCES;
}
switch (base_type(ptr_reg->type)) {
case PTR_TO_FLOW_KEYS:
if (known)
break;
fallthrough;
case CONST_PTR_TO_MAP:
/* smin_val represents the known value */
if (known && smin_val == 0 && opcode == BPF_ADD)
break;
fallthrough;
case PTR_TO_PACKET_END:
case PTR_TO_SOCKET:
case PTR_TO_SOCK_COMMON:
case PTR_TO_TCP_SOCK:
case PTR_TO_XDP_SOCK:
verbose(env, "R%d pointer arithmetic on %s prohibited\n",
dst, reg_type_str(env, ptr_reg->type));
return -EACCES;
default:
break;
}
/* In case of 'scalar += pointer', dst_reg inherits pointer type and id.
* The id may be overwritten later if we create a new variable offset.
*/
dst_reg->type = ptr_reg->type;
dst_reg->id = ptr_reg->id;
if (!check_reg_sane_offset(env, off_reg, ptr_reg->type) ||
!check_reg_sane_offset(env, ptr_reg, ptr_reg->type))
return -EINVAL;
/* pointer types do not carry 32-bit bounds at the moment. */
__mark_reg32_unbounded(dst_reg);
if (sanitize_needed(opcode)) {
ret = sanitize_ptr_alu(env, insn, ptr_reg, off_reg, dst_reg,
&info, false);
if (ret < 0)
return sanitize_err(env, insn, ret, off_reg, dst_reg);
}
switch (opcode) {
case BPF_ADD:
/* We can take a fixed offset as long as it doesn't overflow
* the s32 'off' field
*/
if (known && (ptr_reg->off + smin_val ==
(s64)(s32)(ptr_reg->off + smin_val))) {
/* pointer += K. Accumulate it into fixed offset */
dst_reg->smin_value = smin_ptr;
dst_reg->smax_value = smax_ptr;
dst_reg->umin_value = umin_ptr;
dst_reg->umax_value = umax_ptr;
dst_reg->var_off = ptr_reg->var_off;
dst_reg->off = ptr_reg->off + smin_val;
dst_reg->raw = ptr_reg->raw;
break;
}
/* A new variable offset is created. Note that off_reg->off
* == 0, since it's a scalar.
* dst_reg gets the pointer type and since some positive
* integer value was added to the pointer, give it a new 'id'
* if it's a PTR_TO_PACKET.
* this creates a new 'base' pointer, off_reg (variable) gets
* added into the variable offset, and we copy the fixed offset
* from ptr_reg.
*/
if (signed_add_overflows(smin_ptr, smin_val) ||
signed_add_overflows(smax_ptr, smax_val)) {
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
dst_reg->smin_value = smin_ptr + smin_val;
dst_reg->smax_value = smax_ptr + smax_val;
}
if (umin_ptr + umin_val < umin_ptr ||
umax_ptr + umax_val < umax_ptr) {
dst_reg->umin_value = 0;
dst_reg->umax_value = U64_MAX;
} else {
dst_reg->umin_value = umin_ptr + umin_val;
dst_reg->umax_value = umax_ptr + umax_val;
}
dst_reg->var_off = tnum_add(ptr_reg->var_off, off_reg->var_off);
dst_reg->off = ptr_reg->off;
dst_reg->raw = ptr_reg->raw;
if (reg_is_pkt_pointer(ptr_reg)) {
dst_reg->id = ++env->id_gen;
/* something was added to pkt_ptr, set range to zero */
memset(&dst_reg->raw, 0, sizeof(dst_reg->raw));
}
break;
case BPF_SUB:
if (dst_reg == off_reg) {
/* scalar -= pointer. Creates an unknown scalar */
verbose(env, "R%d tried to subtract pointer from scalar\n",
dst);
return -EACCES;
}
/* We don't allow subtraction from FP, because (according to
* test_verifier.c test "invalid fp arithmetic", JITs might not
* be able to deal with it.
*/
if (ptr_reg->type == PTR_TO_STACK) {
verbose(env, "R%d subtraction from stack pointer prohibited\n",
dst);
return -EACCES;
}
if (known && (ptr_reg->off - smin_val ==
(s64)(s32)(ptr_reg->off - smin_val))) {
/* pointer -= K. Subtract it from fixed offset */
dst_reg->smin_value = smin_ptr;
dst_reg->smax_value = smax_ptr;
dst_reg->umin_value = umin_ptr;
dst_reg->umax_value = umax_ptr;
dst_reg->var_off = ptr_reg->var_off;
dst_reg->id = ptr_reg->id;
dst_reg->off = ptr_reg->off - smin_val;
dst_reg->raw = ptr_reg->raw;
break;
}
/* A new variable offset is created. If the subtrahend is known
* nonnegative, then any reg->range we had before is still good.
*/
if (signed_sub_overflows(smin_ptr, smax_val) ||
signed_sub_overflows(smax_ptr, smin_val)) {
/* Overflow possible, we know nothing */
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
dst_reg->smin_value = smin_ptr - smax_val;
dst_reg->smax_value = smax_ptr - smin_val;
}
if (umin_ptr < umax_val) {
/* Overflow possible, we know nothing */
dst_reg->umin_value = 0;
dst_reg->umax_value = U64_MAX;
} else {
/* Cannot overflow (as long as bounds are consistent) */
dst_reg->umin_value = umin_ptr - umax_val;
dst_reg->umax_value = umax_ptr - umin_val;
}
dst_reg->var_off = tnum_sub(ptr_reg->var_off, off_reg->var_off);
dst_reg->off = ptr_reg->off;
dst_reg->raw = ptr_reg->raw;
if (reg_is_pkt_pointer(ptr_reg)) {
dst_reg->id = ++env->id_gen;
/* something was added to pkt_ptr, set range to zero */
if (smin_val < 0)
memset(&dst_reg->raw, 0, sizeof(dst_reg->raw));
}
break;
case BPF_AND:
case BPF_OR:
case BPF_XOR:
/* bitwise ops on pointers are troublesome, prohibit. */
verbose(env, "R%d bitwise operator %s on pointer prohibited\n",
dst, bpf_alu_string[opcode >> 4]);
return -EACCES;
default:
/* other operators (e.g. MUL,LSH) produce non-pointer results */
verbose(env, "R%d pointer arithmetic with %s operator prohibited\n",
dst, bpf_alu_string[opcode >> 4]);
return -EACCES;
}
if (!check_reg_sane_offset(env, dst_reg, ptr_reg->type))
return -EINVAL;
reg_bounds_sync(dst_reg);
if (sanitize_check_bounds(env, insn, dst_reg) < 0)
return -EACCES;
if (sanitize_needed(opcode)) {
ret = sanitize_ptr_alu(env, insn, dst_reg, off_reg, dst_reg,
&info, true);
if (ret < 0)
return sanitize_err(env, insn, ret, off_reg, dst_reg);
}
return 0;
}
| [[12829, "\tcase PTR_TO_FLOW_KEYS:\n"], [12830, "\t\tif (known)\n"], [12831, "\t\t\tbreak;\n"], [12832, "\t\tfallthrough;\n"]] | [[12829, "case PTR_TO_FLOW_KEYS:"], [12830, "if (known)"], [12831, "break;"], [12832, "fallthrough;"]] | [
"CVE-2024-26589"
] | [
"CWE-119"
] | 46 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"base_type",
"check_flow_keys_access"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": [
"PTR_TO_FLOW_KEYS"
]
} |
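Note on the pair above: per the changed_lines field, the entire fix is the new PTR_TO_FLOW_KEYS case, which falls through to the rejection path unless the offset is a verifier-proven constant (known, i.e. tnum_is_const on off_reg->var_off). Reduced to its decision logic (illustrative enum, not the verifier's real types):

#include <stdbool.h>
#include <stdio.h>

enum ptr_kind { PTR_FLOW_KEYS, PTR_OTHER };

/* flow_keys accesses are validated against the fixed offset only, so any
 * pointer arithmetic with a variable (non-constant) offset must be
 * refused up front, mirroring the added switch case. */
static bool ptr_alu_allowed(enum ptr_kind kind, bool off_is_known_const)
{
	switch (kind) {
	case PTR_FLOW_KEYS:
		return off_is_known_const;	/* variable offset -> -EACCES */
	default:
		return true;			/* other kinds keep their own checks */
	}
}

int main(void)
{
	printf("%d\n", ptr_alu_allowed(PTR_FLOW_KEYS, false));	/* 0: prohibited */
	return 0;
}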
47 | linux | https://github.com/torvalds/linux | net/core/skbuff.c | 23d05d563b7e7b0314e65c8e882bc27eac2da8e7 | net: prevent mss overflow in skb_segment()
Once again syzbot is able to crash the kernel in skb_segment() [1]
GSO_BY_FRAGS is a forbidden value, but unfortunately the following
computation in skb_segment() can reach it quite easily :
mss = mss * partial_segs;
65535 = 3 * 5 * 17 * 257, so many initial values of mss can lead to
a bad final result.
Make sure to limit segmentation so that the new mss value is smaller
than GSO_BY_FRAGS.
[1]
general protection fault, probably for non-canonical address 0xdffffc000000000e: 0000 [#1] PREEMPT SMP KASAN
KASAN: null-ptr-deref in range [0x0000000000000070-0x0000000000000077]
CPU: 1 PID: 5079 Comm: syz-executor993 Not tainted 6.7.0-rc4-syzkaller-00141-g1ae4cd3cbdd0 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/10/2023
RIP: 0010:skb_segment+0x181d/0x3f30 net/core/skbuff.c:4551
Code: 83 e3 02 e9 fb ed ff ff e8 90 68 1c f9 48 8b 84 24 f8 00 00 00 48 8d 78 70 48 b8 00 00 00 00 00 fc ff df 48 89 fa 48 c1 ea 03 <0f> b6 04 02 84 c0 74 08 3c 03 0f 8e 8a 21 00 00 48 8b 84 24 f8 00
RSP: 0018:ffffc900043473d0 EFLAGS: 00010202
RAX: dffffc0000000000 RBX: 0000000000010046 RCX: ffffffff886b1597
RDX: 000000000000000e RSI: ffffffff886b2520 RDI: 0000000000000070
RBP: ffffc90004347578 R08: 0000000000000005 R09: 000000000000ffff
R10: 000000000000ffff R11: 0000000000000002 R12: ffff888063202ac0
R13: 0000000000010000 R14: 000000000000ffff R15: 0000000000000046
FS: 0000555556e7e380(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000020010000 CR3: 0000000027ee2000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
udp6_ufo_fragment+0xa0e/0xd00 net/ipv6/udp_offload.c:109
ipv6_gso_segment+0x534/0x17e0 net/ipv6/ip6_offload.c:120
skb_mac_gso_segment+0x290/0x610 net/core/gso.c:53
__skb_gso_segment+0x339/0x710 net/core/gso.c:124
skb_gso_segment include/net/gso.h:83 [inline]
validate_xmit_skb+0x36c/0xeb0 net/core/dev.c:3626
__dev_queue_xmit+0x6f3/0x3d60 net/core/dev.c:4338
dev_queue_xmit include/linux/netdevice.h:3134 [inline]
packet_xmit+0x257/0x380 net/packet/af_packet.c:276
packet_snd net/packet/af_packet.c:3087 [inline]
packet_sendmsg+0x24c6/0x5220 net/packet/af_packet.c:3119
sock_sendmsg_nosec net/socket.c:730 [inline]
__sock_sendmsg+0xd5/0x180 net/socket.c:745
__sys_sendto+0x255/0x340 net/socket.c:2190
__do_sys_sendto net/socket.c:2202 [inline]
__se_sys_sendto net/socket.c:2198 [inline]
__x64_sys_sendto+0xe0/0x1b0 net/socket.c:2198
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0x40/0x110 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x63/0x6b
RIP: 0033:0x7f8692032aa9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 d1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fff8d685418 EFLAGS: 00000246 ORIG_RAX: 000000000000002c
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 00007f8692032aa9
RDX: 0000000000010048 RSI: 00000000200000c0 RDI: 0000000000000003
RBP: 00000000000f4240 R08: 0000000020000540 R09: 0000000000000014
R10: 0000000000000000 R11: 0000000000000246 R12: 00007fff8d685480
R13: 0000000000000001 R14: 00007fff8d685480 R15: 0000000000000003
</TASK>
Modules linked in:
---[ end trace 0000000000000000 ]---
RIP: 0010:skb_segment+0x181d/0x3f30 net/core/skbuff.c:4551
Code: 83 e3 02 e9 fb ed ff ff e8 90 68 1c f9 48 8b 84 24 f8 00 00 00 48 8d 78 70 48 b8 00 00 00 00 00 fc ff df 48 89 fa 48 c1 ea 03 <0f> b6 04 02 84 c0 74 08 3c 03 0f 8e 8a 21 00 00 48 8b 84 24 f8 00
RSP: 0018:ffffc900043473d0 EFLAGS: 00010202
RAX: dffffc0000000000 RBX: 0000000000010046 RCX: ffffffff886b1597
RDX: 000000000000000e RSI: ffffffff886b2520 RDI: 0000000000000070
RBP: ffffc90004347578 R08: 0000000000000005 R09: 000000000000ffff
R10: 000000000000ffff R11: 0000000000000002 R12: ffff888063202ac0
R13: 0000000000010000 R14: 000000000000ffff R15: 0000000000000046
FS: 0000555556e7e380(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000020010000 CR3: 0000000027ee2000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Fixes: 3953c46c3ac7 ("sk_buff: allow segmenting based on frag sizes")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Link: https://lore.kernel.org/r/20231212164621.4131800-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org> | true | f4a6beb6c8f04d99adc7754ff5a865bc | skb_segment | struct sk_buff *skb_segment(struct sk_buff *head_skb,
netdev_features_t features)
{
struct sk_buff *segs = NULL;
struct sk_buff *tail = NULL;
struct sk_buff *list_skb = skb_shinfo(head_skb)->frag_list;
unsigned int mss = skb_shinfo(head_skb)->gso_size;
unsigned int doffset = head_skb->data - skb_mac_header(head_skb);
unsigned int offset = doffset;
unsigned int tnl_hlen = skb_tnl_header_len(head_skb);
unsigned int partial_segs = 0;
unsigned int headroom;
unsigned int len = head_skb->len;
struct sk_buff *frag_skb;
skb_frag_t *frag;
__be16 proto;
bool csum, sg;
int err = -ENOMEM;
int i = 0;
int nfrags, pos;
if ((skb_shinfo(head_skb)->gso_type & SKB_GSO_DODGY) &&
mss != GSO_BY_FRAGS && mss != skb_headlen(head_skb)) {
struct sk_buff *check_skb;
for (check_skb = list_skb; check_skb; check_skb = check_skb->next) {
if (skb_headlen(check_skb) && !check_skb->head_frag) {
/* gso_size is untrusted, and we have a frag_list with
* a linear non head_frag item.
*
* If head_skb's headlen does not fit requested gso_size,
* it means that the frag_list members do NOT terminate
* on exact gso_size boundaries. Hence we cannot perform
* skb_frag_t page sharing. Therefore we must fallback to
* copying the frag_list skbs; we do so by disabling SG.
*/
features &= ~NETIF_F_SG;
break;
}
}
}
__skb_push(head_skb, doffset);
proto = skb_network_protocol(head_skb, NULL);
if (unlikely(!proto))
return ERR_PTR(-EINVAL);
sg = !!(features & NETIF_F_SG);
csum = !!can_checksum_protocol(features, proto);
if (sg && csum && (mss != GSO_BY_FRAGS)) {
if (!(features & NETIF_F_GSO_PARTIAL)) {
struct sk_buff *iter;
unsigned int frag_len;
if (!list_skb ||
!net_gso_ok(features, skb_shinfo(head_skb)->gso_type))
goto normal;
/* If we get here then all the required
* GSO features except frag_list are supported.
* Try to split the SKB to multiple GSO SKBs
* with no frag_list.
* Currently we can do that only when the buffers don't
* have a linear part and all the buffers except
* the last are of the same length.
*/
frag_len = list_skb->len;
skb_walk_frags(head_skb, iter) {
if (frag_len != iter->len && iter->next)
goto normal;
if (skb_headlen(iter) && !iter->head_frag)
goto normal;
len -= iter->len;
}
if (len != frag_len)
goto normal;
}
/* GSO partial only requires that we trim off any excess that
* doesn't fit into an MSS sized block, so take care of that
* now.
*/
partial_segs = len / mss;
if (partial_segs > 1)
mss *= partial_segs;
else
partial_segs = 0;
}
normal:
headroom = skb_headroom(head_skb);
pos = skb_headlen(head_skb);
if (skb_orphan_frags(head_skb, GFP_ATOMIC))
return ERR_PTR(-ENOMEM);
nfrags = skb_shinfo(head_skb)->nr_frags;
frag = skb_shinfo(head_skb)->frags;
frag_skb = head_skb;
do {
struct sk_buff *nskb;
skb_frag_t *nskb_frag;
int hsize;
int size;
if (unlikely(mss == GSO_BY_FRAGS)) {
len = list_skb->len;
} else {
len = head_skb->len - offset;
if (len > mss)
len = mss;
}
hsize = skb_headlen(head_skb) - offset;
if (hsize <= 0 && i >= nfrags && skb_headlen(list_skb) &&
(skb_headlen(list_skb) == len || sg)) {
BUG_ON(skb_headlen(list_skb) > len);
nskb = skb_clone(list_skb, GFP_ATOMIC);
if (unlikely(!nskb))
goto err;
i = 0;
nfrags = skb_shinfo(list_skb)->nr_frags;
frag = skb_shinfo(list_skb)->frags;
frag_skb = list_skb;
pos += skb_headlen(list_skb);
while (pos < offset + len) {
BUG_ON(i >= nfrags);
size = skb_frag_size(frag);
if (pos + size > offset + len)
break;
i++;
pos += size;
frag++;
}
list_skb = list_skb->next;
if (unlikely(pskb_trim(nskb, len))) {
kfree_skb(nskb);
goto err;
}
hsize = skb_end_offset(nskb);
if (skb_cow_head(nskb, doffset + headroom)) {
kfree_skb(nskb);
goto err;
}
nskb->truesize += skb_end_offset(nskb) - hsize;
skb_release_head_state(nskb);
__skb_push(nskb, doffset);
} else {
if (hsize < 0)
hsize = 0;
if (hsize > len || !sg)
hsize = len;
nskb = __alloc_skb(hsize + doffset + headroom,
GFP_ATOMIC, skb_alloc_rx_flag(head_skb),
NUMA_NO_NODE);
if (unlikely(!nskb))
goto err;
skb_reserve(nskb, headroom);
__skb_put(nskb, doffset);
}
if (segs)
tail->next = nskb;
else
segs = nskb;
tail = nskb;
__copy_skb_header(nskb, head_skb);
skb_headers_offset_update(nskb, skb_headroom(nskb) - headroom);
skb_reset_mac_len(nskb);
skb_copy_from_linear_data_offset(head_skb, -tnl_hlen,
nskb->data - tnl_hlen,
doffset + tnl_hlen);
if (nskb->len == len + doffset)
goto perform_csum_check;
if (!sg) {
if (!csum) {
if (!nskb->remcsum_offload)
nskb->ip_summed = CHECKSUM_NONE;
SKB_GSO_CB(nskb)->csum =
skb_copy_and_csum_bits(head_skb, offset,
skb_put(nskb,
len),
len);
SKB_GSO_CB(nskb)->csum_start =
skb_headroom(nskb) + doffset;
} else {
if (skb_copy_bits(head_skb, offset, skb_put(nskb, len), len))
goto err;
}
continue;
}
nskb_frag = skb_shinfo(nskb)->frags;
skb_copy_from_linear_data_offset(head_skb, offset,
skb_put(nskb, hsize), hsize);
skb_shinfo(nskb)->flags |= skb_shinfo(head_skb)->flags &
SKBFL_SHARED_FRAG;
if (skb_zerocopy_clone(nskb, frag_skb, GFP_ATOMIC))
goto err;
while (pos < offset + len) {
if (i >= nfrags) {
if (skb_orphan_frags(list_skb, GFP_ATOMIC) ||
skb_zerocopy_clone(nskb, list_skb,
GFP_ATOMIC))
goto err;
i = 0;
nfrags = skb_shinfo(list_skb)->nr_frags;
frag = skb_shinfo(list_skb)->frags;
frag_skb = list_skb;
if (!skb_headlen(list_skb)) {
BUG_ON(!nfrags);
} else {
BUG_ON(!list_skb->head_frag);
/* to make room for head_frag. */
i--;
frag--;
}
list_skb = list_skb->next;
}
if (unlikely(skb_shinfo(nskb)->nr_frags >=
MAX_SKB_FRAGS)) {
net_warn_ratelimited(
"skb_segment: too many frags: %u %u\n",
pos, mss);
err = -EINVAL;
goto err;
}
*nskb_frag = (i < 0) ? skb_head_frag_to_page_desc(frag_skb) : *frag;
__skb_frag_ref(nskb_frag);
size = skb_frag_size(nskb_frag);
if (pos < offset) {
skb_frag_off_add(nskb_frag, offset - pos);
skb_frag_size_sub(nskb_frag, offset - pos);
}
skb_shinfo(nskb)->nr_frags++;
if (pos + size <= offset + len) {
i++;
frag++;
pos += size;
} else {
skb_frag_size_sub(nskb_frag, pos + size - (offset + len));
goto skip_fraglist;
}
nskb_frag++;
}
skip_fraglist:
nskb->data_len = len - hsize;
nskb->len += nskb->data_len;
nskb->truesize += nskb->data_len;
perform_csum_check:
if (!csum) {
if (skb_has_shared_frag(nskb) &&
__skb_linearize(nskb))
goto err;
if (!nskb->remcsum_offload)
nskb->ip_summed = CHECKSUM_NONE;
SKB_GSO_CB(nskb)->csum =
skb_checksum(nskb, doffset,
nskb->len - doffset, 0);
SKB_GSO_CB(nskb)->csum_start =
skb_headroom(nskb) + doffset;
}
} while ((offset += len) < head_skb->len);
/* Some callers want to get the end of the list.
* Put it in segs->prev to avoid walking the list.
* (see validate_xmit_skb_list() for example)
*/
segs->prev = tail;
if (partial_segs) {
struct sk_buff *iter;
int type = skb_shinfo(head_skb)->gso_type;
unsigned short gso_size = skb_shinfo(head_skb)->gso_size;
/* Update type to add partial and then remove dodgy if set */
type |= (features & NETIF_F_GSO_PARTIAL) / NETIF_F_GSO_PARTIAL * SKB_GSO_PARTIAL;
type &= ~SKB_GSO_DODGY;
/* Update GSO info and prepare to start updating headers on
* our way back down the stack of protocols.
*/
for (iter = segs; iter; iter = iter->next) {
skb_shinfo(iter)->gso_size = gso_size;
skb_shinfo(iter)->gso_segs = partial_segs;
skb_shinfo(iter)->gso_type = type;
SKB_GSO_CB(iter)->data_offset = skb_headroom(iter) + doffset;
}
if (tail->len - doffset <= gso_size)
skb_shinfo(tail)->gso_size = 0;
else if (tail != segs)
skb_shinfo(tail)->gso_segs = DIV_ROUND_UP(tail->len - doffset, gso_size);
}
/* Following permits correct backpressure, for protocols
* using skb_set_owner_w().
* Idea is to transfer ownership from head_skb to last segment.
*/
if (head_skb->destructor == sock_wfree) {
swap(tail->truesize, head_skb->truesize);
swap(tail->destructor, head_skb->destructor);
swap(tail->sk, head_skb->sk);
}
return segs;
err:
kfree_skb_list(segs);
return ERR_PTR(err);
}
| [[4526, "\t\tpartial_segs = len / mss;\n"]] | [[4526, "partial_segs = len / mss;"]] | [
"CVE-2023-52435"
] | [
"CWE-119"
] | 48 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [
"GSO_BY_FRAGS"
],
"Type Execution Declaration": []
} |
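The function body above is the vulnerable skb_segment(); the next row carries the fixing commit, which notes that mss * partial_segs can land exactly on GSO_BY_FRAGS. Below is a standalone userspace sketch of that arithmetic, assuming GSO_BY_FRAGS == 65535 as the commit message states; the mss and len values are illustrative picks, chosen because 3 divides 65535 = 3 * 5 * 17 * 257.

#include <stdio.h>

#define GSO_BY_FRAGS 65535u	/* assumed sentinel value, per the commit message */

int main(void)
{
	unsigned int mss = 3;		/* any divisor of 65535 = 3 * 5 * 17 * 257 works */
	unsigned int len = 65535;	/* payload length handed to skb_segment() */
	unsigned int partial_segs = len / mss;	/* 21845 */

	mss *= partial_segs;		/* 3 * 21845 == 65535: the forbidden value */
	if (mss == GSO_BY_FRAGS)
		printf("scaled mss %u collides with the GSO_BY_FRAGS sentinel\n", mss);
	return 0;
}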
48 | linux | https://github.com/torvalds/linux | net/core/skbuff.c | 23d05d563b7e7b0314e65c8e882bc27eac2da8e7 | net: prevent mss overflow in skb_segment()
Once again syzbot is able to crash the kernel in skb_segment() [1]
GSO_BY_FRAGS is a forbidden value, but unfortunately the following
computation in skb_segment() can reach it quite easily:
mss = mss * partial_segs;
65535 = 3 * 5 * 17 * 257, so many initial values of mss can lead to
a bad final result.
Make sure to limit segmentation so that the new mss value is smaller
than GSO_BY_FRAGS.
[1]
general protection fault, probably for non-canonical address 0xdffffc000000000e: 0000 [#1] PREEMPT SMP KASAN
KASAN: null-ptr-deref in range [0x0000000000000070-0x0000000000000077]
CPU: 1 PID: 5079 Comm: syz-executor993 Not tainted 6.7.0-rc4-syzkaller-00141-g1ae4cd3cbdd0 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/10/2023
RIP: 0010:skb_segment+0x181d/0x3f30 net/core/skbuff.c:4551
Code: 83 e3 02 e9 fb ed ff ff e8 90 68 1c f9 48 8b 84 24 f8 00 00 00 48 8d 78 70 48 b8 00 00 00 00 00 fc ff df 48 89 fa 48 c1 ea 03 <0f> b6 04 02 84 c0 74 08 3c 03 0f 8e 8a 21 00 00 48 8b 84 24 f8 00
RSP: 0018:ffffc900043473d0 EFLAGS: 00010202
RAX: dffffc0000000000 RBX: 0000000000010046 RCX: ffffffff886b1597
RDX: 000000000000000e RSI: ffffffff886b2520 RDI: 0000000000000070
RBP: ffffc90004347578 R08: 0000000000000005 R09: 000000000000ffff
R10: 000000000000ffff R11: 0000000000000002 R12: ffff888063202ac0
R13: 0000000000010000 R14: 000000000000ffff R15: 0000000000000046
FS: 0000555556e7e380(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000020010000 CR3: 0000000027ee2000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
udp6_ufo_fragment+0xa0e/0xd00 net/ipv6/udp_offload.c:109
ipv6_gso_segment+0x534/0x17e0 net/ipv6/ip6_offload.c:120
skb_mac_gso_segment+0x290/0x610 net/core/gso.c:53
__skb_gso_segment+0x339/0x710 net/core/gso.c:124
skb_gso_segment include/net/gso.h:83 [inline]
validate_xmit_skb+0x36c/0xeb0 net/core/dev.c:3626
__dev_queue_xmit+0x6f3/0x3d60 net/core/dev.c:4338
dev_queue_xmit include/linux/netdevice.h:3134 [inline]
packet_xmit+0x257/0x380 net/packet/af_packet.c:276
packet_snd net/packet/af_packet.c:3087 [inline]
packet_sendmsg+0x24c6/0x5220 net/packet/af_packet.c:3119
sock_sendmsg_nosec net/socket.c:730 [inline]
__sock_sendmsg+0xd5/0x180 net/socket.c:745
__sys_sendto+0x255/0x340 net/socket.c:2190
__do_sys_sendto net/socket.c:2202 [inline]
__se_sys_sendto net/socket.c:2198 [inline]
__x64_sys_sendto+0xe0/0x1b0 net/socket.c:2198
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0x40/0x110 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x63/0x6b
RIP: 0033:0x7f8692032aa9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 d1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fff8d685418 EFLAGS: 00000246 ORIG_RAX: 000000000000002c
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 00007f8692032aa9
RDX: 0000000000010048 RSI: 00000000200000c0 RDI: 0000000000000003
RBP: 00000000000f4240 R08: 0000000020000540 R09: 0000000000000014
R10: 0000000000000000 R11: 0000000000000246 R12: 00007fff8d685480
R13: 0000000000000001 R14: 00007fff8d685480 R15: 0000000000000003
</TASK>
Modules linked in:
---[ end trace 0000000000000000 ]---
RIP: 0010:skb_segment+0x181d/0x3f30 net/core/skbuff.c:4551
Code: 83 e3 02 e9 fb ed ff ff e8 90 68 1c f9 48 8b 84 24 f8 00 00 00 48 8d 78 70 48 b8 00 00 00 00 00 fc ff df 48 89 fa 48 c1 ea 03 <0f> b6 04 02 84 c0 74 08 3c 03 0f 8e 8a 21 00 00 48 8b 84 24 f8 00
RSP: 0018:ffffc900043473d0 EFLAGS: 00010202
RAX: dffffc0000000000 RBX: 0000000000010046 RCX: ffffffff886b1597
RDX: 000000000000000e RSI: ffffffff886b2520 RDI: 0000000000000070
RBP: ffffc90004347578 R08: 0000000000000005 R09: 000000000000ffff
R10: 000000000000ffff R11: 0000000000000002 R12: ffff888063202ac0
R13: 0000000000010000 R14: 000000000000ffff R15: 0000000000000046
FS: 0000555556e7e380(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000020010000 CR3: 0000000027ee2000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Fixes: 3953c46c3ac7 ("sk_buff: allow segmenting based on frag sizes")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Link: https://lore.kernel.org/r/20231212164621.4131800-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org> | false | 70d5997a37d55526b7145ec1ac92a699 | skb_segment | struct sk_buff *skb_segment(struct sk_buff *head_skb,
netdev_features_t features)
{
struct sk_buff *segs = NULL;
struct sk_buff *tail = NULL;
struct sk_buff *list_skb = skb_shinfo(head_skb)->frag_list;
unsigned int mss = skb_shinfo(head_skb)->gso_size;
unsigned int doffset = head_skb->data - skb_mac_header(head_skb);
unsigned int offset = doffset;
unsigned int tnl_hlen = skb_tnl_header_len(head_skb);
unsigned int partial_segs = 0;
unsigned int headroom;
unsigned int len = head_skb->len;
struct sk_buff *frag_skb;
skb_frag_t *frag;
__be16 proto;
bool csum, sg;
int err = -ENOMEM;
int i = 0;
int nfrags, pos;
if ((skb_shinfo(head_skb)->gso_type & SKB_GSO_DODGY) &&
mss != GSO_BY_FRAGS && mss != skb_headlen(head_skb)) {
struct sk_buff *check_skb;
for (check_skb = list_skb; check_skb; check_skb = check_skb->next) {
if (skb_headlen(check_skb) && !check_skb->head_frag) {
/* gso_size is untrusted, and we have a frag_list with
* a linear non head_frag item.
*
* If head_skb's headlen does not fit requested gso_size,
* it means that the frag_list members do NOT terminate
* on exact gso_size boundaries. Hence we cannot perform
* skb_frag_t page sharing. Therefore we must fall back to
* copying the frag_list skbs; we do so by disabling SG.
*/
features &= ~NETIF_F_SG;
break;
}
}
}
__skb_push(head_skb, doffset);
proto = skb_network_protocol(head_skb, NULL);
if (unlikely(!proto))
return ERR_PTR(-EINVAL);
sg = !!(features & NETIF_F_SG);
csum = !!can_checksum_protocol(features, proto);
if (sg && csum && (mss != GSO_BY_FRAGS)) {
if (!(features & NETIF_F_GSO_PARTIAL)) {
struct sk_buff *iter;
unsigned int frag_len;
if (!list_skb ||
!net_gso_ok(features, skb_shinfo(head_skb)->gso_type))
goto normal;
/* If we get here then all the required
* GSO features except frag_list are supported.
* Try to split the SKB to multiple GSO SKBs
* with no frag_list.
* Currently we can do that only when the buffers don't
* have a linear part and all the buffers except
* the last are of the same length.
*/
frag_len = list_skb->len;
skb_walk_frags(head_skb, iter) {
if (frag_len != iter->len && iter->next)
goto normal;
if (skb_headlen(iter) && !iter->head_frag)
goto normal;
len -= iter->len;
}
if (len != frag_len)
goto normal;
}
/* GSO partial only requires that we trim off any excess that
* doesn't fit into an MSS sized block, so take care of that
* now.
* Cap len to not accidentally hit GSO_BY_FRAGS.
*/
partial_segs = min(len, GSO_BY_FRAGS - 1) / mss;
if (partial_segs > 1)
mss *= partial_segs;
else
partial_segs = 0;
}
normal:
headroom = skb_headroom(head_skb);
pos = skb_headlen(head_skb);
if (skb_orphan_frags(head_skb, GFP_ATOMIC))
return ERR_PTR(-ENOMEM);
nfrags = skb_shinfo(head_skb)->nr_frags;
frag = skb_shinfo(head_skb)->frags;
frag_skb = head_skb;
do {
struct sk_buff *nskb;
skb_frag_t *nskb_frag;
int hsize;
int size;
if (unlikely(mss == GSO_BY_FRAGS)) {
len = list_skb->len;
} else {
len = head_skb->len - offset;
if (len > mss)
len = mss;
}
hsize = skb_headlen(head_skb) - offset;
if (hsize <= 0 && i >= nfrags && skb_headlen(list_skb) &&
(skb_headlen(list_skb) == len || sg)) {
BUG_ON(skb_headlen(list_skb) > len);
nskb = skb_clone(list_skb, GFP_ATOMIC);
if (unlikely(!nskb))
goto err;
i = 0;
nfrags = skb_shinfo(list_skb)->nr_frags;
frag = skb_shinfo(list_skb)->frags;
frag_skb = list_skb;
pos += skb_headlen(list_skb);
while (pos < offset + len) {
BUG_ON(i >= nfrags);
size = skb_frag_size(frag);
if (pos + size > offset + len)
break;
i++;
pos += size;
frag++;
}
list_skb = list_skb->next;
if (unlikely(pskb_trim(nskb, len))) {
kfree_skb(nskb);
goto err;
}
hsize = skb_end_offset(nskb);
if (skb_cow_head(nskb, doffset + headroom)) {
kfree_skb(nskb);
goto err;
}
nskb->truesize += skb_end_offset(nskb) - hsize;
skb_release_head_state(nskb);
__skb_push(nskb, doffset);
} else {
if (hsize < 0)
hsize = 0;
if (hsize > len || !sg)
hsize = len;
nskb = __alloc_skb(hsize + doffset + headroom,
GFP_ATOMIC, skb_alloc_rx_flag(head_skb),
NUMA_NO_NODE);
if (unlikely(!nskb))
goto err;
skb_reserve(nskb, headroom);
__skb_put(nskb, doffset);
}
if (segs)
tail->next = nskb;
else
segs = nskb;
tail = nskb;
__copy_skb_header(nskb, head_skb);
skb_headers_offset_update(nskb, skb_headroom(nskb) - headroom);
skb_reset_mac_len(nskb);
skb_copy_from_linear_data_offset(head_skb, -tnl_hlen,
nskb->data - tnl_hlen,
doffset + tnl_hlen);
if (nskb->len == len + doffset)
goto perform_csum_check;
if (!sg) {
if (!csum) {
if (!nskb->remcsum_offload)
nskb->ip_summed = CHECKSUM_NONE;
SKB_GSO_CB(nskb)->csum =
skb_copy_and_csum_bits(head_skb, offset,
skb_put(nskb,
len),
len);
SKB_GSO_CB(nskb)->csum_start =
skb_headroom(nskb) + doffset;
} else {
if (skb_copy_bits(head_skb, offset, skb_put(nskb, len), len))
goto err;
}
continue;
}
nskb_frag = skb_shinfo(nskb)->frags;
skb_copy_from_linear_data_offset(head_skb, offset,
skb_put(nskb, hsize), hsize);
skb_shinfo(nskb)->flags |= skb_shinfo(head_skb)->flags &
SKBFL_SHARED_FRAG;
if (skb_zerocopy_clone(nskb, frag_skb, GFP_ATOMIC))
goto err;
while (pos < offset + len) {
if (i >= nfrags) {
if (skb_orphan_frags(list_skb, GFP_ATOMIC) ||
skb_zerocopy_clone(nskb, list_skb,
GFP_ATOMIC))
goto err;
i = 0;
nfrags = skb_shinfo(list_skb)->nr_frags;
frag = skb_shinfo(list_skb)->frags;
frag_skb = list_skb;
if (!skb_headlen(list_skb)) {
BUG_ON(!nfrags);
} else {
BUG_ON(!list_skb->head_frag);
/* to make room for head_frag. */
i--;
frag--;
}
list_skb = list_skb->next;
}
if (unlikely(skb_shinfo(nskb)->nr_frags >=
MAX_SKB_FRAGS)) {
net_warn_ratelimited(
"skb_segment: too many frags: %u %u\n",
pos, mss);
err = -EINVAL;
goto err;
}
*nskb_frag = (i < 0) ? skb_head_frag_to_page_desc(frag_skb) : *frag;
__skb_frag_ref(nskb_frag);
size = skb_frag_size(nskb_frag);
if (pos < offset) {
skb_frag_off_add(nskb_frag, offset - pos);
skb_frag_size_sub(nskb_frag, offset - pos);
}
skb_shinfo(nskb)->nr_frags++;
if (pos + size <= offset + len) {
i++;
frag++;
pos += size;
} else {
skb_frag_size_sub(nskb_frag, pos + size - (offset + len));
goto skip_fraglist;
}
nskb_frag++;
}
skip_fraglist:
nskb->data_len = len - hsize;
nskb->len += nskb->data_len;
nskb->truesize += nskb->data_len;
perform_csum_check:
if (!csum) {
if (skb_has_shared_frag(nskb) &&
__skb_linearize(nskb))
goto err;
if (!nskb->remcsum_offload)
nskb->ip_summed = CHECKSUM_NONE;
SKB_GSO_CB(nskb)->csum =
skb_checksum(nskb, doffset,
nskb->len - doffset, 0);
SKB_GSO_CB(nskb)->csum_start =
skb_headroom(nskb) + doffset;
}
} while ((offset += len) < head_skb->len);
/* Some callers want to get the end of the list.
* Put it in segs->prev to avoid walking the list.
* (see validate_xmit_skb_list() for example)
*/
segs->prev = tail;
if (partial_segs) {
struct sk_buff *iter;
int type = skb_shinfo(head_skb)->gso_type;
unsigned short gso_size = skb_shinfo(head_skb)->gso_size;
/* Update type to add partial and then remove dodgy if set */
type |= (features & NETIF_F_GSO_PARTIAL) / NETIF_F_GSO_PARTIAL * SKB_GSO_PARTIAL;
type &= ~SKB_GSO_DODGY;
/* Update GSO info and prepare to start updating headers on
* our way back down the stack of protocols.
*/
for (iter = segs; iter; iter = iter->next) {
skb_shinfo(iter)->gso_size = gso_size;
skb_shinfo(iter)->gso_segs = partial_segs;
skb_shinfo(iter)->gso_type = type;
SKB_GSO_CB(iter)->data_offset = skb_headroom(iter) + doffset;
}
if (tail->len - doffset <= gso_size)
skb_shinfo(tail)->gso_size = 0;
else if (tail != segs)
skb_shinfo(tail)->gso_segs = DIV_ROUND_UP(tail->len - doffset, gso_size);
}
/* Following permits correct backpressure, for protocols
* using skb_set_owner_w().
* Idea is to transfer ownership from head_skb to last segment.
*/
if (head_skb->destructor == sock_wfree) {
swap(tail->truesize, head_skb->truesize);
swap(tail->destructor, head_skb->destructor);
swap(tail->sk, head_skb->sk);
}
return segs;
err:
kfree_skb_list(segs);
return ERR_PTR(err);
}
| [[4525, "\t\t * Cap len to not accidentally hit GSO_BY_FRAGS.\n"], [4527, "\t\tpartial_segs = min(len, GSO_BY_FRAGS - 1) / mss;\n"]] | [[4522, "/* GSO partial only requires that we trim off any excess that\n\t\t * doesn't fit into an MSS sized block, so take care of that\n\t\t * now.\n\t\t * Cap len to not accidentally hit GSO_BY_FRAGS.\n\t\t */"], [4527, "partial_segs = min(len, GSO_BY_FRAGS - 1) / mss;"]] | [
"CVE-2023-52435"
] | [
"CWE-119"
] | 48 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [
"GSO_BY_FRAGS"
],
"Type Execution Declaration": []
} |
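A companion sketch for the fix in the row above: capping len at GSO_BY_FRAGS - 1 before the division keeps the product below the sentinel for every mss. This is a userspace check harness, not kernel code; min() is spelled out by hand.

#include <assert.h>
#include <stdio.h>

#define GSO_BY_FRAGS 65535u

/* min(len, GSO_BY_FRAGS - 1) / mss, written without the kernel's min() */
static unsigned int capped_segs(unsigned int len, unsigned int mss)
{
	unsigned int capped = len < GSO_BY_FRAGS - 1 ? len : GSO_BY_FRAGS - 1;

	return capped / mss;
}

int main(void)
{
	/* For every mss, mss * partial_segs now stays below the sentinel. */
	for (unsigned int mss = 1; mss < GSO_BY_FRAGS; mss++)
		assert(mss * capped_segs(0xffffffffu, mss) < GSO_BY_FRAGS);
	printf("capped product never reaches GSO_BY_FRAGS\n");
	return 0;
}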
49 | linux | https://github.com/torvalds/linux | net/ipv6/ip6_output.c | 2811ebac2521ceac84f2bdae402455baa6a7fb47 | ipv6: udp packets following an UFO enqueued packet need also be handled by UFO
In the following scenario the socket is corked:
If the first UDP packet is larger than the mtu, we try to append it to the
write queue via ip6_ufo_append_data. A following packet, which is smaller
than the mtu, would be appended to the already queued-up gso-skb via
plain ip6_append_data. This causes random memory corruptions.
In ip6_ufo_append_data we also have to be careful to not queue up the
same skb multiple times. So set up the gso frame only when no first skb
is available.
This also fixes a shortcoming where we add the current packet's length to
cork->length but return early because of a packet > mtu with dontfrag set
(instead of subtracting it again).
Found with trinity.
Cc: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | true | b3f215a3ecde0c9ee380b673673eb026 | ip6_ufo_append_data | static inline int ip6_ufo_append_data(struct sock *sk,
int getfrag(void *from, char *to, int offset, int len,
int odd, struct sk_buff *skb),
void *from, int length, int hh_len, int fragheaderlen,
int transhdrlen, int mtu,unsigned int flags,
struct rt6_info *rt)
{
struct sk_buff *skb;
int err;
/* There is support for UDP large send offload by network
* device, so create one single skb packet containing the complete
* udp datagram
*/
if ((skb = skb_peek_tail(&sk->sk_write_queue)) == NULL) {
skb = sock_alloc_send_skb(sk,
hh_len + fragheaderlen + transhdrlen + 20,
(flags & MSG_DONTWAIT), &err);
if (skb == NULL)
return err;
/* reserve space for Hardware header */
skb_reserve(skb, hh_len);
/* create space for UDP/IP header */
skb_put(skb,fragheaderlen + transhdrlen);
/* initialize network header pointer */
skb_reset_network_header(skb);
/* initialize protocol header pointer */
skb->transport_header = skb->network_header + fragheaderlen;
skb->protocol = htons(ETH_P_IPV6);
skb->ip_summed = CHECKSUM_PARTIAL;
skb->csum = 0;
}
err = skb_append_datato_frags(sk,skb, getfrag, from,
(length - transhdrlen));
if (!err) {
struct frag_hdr fhdr;
/* Specify the length of each IPv6 datagram fragment.
* It has to be a multiple of 8.
*/
skb_shinfo(skb)->gso_size = (mtu - fragheaderlen -
sizeof(struct frag_hdr)) & ~7;
skb_shinfo(skb)->gso_type = SKB_GSO_UDP;
ipv6_select_ident(&fhdr, rt);
skb_shinfo(skb)->ip6_frag_id = fhdr.identification;
__skb_queue_tail(&sk->sk_write_queue, skb);
return 0;
}
/* There is not enough support to do UDP LSO,
* so follow normal path
*/
kfree_skb(skb);
return err;
}
| [[1039, "\t}\n"], [1040, "\n"], [1041, "\terr = skb_append_datato_frags(sk,skb, getfrag, from,\n"], [1042, "\t\t\t\t (length - transhdrlen));\n"], [1043, "\tif (!err) {\n"], [1044, "\t\tstruct frag_hdr fhdr;\n"], [1055, "\n"], [1056, "\t\treturn 0;\n"], [1058, "\t/* There is not enough support do UPD LSO,\n"], [1059, "\t * so follow normal path\n"], [1060, "\t */\n"], [1061, "\tkfree_skb(skb);\n"], [1063, "\treturn err;\n"]] | [[1039, "\t}\n"], [1040, "\n"], [1041, "err = skb_append_datato_frags(sk,skb, getfrag, from,\n\t\t\t\t (length - transhdrlen));"], [1043, "if (!err)"], [1044, "struct frag_hdr fhdr;"], [1055, "\n"], [1056, "return 0;"], [1058, "/* There is not enough support do UPD LSO,\n\t * so follow normal path\n\t */"], [1061, "kfree_skb(skb);"], [1063, "return err;"]] | [
"CVE-2013-4387"
] | [
"CWE-119"
] | 51 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"skb_peek_tail",
"sock_alloc_send_skb",
"skb_reserve",
"skb_put",
"skb_reset_network_header",
"skb_append_datato_frags",
"ipv6_select_ident"
],
"Function Argument": [
"sk",
"getfrag",
"from",
"length",
"hh_len",
"fragheaderlen",
"transhdrlen",
"mtu",
"flags",
"rt"
],
"Globals": [],
"Type Execution Declaration": [
"struct sock",
"struct sk_buff",
"struct rt6_info",
"struct frag_hdr"
]
} |
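A hypothetical userspace trigger shape for the corked-socket scenario this row's commit message describes: an oversized first chunk followed by a small one on the same corked UDPv6 socket. Whether it actually reaches the old UFO path depends on the pre-fix kernel and on the egress device advertising NETIF_F_UFO; the destination address and port are placeholders.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	char big[2000] = { 0 };		/* first chunk: larger than a 1500-byte MTU */
	char small[100] = { 0 };	/* second chunk: smaller than the MTU */
	struct sockaddr_in6 dst = { .sin6_family = AF_INET6,
				    .sin6_port = htons(9) };
	int fd;

	inet_pton(AF_INET6, "::1", &dst.sin6_addr);	/* placeholder destination */
	fd = socket(AF_INET6, SOCK_DGRAM, 0);
	if (fd < 0 || connect(fd, (struct sockaddr *)&dst, sizeof(dst)))
		return 1;

	/* MSG_MORE keeps the socket corked between the two appends. */
	send(fd, big, sizeof(big), MSG_MORE);	/* first append, over the MTU */
	send(fd, small, sizeof(small), 0);	/* small append flushes the datagram */
	close(fd);
	return 0;
}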
50 | linux | https://github.com/torvalds/linux | net/ipv6/ip6_output.c | 2811ebac2521ceac84f2bdae402455baa6a7fb47 | ipv6: udp packets following an UFO enqueued packet need also be handled by UFO
In the following scenario the socket is corked:
If the first UDP packet is larger than the mtu, we try to append it to the
write queue via ip6_ufo_append_data. A following packet, which is smaller
than the mtu, would be appended to the already queued-up gso-skb via
plain ip6_append_data. This causes random memory corruptions.
In ip6_ufo_append_data we also have to be careful to not queue up the
same skb multiple times. So set up the gso frame only when no first skb
is available.
This also fixes a shortcoming where we add the current packet's length to
cork->length but return early because of a packet > mtu with dontfrag set
(instead of subtracting it again).
Found with trinity.
Cc: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | true | 0d93ed0dca48b146b6caaa5774fe9256 | ip6_append_data | int ip6_append_data(struct sock *sk, int getfrag(void *from, char *to,
int offset, int len, int odd, struct sk_buff *skb),
void *from, int length, int transhdrlen,
int hlimit, int tclass, struct ipv6_txoptions *opt, struct flowi6 *fl6,
struct rt6_info *rt, unsigned int flags, int dontfrag)
{
struct inet_sock *inet = inet_sk(sk);
struct ipv6_pinfo *np = inet6_sk(sk);
struct inet_cork *cork;
struct sk_buff *skb, *skb_prev = NULL;
unsigned int maxfraglen, fragheaderlen, mtu;
int exthdrlen;
int dst_exthdrlen;
int hh_len;
int copy;
int err;
int offset = 0;
__u8 tx_flags = 0;
if (flags&MSG_PROBE)
return 0;
cork = &inet->cork.base;
if (skb_queue_empty(&sk->sk_write_queue)) {
/*
* setup for corking
*/
if (opt) {
if (WARN_ON(np->cork.opt))
return -EINVAL;
np->cork.opt = kzalloc(opt->tot_len, sk->sk_allocation);
if (unlikely(np->cork.opt == NULL))
return -ENOBUFS;
np->cork.opt->tot_len = opt->tot_len;
np->cork.opt->opt_flen = opt->opt_flen;
np->cork.opt->opt_nflen = opt->opt_nflen;
np->cork.opt->dst0opt = ip6_opt_dup(opt->dst0opt,
sk->sk_allocation);
if (opt->dst0opt && !np->cork.opt->dst0opt)
return -ENOBUFS;
np->cork.opt->dst1opt = ip6_opt_dup(opt->dst1opt,
sk->sk_allocation);
if (opt->dst1opt && !np->cork.opt->dst1opt)
return -ENOBUFS;
np->cork.opt->hopopt = ip6_opt_dup(opt->hopopt,
sk->sk_allocation);
if (opt->hopopt && !np->cork.opt->hopopt)
return -ENOBUFS;
np->cork.opt->srcrt = ip6_rthdr_dup(opt->srcrt,
sk->sk_allocation);
if (opt->srcrt && !np->cork.opt->srcrt)
return -ENOBUFS;
/* need source address above miyazawa*/
}
dst_hold(&rt->dst);
cork->dst = &rt->dst;
inet->cork.fl.u.ip6 = *fl6;
np->cork.hop_limit = hlimit;
np->cork.tclass = tclass;
if (rt->dst.flags & DST_XFRM_TUNNEL)
mtu = np->pmtudisc == IPV6_PMTUDISC_PROBE ?
rt->dst.dev->mtu : dst_mtu(&rt->dst);
else
mtu = np->pmtudisc == IPV6_PMTUDISC_PROBE ?
rt->dst.dev->mtu : dst_mtu(rt->dst.path);
if (np->frag_size < mtu) {
if (np->frag_size)
mtu = np->frag_size;
}
cork->fragsize = mtu;
if (dst_allfrag(rt->dst.path))
cork->flags |= IPCORK_ALLFRAG;
cork->length = 0;
exthdrlen = (opt ? opt->opt_flen : 0);
length += exthdrlen;
transhdrlen += exthdrlen;
dst_exthdrlen = rt->dst.header_len - rt->rt6i_nfheader_len;
} else {
rt = (struct rt6_info *)cork->dst;
fl6 = &inet->cork.fl.u.ip6;
opt = np->cork.opt;
transhdrlen = 0;
exthdrlen = 0;
dst_exthdrlen = 0;
mtu = cork->fragsize;
}
hh_len = LL_RESERVED_SPACE(rt->dst.dev);
fragheaderlen = sizeof(struct ipv6hdr) + rt->rt6i_nfheader_len +
(opt ? opt->opt_nflen : 0);
maxfraglen = ((mtu - fragheaderlen) & ~7) + fragheaderlen - sizeof(struct frag_hdr);
if (mtu <= sizeof(struct ipv6hdr) + IPV6_MAXPLEN) {
if (cork->length + length > sizeof(struct ipv6hdr) + IPV6_MAXPLEN - fragheaderlen) {
ipv6_local_error(sk, EMSGSIZE, fl6, mtu-exthdrlen);
return -EMSGSIZE;
}
}
/* For UDP, check if TX timestamp is enabled */
if (sk->sk_type == SOCK_DGRAM)
sock_tx_timestamp(sk, &tx_flags);
/*
* Let's try using as much space as possible.
* Use MTU if total length of the message fits into the MTU.
* Otherwise, we need to reserve fragment header and
* fragment alignment (= 8-15 octets, in total).
*
* Note that we may need to "move" the data from the tail
* of the buffer to the new fragment when we split
* the message.
*
* FIXME: It may be fragmented into multiple chunks
* at once if non-fragmentable extension headers
* are too large.
* --yoshfuji
*/
cork->length += length;
if (length > mtu) {
int proto = sk->sk_protocol;
if (dontfrag && (proto == IPPROTO_UDP || proto == IPPROTO_RAW)){
ipv6_local_rxpmtu(sk, fl6, mtu-exthdrlen);
return -EMSGSIZE;
}
if (proto == IPPROTO_UDP &&
(rt->dst.dev->features & NETIF_F_UFO)) {
err = ip6_ufo_append_data(sk, getfrag, from, length,
hh_len, fragheaderlen,
transhdrlen, mtu, flags, rt);
if (err)
goto error;
return 0;
}
}
if ((skb = skb_peek_tail(&sk->sk_write_queue)) == NULL)
goto alloc_new_skb;
while (length > 0) {
/* Check if the remaining data fits into current packet. */
copy = (cork->length <= mtu && !(cork->flags & IPCORK_ALLFRAG) ? mtu : maxfraglen) - skb->len;
if (copy < length)
copy = maxfraglen - skb->len;
if (copy <= 0) {
char *data;
unsigned int datalen;
unsigned int fraglen;
unsigned int fraggap;
unsigned int alloclen;
alloc_new_skb:
/* There's no room in the current skb */
if (skb)
fraggap = skb->len - maxfraglen;
else
fraggap = 0;
/* update mtu and maxfraglen if necessary */
if (skb == NULL || skb_prev == NULL)
ip6_append_data_mtu(&mtu, &maxfraglen,
fragheaderlen, skb, rt,
np->pmtudisc ==
IPV6_PMTUDISC_PROBE);
skb_prev = skb;
/*
* If remaining data exceeds the mtu,
* we know we need more fragment(s).
*/
datalen = length + fraggap;
if (datalen > (cork->length <= mtu && !(cork->flags & IPCORK_ALLFRAG) ? mtu : maxfraglen) - fragheaderlen)
datalen = maxfraglen - fragheaderlen - rt->dst.trailer_len;
if ((flags & MSG_MORE) &&
!(rt->dst.dev->features&NETIF_F_SG))
alloclen = mtu;
else
alloclen = datalen + fragheaderlen;
alloclen += dst_exthdrlen;
if (datalen != length + fraggap) {
/*
* this is not the last fragment, the trailer
* space is regarded as data space.
*/
datalen += rt->dst.trailer_len;
}
alloclen += rt->dst.trailer_len;
fraglen = datalen + fragheaderlen;
/*
* We just reserve space for fragment header.
* Note: this may be overallocation if the message
* (without MSG_MORE) fits into the MTU.
*/
alloclen += sizeof(struct frag_hdr);
if (transhdrlen) {
skb = sock_alloc_send_skb(sk,
alloclen + hh_len,
(flags & MSG_DONTWAIT), &err);
} else {
skb = NULL;
if (atomic_read(&sk->sk_wmem_alloc) <=
2 * sk->sk_sndbuf)
skb = sock_wmalloc(sk,
alloclen + hh_len, 1,
sk->sk_allocation);
if (unlikely(skb == NULL))
err = -ENOBUFS;
else {
/* Only the initial fragment
* is time stamped.
*/
tx_flags = 0;
}
}
if (skb == NULL)
goto error;
/*
* Fill in the control structures
*/
skb->protocol = htons(ETH_P_IPV6);
skb->ip_summed = CHECKSUM_NONE;
skb->csum = 0;
/* reserve for fragmentation and ipsec header */
skb_reserve(skb, hh_len + sizeof(struct frag_hdr) +
dst_exthdrlen);
if (sk->sk_type == SOCK_DGRAM)
skb_shinfo(skb)->tx_flags = tx_flags;
/*
* Find where to start putting bytes
*/
data = skb_put(skb, fraglen);
skb_set_network_header(skb, exthdrlen);
data += fragheaderlen;
skb->transport_header = (skb->network_header +
fragheaderlen);
if (fraggap) {
skb->csum = skb_copy_and_csum_bits(
skb_prev, maxfraglen,
data + transhdrlen, fraggap, 0);
skb_prev->csum = csum_sub(skb_prev->csum,
skb->csum);
data += fraggap;
pskb_trim_unique(skb_prev, maxfraglen);
}
copy = datalen - transhdrlen - fraggap;
if (copy < 0) {
err = -EINVAL;
kfree_skb(skb);
goto error;
} else if (copy > 0 && getfrag(from, data + transhdrlen, offset, copy, fraggap, skb) < 0) {
err = -EFAULT;
kfree_skb(skb);
goto error;
}
offset += copy;
length -= datalen - fraggap;
transhdrlen = 0;
exthdrlen = 0;
dst_exthdrlen = 0;
/*
* Put the packet on the pending queue
*/
__skb_queue_tail(&sk->sk_write_queue, skb);
continue;
}
if (copy > length)
copy = length;
if (!(rt->dst.dev->features&NETIF_F_SG)) {
unsigned int off;
off = skb->len;
if (getfrag(from, skb_put(skb, copy),
offset, copy, off, skb) < 0) {
__skb_trim(skb, off);
err = -EFAULT;
goto error;
}
} else {
int i = skb_shinfo(skb)->nr_frags;
struct page_frag *pfrag = sk_page_frag(sk);
err = -ENOMEM;
if (!sk_page_frag_refill(sk, pfrag))
goto error;
if (!skb_can_coalesce(skb, i, pfrag->page,
pfrag->offset)) {
err = -EMSGSIZE;
if (i == MAX_SKB_FRAGS)
goto error;
__skb_fill_page_desc(skb, i, pfrag->page,
pfrag->offset, 0);
skb_shinfo(skb)->nr_frags = ++i;
get_page(pfrag->page);
}
copy = min_t(int, copy, pfrag->size - pfrag->offset);
if (getfrag(from,
page_address(pfrag->page) + pfrag->offset,
offset, copy, skb->len, skb) < 0)
goto error_efault;
pfrag->offset += copy;
skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
skb->len += copy;
skb->data_len += copy;
skb->truesize += copy;
atomic_add(copy, &sk->sk_wmem_alloc);
}
offset += copy;
length -= copy;
}
return 0;
error_efault:
err = -EFAULT;
error:
cork->length -= length;
IP6_INC_STATS(sock_net(sk), rt->rt6i_idev, IPSTATS_MIB_OUTDISCARDS);
return err;
}
| [[1230, "\tcork->length += length;\n"], [1231, "\tif (length > mtu) {\n"], [1232, "\t\tint proto = sk->sk_protocol;\n"], [1233, "\t\tif (dontfrag && (proto == IPPROTO_UDP || proto == IPPROTO_RAW)){\n"], [1234, "\t\t\tipv6_local_rxpmtu(sk, fl6, mtu-exthdrlen);\n"], [1235, "\t\t\treturn -EMSGSIZE;\n"], [1236, "\t\t}\n"], [1237, "\n"], [1238, "\t\tif (proto == IPPROTO_UDP &&\n"], [1239, "\t\t (rt->dst.dev->features & NETIF_F_UFO)) {\n"], [1241, "\t\t\terr = ip6_ufo_append_data(sk, getfrag, from, length,\n"], [1242, "\t\t\t\t\t\t hh_len, fragheaderlen,\n"], [1243, "\t\t\t\t\t\t transhdrlen, mtu, flags, rt);\n"], [1244, "\t\t\tif (err)\n"], [1245, "\t\t\t\tgoto error;\n"], [1246, "\t\t\treturn 0;\n"], [1247, "\t\t}\n"], [1250, "\tif ((skb = skb_peek_tail(&sk->sk_write_queue)) == NULL)\n"]] | [[1230, "cork->length += length;"], [1231, "if (length > mtu)"], [1232, "int proto = sk->sk_protocol;"], [1233, "if (dontfrag && (proto == IPPROTO_UDP || proto == IPPROTO_RAW))"], [1234, "ipv6_local_rxpmtu(sk, fl6, mtu-exthdrlen);"], [1235, "return -EMSGSIZE;"], [1236, "\t\t}\n"], [1237, "\n"], [1238, "if (proto == IPPROTO_UDP &&\n\t\t (rt->dst.dev->features & NETIF_F_UFO))"], [1241, "err = ip6_ufo_append_data(sk, getfrag, from, length,\n\t\t\t\t\t\t hh_len, fragheaderlen,\n\t\t\t\t\t\t transhdrlen, mtu, flags, rt);"], [1244, "if (err)"], [1245, "goto error;"], [1246, "return 0;"], [1247, "\t\t}\n"], [1250, "if ((skb = skb_peek_tail(&sk->sk_write_queue)) == NULL)"]] | [
"CVE-2013-4387"
] | [
"CWE-119"
] | 52 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"skb_is_gso",
"ip6_ufo_append_data",
"skb_peek_tail"
],
"Function Argument": [
"struct sock *sk",
"struct ipv6_txoptions *opt",
"struct flowi6 *fl6",
"struct rt6_info *rt",
"int dontfrag"
],
"Globals": [],
"Type Execution Declaration": [
"struct sock",
"struct rt6_info",
"struct sk_buff"
]
} |
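A minimal model, not kernel code, of the accounting shortcoming called out in this row's commit message: the vulnerable ip6_append_data adds length to cork->length before the dontfrag check and then returns -EMSGSIZE without subtracting it, so the cork counter stays inflated. The struct and return values below are made up for illustration.

#include <stdio.h>

struct cork_model { unsigned int length; };

/* Shape of the vulnerable flow: count the bytes first, bail out second. */
static int append_buggy(struct cork_model *cork, unsigned int length,
			unsigned int mtu, int dontfrag)
{
	cork->length += length;		/* counted before any validation */
	if (length > mtu && dontfrag)
		return -1;		/* early exit: the addition is never undone */
	/* ... the real function would queue the data here ... */
	return 0;
}

int main(void)
{
	struct cork_model cork = { 0 };

	append_buggy(&cork, 2000, 1500, 1);	/* rejected, yet still counted */
	append_buggy(&cork, 100, 1500, 1);	/* accepted */
	printf("cork.length = %u, but only 100 bytes were queued\n", cork.length);
	return 0;
}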
51 | linux | https://github.com/torvalds/linux | net/ipv6/ip6_output.c | 2811ebac2521ceac84f2bdae402455baa6a7fb47 | ipv6: udp packets following an UFO enqueued packet need also be handled by UFO
In the following scenario the socket is corked:
If the first UDP packet is larger than the mtu, we try to append it to the
write queue via ip6_ufo_append_data. A following packet, which is smaller
than the mtu, would be appended to the already queued-up gso-skb via
plain ip6_append_data. This causes random memory corruptions.
In ip6_ufo_append_data we also have to be careful to not queue up the
same skb multiple times. So set up the gso frame only when no first skb
is available.
This also fixes a shortcoming where we add the current packet's length to
cork->length but return early because of a packet > mtu with dontfrag set
(instead of subtracting it again).
Found with trinity.
Cc: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | false | ff07229fc5a6a24a8c1fd54c05d03321 | ip6_ufo_append_data | static inline int ip6_ufo_append_data(struct sock *sk,
int getfrag(void *from, char *to, int offset, int len,
int odd, struct sk_buff *skb),
void *from, int length, int hh_len, int fragheaderlen,
int transhdrlen, int mtu,unsigned int flags,
struct rt6_info *rt)
{
struct sk_buff *skb;
int err;
/* There is support for UDP large send offload by network
* device, so create one single skb packet containing the complete
* udp datagram
*/
if ((skb = skb_peek_tail(&sk->sk_write_queue)) == NULL) {
struct frag_hdr fhdr;
skb = sock_alloc_send_skb(sk,
hh_len + fragheaderlen + transhdrlen + 20,
(flags & MSG_DONTWAIT), &err);
if (skb == NULL)
return err;
/* reserve space for Hardware header */
skb_reserve(skb, hh_len);
/* create space for UDP/IP header */
skb_put(skb,fragheaderlen + transhdrlen);
/* initialize network header pointer */
skb_reset_network_header(skb);
/* initialize protocol header pointer */
skb->transport_header = skb->network_header + fragheaderlen;
skb->protocol = htons(ETH_P_IPV6);
skb->ip_summed = CHECKSUM_PARTIAL;
skb->csum = 0;
/* Specify the length of each IPv6 datagram fragment.
* It has to be a multiple of 8.
*/
skb_shinfo(skb)->gso_size = (mtu - fragheaderlen -
sizeof(struct frag_hdr)) & ~7;
skb_shinfo(skb)->gso_type = SKB_GSO_UDP;
ipv6_select_ident(&fhdr, rt);
skb_shinfo(skb)->ip6_frag_id = fhdr.identification;
__skb_queue_tail(&sk->sk_write_queue, skb);
}
return skb_append_datato_frags(sk, skb, getfrag, from,
(length - transhdrlen));
}
| [[1018, "\t\tstruct frag_hdr fhdr;\n"], [1019, "\n"], [1053, "\treturn skb_append_datato_frags(sk, skb, getfrag, from,\n"], [1054, "\t\t\t\t (length - transhdrlen));\n"]] | [[1018, "struct frag_hdr fhdr;"], [1019, "\n"], [1053, "return skb_append_datato_frags(sk, skb, getfrag, from,\n\t\t\t\t (length - transhdrlen));"]] | [
"CVE-2013-4387"
] | [
"CWE-119"
] | 51 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"skb_peek_tail",
"sock_alloc_send_skb",
"skb_reserve",
"skb_put",
"skb_reset_network_header",
"skb_append_datato_frags",
"ipv6_select_ident"
],
"Function Argument": [
"sk",
"getfrag",
"from",
"length",
"hh_len",
"fragheaderlen",
"transhdrlen",
"mtu",
"flags",
"rt"
],
"Globals": [],
"Type Execution Declaration": [
"struct sock",
"struct sk_buff",
"struct rt6_info",
"struct frag_hdr"
]
} |
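The fixed function in the row above performs its one-time GSO-frame setup only when skb_peek_tail() finds an empty write queue, so a second append reuses the queued frame instead of queueing the same skb again. A distilled sketch of that guard, using toy stand-ins for the skb helpers:

#include <stdio.h>

struct queue_model { int have_frame; int appended; };

static int ufo_append(struct queue_model *q, int bytes)
{
	if (!q->have_frame) {		/* mirrors skb_peek_tail() returning NULL */
		q->have_frame = 1;	/* allocate and queue the frame exactly once */
		printf("gso frame set up\n");
	}
	q->appended += bytes;		/* mirrors skb_append_datato_frags() */
	return 0;
}

int main(void)
{
	struct queue_model q = { 0 };

	ufo_append(&q, 2000);	/* first call performs the one-time setup */
	ufo_append(&q, 100);	/* second call only appends */
	printf("appended %d bytes to a single frame\n", q.appended);
	return 0;
}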
52 | linux | https://github.com/torvalds/linux | net/ipv6/ip6_output.c | 2811ebac2521ceac84f2bdae402455baa6a7fb47 | ipv6: udp packets following an UFO enqueued packet need also be handled by UFO
In the following scenario the socket is corked:
If the first UDP packet is larger than the mtu, we try to append it to the
write queue via ip6_ufo_append_data. A following packet, which is smaller
than the mtu, would be appended to the already queued-up gso-skb via
plain ip6_append_data. This causes random memory corruptions.
In ip6_ufo_append_data we also have to be careful to not queue up the
same skb multiple times. So set up the gso frame only when no first skb
is available.
This also fixes a shortcoming where we add the current packet's length to
cork->length but return early because of a packet > mtu with dontfrag set
(instead of subtracting it again).
Found with trinity.
Cc: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | false | d8548966db5ac5d31cb35aca029bc88c | ip6_append_data | int ip6_append_data(struct sock *sk, int getfrag(void *from, char *to,
int offset, int len, int odd, struct sk_buff *skb),
void *from, int length, int transhdrlen,
int hlimit, int tclass, struct ipv6_txoptions *opt, struct flowi6 *fl6,
struct rt6_info *rt, unsigned int flags, int dontfrag)
{
struct inet_sock *inet = inet_sk(sk);
struct ipv6_pinfo *np = inet6_sk(sk);
struct inet_cork *cork;
struct sk_buff *skb, *skb_prev = NULL;
unsigned int maxfraglen, fragheaderlen, mtu;
int exthdrlen;
int dst_exthdrlen;
int hh_len;
int copy;
int err;
int offset = 0;
__u8 tx_flags = 0;
if (flags&MSG_PROBE)
return 0;
cork = &inet->cork.base;
if (skb_queue_empty(&sk->sk_write_queue)) {
/*
* setup for corking
*/
if (opt) {
if (WARN_ON(np->cork.opt))
return -EINVAL;
np->cork.opt = kzalloc(opt->tot_len, sk->sk_allocation);
if (unlikely(np->cork.opt == NULL))
return -ENOBUFS;
np->cork.opt->tot_len = opt->tot_len;
np->cork.opt->opt_flen = opt->opt_flen;
np->cork.opt->opt_nflen = opt->opt_nflen;
np->cork.opt->dst0opt = ip6_opt_dup(opt->dst0opt,
sk->sk_allocation);
if (opt->dst0opt && !np->cork.opt->dst0opt)
return -ENOBUFS;
np->cork.opt->dst1opt = ip6_opt_dup(opt->dst1opt,
sk->sk_allocation);
if (opt->dst1opt && !np->cork.opt->dst1opt)
return -ENOBUFS;
np->cork.opt->hopopt = ip6_opt_dup(opt->hopopt,
sk->sk_allocation);
if (opt->hopopt && !np->cork.opt->hopopt)
return -ENOBUFS;
np->cork.opt->srcrt = ip6_rthdr_dup(opt->srcrt,
sk->sk_allocation);
if (opt->srcrt && !np->cork.opt->srcrt)
return -ENOBUFS;
/* need source address above miyazawa*/
}
dst_hold(&rt->dst);
cork->dst = &rt->dst;
inet->cork.fl.u.ip6 = *fl6;
np->cork.hop_limit = hlimit;
np->cork.tclass = tclass;
if (rt->dst.flags & DST_XFRM_TUNNEL)
mtu = np->pmtudisc == IPV6_PMTUDISC_PROBE ?
rt->dst.dev->mtu : dst_mtu(&rt->dst);
else
mtu = np->pmtudisc == IPV6_PMTUDISC_PROBE ?
rt->dst.dev->mtu : dst_mtu(rt->dst.path);
if (np->frag_size < mtu) {
if (np->frag_size)
mtu = np->frag_size;
}
cork->fragsize = mtu;
if (dst_allfrag(rt->dst.path))
cork->flags |= IPCORK_ALLFRAG;
cork->length = 0;
exthdrlen = (opt ? opt->opt_flen : 0);
length += exthdrlen;
transhdrlen += exthdrlen;
dst_exthdrlen = rt->dst.header_len - rt->rt6i_nfheader_len;
} else {
rt = (struct rt6_info *)cork->dst;
fl6 = &inet->cork.fl.u.ip6;
opt = np->cork.opt;
transhdrlen = 0;
exthdrlen = 0;
dst_exthdrlen = 0;
mtu = cork->fragsize;
}
hh_len = LL_RESERVED_SPACE(rt->dst.dev);
fragheaderlen = sizeof(struct ipv6hdr) + rt->rt6i_nfheader_len +
(opt ? opt->opt_nflen : 0);
maxfraglen = ((mtu - fragheaderlen) & ~7) + fragheaderlen - sizeof(struct frag_hdr);
if (mtu <= sizeof(struct ipv6hdr) + IPV6_MAXPLEN) {
if (cork->length + length > sizeof(struct ipv6hdr) + IPV6_MAXPLEN - fragheaderlen) {
ipv6_local_error(sk, EMSGSIZE, fl6, mtu-exthdrlen);
return -EMSGSIZE;
}
}
/* For UDP, check if TX timestamp is enabled */
if (sk->sk_type == SOCK_DGRAM)
sock_tx_timestamp(sk, &tx_flags);
/*
* Let's try using as much space as possible.
* Use MTU if total length of the message fits into the MTU.
* Otherwise, we need to reserve fragment header and
* fragment alignment (= 8-15 octets, in total).
*
* Note that we may need to "move" the data from the tail
* of the buffer to the new fragment when we split
* the message.
*
* FIXME: It may be fragmented into multiple chunks
* at once if non-fragmentable extension headers
* are too large.
* --yoshfuji
*/
if ((length > mtu) && dontfrag && (sk->sk_protocol == IPPROTO_UDP ||
sk->sk_protocol == IPPROTO_RAW)) {
ipv6_local_rxpmtu(sk, fl6, mtu-exthdrlen);
return -EMSGSIZE;
}
skb = skb_peek_tail(&sk->sk_write_queue);
cork->length += length;
if (((length > mtu) ||
(skb && skb_is_gso(skb))) &&
(sk->sk_protocol == IPPROTO_UDP) &&
(rt->dst.dev->features & NETIF_F_UFO)) {
err = ip6_ufo_append_data(sk, getfrag, from, length,
hh_len, fragheaderlen,
transhdrlen, mtu, flags, rt);
if (err)
goto error;
return 0;
}
if (!skb)
goto alloc_new_skb;
while (length > 0) {
/* Check if the remaining data fits into current packet. */
copy = (cork->length <= mtu && !(cork->flags & IPCORK_ALLFRAG) ? mtu : maxfraglen) - skb->len;
if (copy < length)
copy = maxfraglen - skb->len;
if (copy <= 0) {
char *data;
unsigned int datalen;
unsigned int fraglen;
unsigned int fraggap;
unsigned int alloclen;
alloc_new_skb:
/* There's no room in the current skb */
if (skb)
fraggap = skb->len - maxfraglen;
else
fraggap = 0;
/* update mtu and maxfraglen if necessary */
if (skb == NULL || skb_prev == NULL)
ip6_append_data_mtu(&mtu, &maxfraglen,
fragheaderlen, skb, rt,
np->pmtudisc ==
IPV6_PMTUDISC_PROBE);
skb_prev = skb;
/*
* If remaining data exceeds the mtu,
* we know we need more fragment(s).
*/
datalen = length + fraggap;
if (datalen > (cork->length <= mtu && !(cork->flags & IPCORK_ALLFRAG) ? mtu : maxfraglen) - fragheaderlen)
datalen = maxfraglen - fragheaderlen - rt->dst.trailer_len;
if ((flags & MSG_MORE) &&
!(rt->dst.dev->features&NETIF_F_SG))
alloclen = mtu;
else
alloclen = datalen + fragheaderlen;
alloclen += dst_exthdrlen;
if (datalen != length + fraggap) {
/*
* this is not the last fragment, the trailer
* space is regarded as data space.
*/
datalen += rt->dst.trailer_len;
}
alloclen += rt->dst.trailer_len;
fraglen = datalen + fragheaderlen;
/*
* We just reserve space for fragment header.
* Note: this may be overallocation if the message
* (without MSG_MORE) fits into the MTU.
*/
alloclen += sizeof(struct frag_hdr);
if (transhdrlen) {
skb = sock_alloc_send_skb(sk,
alloclen + hh_len,
(flags & MSG_DONTWAIT), &err);
} else {
skb = NULL;
if (atomic_read(&sk->sk_wmem_alloc) <=
2 * sk->sk_sndbuf)
skb = sock_wmalloc(sk,
alloclen + hh_len, 1,
sk->sk_allocation);
if (unlikely(skb == NULL))
err = -ENOBUFS;
else {
/* Only the initial fragment
* is time stamped.
*/
tx_flags = 0;
}
}
if (skb == NULL)
goto error;
/*
* Fill in the control structures
*/
skb->protocol = htons(ETH_P_IPV6);
skb->ip_summed = CHECKSUM_NONE;
skb->csum = 0;
/* reserve for fragmentation and ipsec header */
skb_reserve(skb, hh_len + sizeof(struct frag_hdr) +
dst_exthdrlen);
if (sk->sk_type == SOCK_DGRAM)
skb_shinfo(skb)->tx_flags = tx_flags;
/*
* Find where to start putting bytes
*/
data = skb_put(skb, fraglen);
skb_set_network_header(skb, exthdrlen);
data += fragheaderlen;
skb->transport_header = (skb->network_header +
fragheaderlen);
if (fraggap) {
skb->csum = skb_copy_and_csum_bits(
skb_prev, maxfraglen,
data + transhdrlen, fraggap, 0);
skb_prev->csum = csum_sub(skb_prev->csum,
skb->csum);
data += fraggap;
pskb_trim_unique(skb_prev, maxfraglen);
}
copy = datalen - transhdrlen - fraggap;
if (copy < 0) {
err = -EINVAL;
kfree_skb(skb);
goto error;
} else if (copy > 0 && getfrag(from, data + transhdrlen, offset, copy, fraggap, skb) < 0) {
err = -EFAULT;
kfree_skb(skb);
goto error;
}
offset += copy;
length -= datalen - fraggap;
transhdrlen = 0;
exthdrlen = 0;
dst_exthdrlen = 0;
/*
* Put the packet on the pending queue
*/
__skb_queue_tail(&sk->sk_write_queue, skb);
continue;
}
if (copy > length)
copy = length;
if (!(rt->dst.dev->features&NETIF_F_SG)) {
unsigned int off;
off = skb->len;
if (getfrag(from, skb_put(skb, copy),
offset, copy, off, skb) < 0) {
__skb_trim(skb, off);
err = -EFAULT;
goto error;
}
} else {
int i = skb_shinfo(skb)->nr_frags;
struct page_frag *pfrag = sk_page_frag(sk);
err = -ENOMEM;
if (!sk_page_frag_refill(sk, pfrag))
goto error;
if (!skb_can_coalesce(skb, i, pfrag->page,
pfrag->offset)) {
err = -EMSGSIZE;
if (i == MAX_SKB_FRAGS)
goto error;
__skb_fill_page_desc(skb, i, pfrag->page,
pfrag->offset, 0);
skb_shinfo(skb)->nr_frags = ++i;
get_page(pfrag->page);
}
copy = min_t(int, copy, pfrag->size - pfrag->offset);
if (getfrag(from,
page_address(pfrag->page) + pfrag->offset,
offset, copy, skb->len, skb) < 0)
goto error_efault;
pfrag->offset += copy;
skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
skb->len += copy;
skb->data_len += copy;
skb->truesize += copy;
atomic_add(copy, &sk->sk_wmem_alloc);
}
offset += copy;
length -= copy;
}
return 0;
error_efault:
err = -EFAULT;
error:
cork->length -= length;
IP6_INC_STATS(sock_net(sk), rt->rt6i_idev, IPSTATS_MIB_OUTDISCARDS);
return err;
}
| [[1221, "\tif ((length > mtu) && dontfrag && (sk->sk_protocol == IPPROTO_UDP ||\n"], [1222, "\t\t\t\t\t sk->sk_protocol == IPPROTO_RAW)) {\n"], [1223, "\t\tipv6_local_rxpmtu(sk, fl6, mtu-exthdrlen);\n"], [1224, "\t\treturn -EMSGSIZE;\n"], [1225, "\t}\n"], [1227, "\tskb = skb_peek_tail(&sk->sk_write_queue);\n"], [1228, "\tcork->length += length;\n"], [1229, "\tif (((length > mtu) ||\n"], [1230, "\t (skb && skb_is_gso(skb))) &&\n"], [1231, "\t (sk->sk_protocol == IPPROTO_UDP) &&\n"], [1232, "\t (rt->dst.dev->features & NETIF_F_UFO)) {\n"], [1233, "\t\terr = ip6_ufo_append_data(sk, getfrag, from, length,\n"], [1234, "\t\t\t\t\t hh_len, fragheaderlen,\n"], [1235, "\t\t\t\t\t transhdrlen, mtu, flags, rt);\n"], [1236, "\t\tif (err)\n"], [1237, "\t\t\tgoto error;\n"], [1238, "\t\treturn 0;\n"], [1241, "\tif (!skb)\n"]] | [[1221, "if ((length > mtu) && dontfrag && (sk->sk_protocol == IPPROTO_UDP ||\n\t\t\t\t\t sk->sk_protocol == IPPROTO_RAW))"], [1223, "ipv6_local_rxpmtu(sk, fl6, mtu-exthdrlen);"], [1224, "return -EMSGSIZE;"], [1225, "\t}\n"], [1227, "skb = skb_peek_tail(&sk->sk_write_queue);"], [1228, "cork->length += length;"], [1229, "if (((length > mtu) ||\n\t (skb && skb_is_gso(skb))) &&\n\t (sk->sk_protocol == IPPROTO_UDP) &&\n\t (rt->dst.dev->features & NETIF_F_UFO))"], [1233, "err = ip6_ufo_append_data(sk, getfrag, from, length,\n\t\t\t\t\t hh_len, fragheaderlen,\n\t\t\t\t\t transhdrlen, mtu, flags, rt);"], [1236, "if (err)"], [1237, "goto error;"], [1238, "return 0;"], [1241, "if (!skb)"]] | [
"CVE-2013-4387"
] | [
"CWE-119"
] | 52 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"skb_is_gso",
"ip6_ufo_append_data",
"skb_peek_tail"
],
"Function Argument": [
"struct sock *sk",
"struct ipv6_txoptions *opt",
"struct flowi6 *fl6",
"struct rt6_info *rt",
"int dontfrag"
],
"Globals": [],
"Type Execution Declaration": [
"struct sock",
"struct rt6_info",
"struct sk_buff"
]
} |
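A sketch of the dispatch predicate the fixed ip6_append_data in the row above uses: a datagram takes the UFO path not only when it exceeds the mtu but also when the queued tail skb is already GSO, so follow-up packets land on the same UFO frame. The booleans below are userspace stand-ins for the kernel checks.

#include <stdbool.h>
#include <stdio.h>

static bool use_ufo(unsigned int length, unsigned int mtu,
		    bool tail_is_gso, bool proto_udp, bool dev_has_ufo)
{
	return (length > mtu || tail_is_gso) && proto_udp && dev_has_ufo;
}

int main(void)
{
	/* First datagram: oversized, so it starts a GSO frame. */
	printf("first  -> ufo path: %d\n", use_ufo(2000, 1500, false, true, true));
	/* Second datagram: small, but the queued tail is GSO, so still UFO. */
	printf("second -> ufo path: %d\n", use_ufo(100, 1500, true, true, true));
	return 0;
}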
53 | linux | https://github.com/torvalds/linux | kernel/bpf/devmap.c | 281d464a34f540de166cee74b723e97ac2515ec3 | bpf: Fix DEVMAP_HASH overflow check on 32-bit arches
The devmap code allocates a number of hash buckets equal to the next power
of two of the max_entries value provided when creating the map. When
rounding up to the next power of two, the 32-bit variable storing the
number of buckets can overflow, and the code checks for overflow by
checking if the truncated 32-bit value is equal to 0. However, on 32-bit
arches the rounding up itself can overflow mid-way through, because it
ends up doing a left-shift of 32 bits on an unsigned long value. If the
size of an unsigned long is four bytes, this is undefined behaviour, so
there is no guarantee that we'll end up with a nice and tidy 0-value at
the end.
Syzbot managed to turn this into a crash on arm32 by creating a
DEVMAP_HASH with max_entries > 0x80000000 and then trying to update it.
Fix this by moving the overflow check to before the rounding up
operation.
Fixes: 6f9d451ab1a3 ("xdp: Add devmap_hash map type for looking up devices by hashed index")
Link: https://lore.kernel.org/r/000000000000ed666a0611af6818@google.com
Reported-and-tested-by: syzbot+8cd36f6b65f3cafd400a@syzkaller.appspotmail.com
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Message-ID: <20240307120340.99577-2-toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org> | true | 5ccca320aeb25f8c7892ddc54d010fe5 | dev_map_init_map | static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr)
{
u32 valsize = attr->value_size;
/* check sanity of attributes. 2 value sizes supported:
* 4 bytes: ifindex
* 8 bytes: ifindex + prog fd
*/
if (attr->max_entries == 0 || attr->key_size != 4 ||
(valsize != offsetofend(struct bpf_devmap_val, ifindex) &&
valsize != offsetofend(struct bpf_devmap_val, bpf_prog.fd)) ||
attr->map_flags & ~DEV_CREATE_FLAG_MASK)
return -EINVAL;
/* Lookup returns a pointer straight to dev->ifindex, so make sure the
* verifier prevents writes from the BPF side
*/
attr->map_flags |= BPF_F_RDONLY_PROG;
bpf_map_init_from_attr(&dtab->map, attr);
if (attr->map_type == BPF_MAP_TYPE_DEVMAP_HASH) {
dtab->n_buckets = roundup_pow_of_two(dtab->map.max_entries);
if (!dtab->n_buckets) /* Overflow check */
return -EINVAL;
}
if (attr->map_type == BPF_MAP_TYPE_DEVMAP_HASH) {
dtab->dev_index_head = dev_map_create_hash(dtab->n_buckets,
dtab->map.numa_node);
if (!dtab->dev_index_head)
return -ENOMEM;
spin_lock_init(&dtab->index_lock);
} else {
dtab->netdev_map = bpf_map_area_alloc((u64) dtab->map.max_entries *
sizeof(struct bpf_dtab_netdev *),
dtab->map.numa_node);
if (!dtab->netdev_map)
return -ENOMEM;
}
return 0;
}
| [[133, "\t\tdtab->n_buckets = roundup_pow_of_two(dtab->map.max_entries);\n"], [134, "\n"], [135, "\t\tif (!dtab->n_buckets) /* Overflow check */\n"], [137, "\t}\n"], [139, "\tif (attr->map_type == BPF_MAP_TYPE_DEVMAP_HASH) {\n"]] | [[133, "dtab->n_buckets = roundup_pow_of_two(dtab->map.max_entries);"], [134, "\n"], [135, "if (!dtab->n_buckets) /* Overflow check */"], [137, "\t}\n"], [139, "if (attr->map_type == BPF_MAP_TYPE_DEVMAP_HASH)"]] | [
"CVE-2024-26885"
] | [
"CWE-119"
] | 54 | {
"Execution Environment": [
"Size of unsigned long on the target architecture (32-bit vs 64-bit)"
],
"Explanation": null,
"External Function": [
"roundup_pow_of_two"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
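A userspace illustration of the 32-bit hazard in this row: the kernel's roundup_pow_of_two() boils down to 1UL << fls_long(n - 1), and for max_entries > 0x80000000 that shift count reaches the full width of a 32-bit unsigned long, which is undefined behaviour, so the "== 0" overflow check above cannot be trusted. The helper below models the macro; compiling with -m32 typically shows the misbehaviour, though, being UB, no particular result is guaranteed.

#include <stdio.h>

/* Rough model of roundup_pow_of_two(): 1UL << fls_long(n - 1). */
static unsigned long roundup_pow_of_two_model(unsigned long n)
{
	unsigned long v = n - 1;
	int bits = 0;

	while (v) {		/* stand-in for fls_long(n - 1) */
		bits++;
		v >>= 1;
	}
	/* bits == 32 when n > 0x80000000 and long is 32 bits wide: UB. */
	return 1UL << bits;
}

int main(void)
{
	unsigned long max_entries = 0x80000001ul;	/* syzbot-style input */

	printf("sizeof(long) = %zu\n", sizeof(long));
	printf("rounded = %#lx\n", roundup_pow_of_two_model(max_entries));
	return 0;
}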
54 | linux | https://github.com/torvalds/linux | kernel/bpf/devmap.c | 281d464a34f540de166cee74b723e97ac2515ec3 | bpf: Fix DEVMAP_HASH overflow check on 32-bit arches
The devmap code allocates a number of hash buckets equal to the next power
of two of the max_entries value provided when creating the map. When
rounding up to the next power of two, the 32-bit variable storing the
number of buckets can overflow, and the code checks for overflow by
checking if the truncated 32-bit value is equal to 0. However, on 32-bit
arches the rounding up itself can overflow mid-way through, because it
ends up doing a left-shift of 32 bits on an unsigned long value. If the
size of an unsigned long is four bytes, this is undefined behaviour, so
there is no guarantee that we'll end up with a nice and tidy 0-value at
the end.
Syzbot managed to turn this into a crash on arm32 by creating a
DEVMAP_HASH with max_entries > 0x80000000 and then trying to update it.
Fix this by moving the overflow check to before the rounding up
operation.
Fixes: 6f9d451ab1a3 ("xdp: Add devmap_hash map type for looking up devices by hashed index")
Link: https://lore.kernel.org/r/000000000000ed666a0611af6818@google.com
Reported-and-tested-by: syzbot+8cd36f6b65f3cafd400a@syzkaller.appspotmail.com
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Message-ID: <20240307120340.99577-2-toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org> | false | e8f9ecae3cc00ea898e1a067d4768776 | dev_map_init_map | static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr)
{
u32 valsize = attr->value_size;
/* check sanity of attributes. 2 value sizes supported:
* 4 bytes: ifindex
* 8 bytes: ifindex + prog fd
*/
if (attr->max_entries == 0 || attr->key_size != 4 ||
(valsize != offsetofend(struct bpf_devmap_val, ifindex) &&
valsize != offsetofend(struct bpf_devmap_val, bpf_prog.fd)) ||
attr->map_flags & ~DEV_CREATE_FLAG_MASK)
return -EINVAL;
/* Lookup returns a pointer straight to dev->ifindex, so make sure the
* verifier prevents writes from the BPF side
*/
attr->map_flags |= BPF_F_RDONLY_PROG;
bpf_map_init_from_attr(&dtab->map, attr);
if (attr->map_type == BPF_MAP_TYPE_DEVMAP_HASH) {
/* hash table size must be power of 2; roundup_pow_of_two() can
* overflow into UB on 32-bit arches, so check that first
*/
if (dtab->map.max_entries > 1UL << 31)
return -EINVAL;
dtab->n_buckets = roundup_pow_of_two(dtab->map.max_entries);
dtab->dev_index_head = dev_map_create_hash(dtab->n_buckets,
dtab->map.numa_node);
if (!dtab->dev_index_head)
return -ENOMEM;
spin_lock_init(&dtab->index_lock);
} else {
dtab->netdev_map = bpf_map_area_alloc((u64) dtab->map.max_entries *
sizeof(struct bpf_dtab_netdev *),
dtab->map.numa_node);
if (!dtab->netdev_map)
return -ENOMEM;
}
return 0;
}
| [[133, "\t\t/* hash table size must be power of 2; roundup_pow_of_two() can\n"], [134, "\t\t * overflow into UB on 32-bit arches, so check that first\n"], [135, "\t\t */\n"], [136, "\t\tif (dtab->map.max_entries > 1UL << 31)\n"], [139, "\t\tdtab->n_buckets = roundup_pow_of_two(dtab->map.max_entries);\n"], [140, "\n"]] | [[133, "/* hash table size must be power of 2; roundup_pow_of_two() can\n\t\t * overflow into UB on 32-bit arches, so check that first\n\t\t */"], [136, "if (dtab->map.max_entries > 1UL << 31)"], [139, "dtab->n_buckets = roundup_pow_of_two(dtab->map.max_entries);"], [140, "\n"]] | [
"CVE-2024-26885"
] | [
"CWE-119"
] | 54 | {
"Execution Environment": [
"Size of unsigned long on the target architecture (32-bit vs 64-bit)"
],
"Explanation": null,
"External Function": [
"roundup_pow_of_two"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
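The fix's ordering restated as a standalone helper, assuming the same model of roundup_pow_of_two() as the previous sketch: reject anything above 2^31 (and zero, which the real code rejects earlier) before rounding, so the shift count never reaches the word width.

#include <errno.h>
#include <stdio.h>

static int n_buckets_checked(unsigned long max_entries, unsigned long *out)
{
	unsigned long v;
	int bits = 0;

	/* Check first, exactly as the fixed kernel code does ... */
	if (!max_entries || max_entries > 1UL << 31)
		return -EINVAL;
	/* ... then rounding up is safe: the shift count stays <= 31. */
	for (v = max_entries - 1; v; v >>= 1)
		bits++;
	*out = 1UL << bits;
	return 0;
}

int main(void)
{
	unsigned long n;

	printf("0x80000001 -> %d\n", n_buckets_checked(0x80000001ul, &n));
	if (!n_buckets_checked(3000, &n))
		printf("3000 -> %lu buckets\n", n);	/* rounds up to 4096 */
	return 0;
}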
55 | linux | https://github.com/torvalds/linux | net/netlabel/netlabel_cipso_v4.c | 2a2f11c227bdf292b3a2900ad04139d301b56ac4 | NetLabel: correct CIPSO tag handling when adding new DOI definitions
The current netlbl_cipsov4_add_common() function has two problems which are
fixed with this patch. The first is an off-by-one bug where it is possible to
overflow the doi_def->tags[] array. The second is a bug where the same
doi_def->tags[] array was not always fully initialized, which caused sporadic
failures.
Signed-off-by: Paul Moore <paul.moore@hp.com>
Signed-off-by: James Morris <jmorris@namei.org> | true | 4c9fd8131014712f7144f68bd5c6ddd7 | netlbl_cipsov4_add_common | static int netlbl_cipsov4_add_common(struct genl_info *info,
struct cipso_v4_doi *doi_def)
{
struct nlattr *nla;
int nla_rem;
u32 iter = 0;
doi_def->doi = nla_get_u32(info->attrs[NLBL_CIPSOV4_A_DOI]);
if (nla_validate_nested(info->attrs[NLBL_CIPSOV4_A_TAGLST],
NLBL_CIPSOV4_A_MAX,
netlbl_cipsov4_genl_policy) != 0)
return -EINVAL;
nla_for_each_nested(nla, info->attrs[NLBL_CIPSOV4_A_TAGLST], nla_rem)
if (nla->nla_type == NLBL_CIPSOV4_A_TAG) {
if (iter > CIPSO_V4_TAG_MAXCNT)
return -EINVAL;
doi_def->tags[iter++] = nla_get_u8(nla);
}
if (iter < CIPSO_V4_TAG_MAXCNT)
doi_def->tags[iter] = CIPSO_V4_TAG_INVALID;
return 0;
}
| [[133, "\t\t\tif (iter > CIPSO_V4_TAG_MAXCNT)\n"], [137, "\tif (iter < CIPSO_V4_TAG_MAXCNT)\n"], [138, "\t\tdoi_def->tags[iter] = CIPSO_V4_TAG_INVALID;\n"]] | [[133, "if (iter > CIPSO_V4_TAG_MAXCNT)"], [137, "if (iter < CIPSO_V4_TAG_MAXCNT)"], [138, "doi_def->tags[iter] = CIPSO_V4_TAG_INVALID;"]] | [
"CVE-2007-6762"
] | [
"CWE-119"
] | 56 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [
"struct cipso_v4_doi *doi_def"
],
"Globals": [
"CIPSO_V4_TAG_MAXCNT"
],
"Type Execution Declaration": [
"struct cipso_v4_doi"
]
} |
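The off-by-one from this row, distilled: with a CIPSO_V4_TAG_MAXCNT-sized array, testing iter > CIPSO_V4_TAG_MAXCNT before tags[iter++] = ... still lets a write land at index MAXCNT, one past the end. The sketch below keeps the buggy bound but replaces the actual out-of-bounds store with a report, so the demo itself stays well defined; the constant 5 matches the kernel's value but is restated here as an assumption.

#include <stdio.h>

#define CIPSO_V4_TAG_MAXCNT 5	/* tags[] has MAXCNT slots: indices 0..4 */

int main(void)
{
	unsigned char tags[CIPSO_V4_TAG_MAXCNT] = { 0 };
	unsigned int iter = 0;

	for (int attr = 0; attr < 6; attr++) {	/* a sender supplies 6 tags */
		if (iter > CIPSO_V4_TAG_MAXCNT)	/* buggy bound: should be >= */
			break;
		if (iter >= CIPSO_V4_TAG_MAXCNT)	/* demo-only guard */
			printf("bound check let iter=%u through: out-of-bounds write\n",
			       iter);
		else
			tags[iter] = (unsigned char)attr;
		iter++;
	}
	printf("loop accepted %u writes for a %d-slot array\n",
	       iter, CIPSO_V4_TAG_MAXCNT);
	return 0;
}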
56 | linux | https://github.com/torvalds/linux | net/netlabel/netlabel_cipso_v4.c | 2a2f11c227bdf292b3a2900ad04139d301b56ac4 | NetLabel: correct CIPSO tag handling when adding new DOI definitions
The current netlbl_cipsov4_add_common() function has two problems which are
fixed with this patch. The first is an off-by-one bug where it is possible to
overflow the doi_def->tags[] array. The second is a bug where the same
doi_def->tags[] array was not always fully initialized, which caused sporadic
failures.
Signed-off-by: Paul Moore <paul.moore@hp.com>
Signed-off-by: James Morris <jmorris@namei.org> | false | d04467fa7dd237b22455c6cb93a5f448 | netlbl_cipsov4_add_common | static int netlbl_cipsov4_add_common(struct genl_info *info,
struct cipso_v4_doi *doi_def)
{
struct nlattr *nla;
int nla_rem;
u32 iter = 0;
doi_def->doi = nla_get_u32(info->attrs[NLBL_CIPSOV4_A_DOI]);
if (nla_validate_nested(info->attrs[NLBL_CIPSOV4_A_TAGLST],
NLBL_CIPSOV4_A_MAX,
netlbl_cipsov4_genl_policy) != 0)
return -EINVAL;
nla_for_each_nested(nla, info->attrs[NLBL_CIPSOV4_A_TAGLST], nla_rem)
if (nla->nla_type == NLBL_CIPSOV4_A_TAG) {
if (iter >= CIPSO_V4_TAG_MAXCNT)
return -EINVAL;
doi_def->tags[iter++] = nla_get_u8(nla);
}
while (iter < CIPSO_V4_TAG_MAXCNT)
doi_def->tags[iter++] = CIPSO_V4_TAG_INVALID;
return 0;
}
| [[133, "\t\t\tif (iter >= CIPSO_V4_TAG_MAXCNT)\n"], [137, "\twhile (iter < CIPSO_V4_TAG_MAXCNT)\n"], [138, "\t\tdoi_def->tags[iter++] = CIPSO_V4_TAG_INVALID;\n"]] | [[133, "if (iter >= CIPSO_V4_TAG_MAXCNT)"], [137, "while (iter < CIPSO_V4_TAG_MAXCNT)"], [138, "doi_def->tags[iter++] = CIPSO_V4_TAG_INVALID;"]] | [
"CVE-2007-6762"
] | [
"CWE-119"
] | 56 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [
"struct cipso_v4_doi *doi_def"
],
"Globals": [
"CIPSO_V4_TAG_MAXCNT"
],
"Type Execution Declaration": [
"struct cipso_v4_doi"
]
} |
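A sketch of the second flaw the fixed row addresses: terminating the array with a single if pads one slot and leaves anything after it stale, while the fixed while loop pads every remaining slot with the invalid marker. The constants and the 0xAA fill below are illustrative stand-ins.

#include <stdio.h>
#include <string.h>

#define TAG_MAXCNT 5
#define TAG_INVALID 0

static void show(const char *label, const unsigned char *tags)
{
	printf("%-5s:", label);
	for (int i = 0; i < TAG_MAXCNT; i++)
		printf(" %u", (unsigned int)tags[i]);
	printf("\n");
}

int main(void)
{
	unsigned char tags[TAG_MAXCNT];
	unsigned int iter = 1, i;	/* the caller supplied one valid tag */

	memset(tags, 0xAA, sizeof(tags));	/* stand-in for stale memory */
	tags[0] = 7;

	i = iter;
	if (i < TAG_MAXCNT)		/* old behaviour: pads one slot only */
		tags[i] = TAG_INVALID;
	show("if", tags);		/* slots 2..4 keep the stale 0xAA */

	i = iter;
	while (i < TAG_MAXCNT)		/* fixed behaviour: pads every slot */
		tags[i++] = TAG_INVALID;
	show("while", tags);
	return 0;
}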
57 | linux | https://github.com/torvalds/linux | net/packet/af_packet.c | 2b6867c2ce76c596676bec7d2d525af525fdc6e2 | net/packet: fix overflow in check for priv area size
Subtracting tp_sizeof_priv from tp_block_size and casting to int
to check whether one is less than the other doesn't always work
(both of them are unsigned ints).
Compare them as is instead.
Also cast tp_sizeof_priv to u64 before using BLK_PLUS_PRIV, as
it can overflow inside BLK_PLUS_PRIV otherwise.
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | true | 444a6ab864c27b2bc29b69291705f480 | packet_set_ring | static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
int closing, int tx_ring)
{
struct pgv *pg_vec = NULL;
struct packet_sock *po = pkt_sk(sk);
int was_running, order = 0;
struct packet_ring_buffer *rb;
struct sk_buff_head *rb_queue;
__be16 num;
int err = -EINVAL;
/* Added to avoid minimal code churn */
struct tpacket_req *req = &req_u->req;
lock_sock(sk);
rb = tx_ring ? &po->tx_ring : &po->rx_ring;
rb_queue = tx_ring ? &sk->sk_write_queue : &sk->sk_receive_queue;
err = -EBUSY;
if (!closing) {
if (atomic_read(&po->mapped))
goto out;
if (packet_read_pending(rb))
goto out;
}
if (req->tp_block_nr) {
/* Sanity tests and some calculations */
err = -EBUSY;
if (unlikely(rb->pg_vec))
goto out;
switch (po->tp_version) {
case TPACKET_V1:
po->tp_hdrlen = TPACKET_HDRLEN;
break;
case TPACKET_V2:
po->tp_hdrlen = TPACKET2_HDRLEN;
break;
case TPACKET_V3:
po->tp_hdrlen = TPACKET3_HDRLEN;
break;
}
err = -EINVAL;
if (unlikely((int)req->tp_block_size <= 0))
goto out;
if (unlikely(!PAGE_ALIGNED(req->tp_block_size)))
goto out;
if (po->tp_version >= TPACKET_V3 &&
(int)(req->tp_block_size -
BLK_PLUS_PRIV(req_u->req3.tp_sizeof_priv)) <= 0)
goto out;
if (unlikely(req->tp_frame_size < po->tp_hdrlen +
po->tp_reserve))
goto out;
if (unlikely(req->tp_frame_size & (TPACKET_ALIGNMENT - 1)))
goto out;
rb->frames_per_block = req->tp_block_size / req->tp_frame_size;
if (unlikely(rb->frames_per_block == 0))
goto out;
if (unlikely((rb->frames_per_block * req->tp_block_nr) !=
req->tp_frame_nr))
goto out;
err = -ENOMEM;
order = get_order(req->tp_block_size);
pg_vec = alloc_pg_vec(req, order);
if (unlikely(!pg_vec))
goto out;
switch (po->tp_version) {
case TPACKET_V3:
/* Block transmit is not supported yet */
if (!tx_ring) {
init_prb_bdqc(po, rb, pg_vec, req_u);
} else {
struct tpacket_req3 *req3 = &req_u->req3;
if (req3->tp_retire_blk_tov ||
req3->tp_sizeof_priv ||
req3->tp_feature_req_word) {
err = -EINVAL;
goto out;
}
}
break;
default:
break;
}
}
/* Done */
else {
err = -EINVAL;
if (unlikely(req->tp_frame_nr))
goto out;
}
/* Detach socket from network */
spin_lock(&po->bind_lock);
was_running = po->running;
num = po->num;
if (was_running) {
po->num = 0;
__unregister_prot_hook(sk, false);
}
spin_unlock(&po->bind_lock);
synchronize_net();
err = -EBUSY;
mutex_lock(&po->pg_vec_lock);
if (closing || atomic_read(&po->mapped) == 0) {
err = 0;
spin_lock_bh(&rb_queue->lock);
swap(rb->pg_vec, pg_vec);
rb->frame_max = (req->tp_frame_nr - 1);
rb->head = 0;
rb->frame_size = req->tp_frame_size;
spin_unlock_bh(&rb_queue->lock);
swap(rb->pg_vec_order, order);
swap(rb->pg_vec_len, req->tp_block_nr);
rb->pg_vec_pages = req->tp_block_size/PAGE_SIZE;
po->prot_hook.func = (po->rx_ring.pg_vec) ?
tpacket_rcv : packet_rcv;
skb_queue_purge(rb_queue);
if (atomic_read(&po->mapped))
pr_err("packet_mmap: vma is busy: %d\n",
atomic_read(&po->mapped));
}
mutex_unlock(&po->pg_vec_lock);
spin_lock(&po->bind_lock);
if (was_running) {
po->num = num;
register_prot_hook(sk);
}
spin_unlock(&po->bind_lock);
if (closing && (po->tp_version > TPACKET_V2)) {
/* Because we don't support block-based V3 on tx-ring */
if (!tx_ring)
prb_shutdown_retire_blk_timer(po, rb_queue);
}
if (pg_vec)
free_pg_vec(pg_vec, order, req->tp_block_nr);
out:
release_sock(sk);
return err;
}
| [[4196, "\t\t (int)(req->tp_block_size -\n"], [4197, "\t\t\t BLK_PLUS_PRIV(req_u->req3.tp_sizeof_priv)) <= 0)\n"]] | [[4195, "if (po->tp_version >= TPACKET_V3 &&\n\t\t (int)(req->tp_block_size -\n\t\t\t BLK_PLUS_PRIV(req_u->req3.tp_sizeof_priv)) <= 0)"]] | [
"CVE-2017-7308"
] | [
"CWE-787"
] | 58 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"BLK_PLUS_PRIV"
],
"Function Argument": [
"req_u"
],
"Globals": [],
"Type Execution Declaration": []
} |
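
The row above keeps the vulnerable packet_set_ring(), whose `(int)(req->tp_block_size - BLK_PLUS_PRIV(...)) <= 0` test is meant to reject a priv area that does not fit in a block. Why subtraction plus an int cast is not a valid ordering test for unsigned operands reproduces in a few lines of user-space C; the constants below are illustrative, not taken from any exploit:

#include <stdio.h>

int main(void)
{
	unsigned int block_size = 4096;       /* attacker-controlled in the real bug */
	unsigned int priv_area = 0xffffff00u; /* oversized tp_sizeof_priv-like value */

	/* Unsigned subtraction wraps modulo 2^32: 4096 - 0xffffff00 is 4352,
	 * a small positive number, so the "(int)(...) <= 0" rejection never
	 * fires even though the priv area is vastly larger than the block. */
	if ((int)(block_size - priv_area) <= 0)
		printf("rejected\n");
	else
		printf("accepted despite priv_area > block_size (wraparound)\n");

	/* Comparing the unsigned values as-is behaves as intended. */
	if (block_size <= priv_area)
		printf("rejected by the direct comparison\n");
	return 0;
}
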
58 | linux | https://github.com/torvalds/linux | net/packet/af_packet.c | 2b6867c2ce76c596676bec7d2d525af525fdc6e2 | net/packet: fix overflow in check for priv area size
Subtracting tp_sizeof_priv from tp_block_size and casting to int
to check whether one is less than the other doesn't always work
(both of them are unsigned ints).
Compare them as is instead.
Also cast tp_sizeof_priv to u64 before using BLK_PLUS_PRIV, as
it can overflow inside BLK_PLUS_PRIV otherwise.
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | false | d6c8f0d1d148fbd942e8b6aa7987ec8f | packet_set_ring | static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
int closing, int tx_ring)
{
struct pgv *pg_vec = NULL;
struct packet_sock *po = pkt_sk(sk);
int was_running, order = 0;
struct packet_ring_buffer *rb;
struct sk_buff_head *rb_queue;
__be16 num;
int err = -EINVAL;
/* Added to avoid minimal code churn */
struct tpacket_req *req = &req_u->req;
lock_sock(sk);
rb = tx_ring ? &po->tx_ring : &po->rx_ring;
rb_queue = tx_ring ? &sk->sk_write_queue : &sk->sk_receive_queue;
err = -EBUSY;
if (!closing) {
if (atomic_read(&po->mapped))
goto out;
if (packet_read_pending(rb))
goto out;
}
if (req->tp_block_nr) {
/* Sanity tests and some calculations */
err = -EBUSY;
if (unlikely(rb->pg_vec))
goto out;
switch (po->tp_version) {
case TPACKET_V1:
po->tp_hdrlen = TPACKET_HDRLEN;
break;
case TPACKET_V2:
po->tp_hdrlen = TPACKET2_HDRLEN;
break;
case TPACKET_V3:
po->tp_hdrlen = TPACKET3_HDRLEN;
break;
}
err = -EINVAL;
if (unlikely((int)req->tp_block_size <= 0))
goto out;
if (unlikely(!PAGE_ALIGNED(req->tp_block_size)))
goto out;
if (po->tp_version >= TPACKET_V3 &&
req->tp_block_size <=
BLK_PLUS_PRIV((u64)req_u->req3.tp_sizeof_priv))
goto out;
if (unlikely(req->tp_frame_size < po->tp_hdrlen +
po->tp_reserve))
goto out;
if (unlikely(req->tp_frame_size & (TPACKET_ALIGNMENT - 1)))
goto out;
rb->frames_per_block = req->tp_block_size / req->tp_frame_size;
if (unlikely(rb->frames_per_block == 0))
goto out;
if (unlikely((rb->frames_per_block * req->tp_block_nr) !=
req->tp_frame_nr))
goto out;
err = -ENOMEM;
order = get_order(req->tp_block_size);
pg_vec = alloc_pg_vec(req, order);
if (unlikely(!pg_vec))
goto out;
switch (po->tp_version) {
case TPACKET_V3:
/* Block transmit is not supported yet */
if (!tx_ring) {
init_prb_bdqc(po, rb, pg_vec, req_u);
} else {
struct tpacket_req3 *req3 = &req_u->req3;
if (req3->tp_retire_blk_tov ||
req3->tp_sizeof_priv ||
req3->tp_feature_req_word) {
err = -EINVAL;
goto out;
}
}
break;
default:
break;
}
}
/* Done */
else {
err = -EINVAL;
if (unlikely(req->tp_frame_nr))
goto out;
}
/* Detach socket from network */
spin_lock(&po->bind_lock);
was_running = po->running;
num = po->num;
if (was_running) {
po->num = 0;
__unregister_prot_hook(sk, false);
}
spin_unlock(&po->bind_lock);
synchronize_net();
err = -EBUSY;
mutex_lock(&po->pg_vec_lock);
if (closing || atomic_read(&po->mapped) == 0) {
err = 0;
spin_lock_bh(&rb_queue->lock);
swap(rb->pg_vec, pg_vec);
rb->frame_max = (req->tp_frame_nr - 1);
rb->head = 0;
rb->frame_size = req->tp_frame_size;
spin_unlock_bh(&rb_queue->lock);
swap(rb->pg_vec_order, order);
swap(rb->pg_vec_len, req->tp_block_nr);
rb->pg_vec_pages = req->tp_block_size/PAGE_SIZE;
po->prot_hook.func = (po->rx_ring.pg_vec) ?
tpacket_rcv : packet_rcv;
skb_queue_purge(rb_queue);
if (atomic_read(&po->mapped))
pr_err("packet_mmap: vma is busy: %d\n",
atomic_read(&po->mapped));
}
mutex_unlock(&po->pg_vec_lock);
spin_lock(&po->bind_lock);
if (was_running) {
po->num = num;
register_prot_hook(sk);
}
spin_unlock(&po->bind_lock);
if (closing && (po->tp_version > TPACKET_V2)) {
/* Because we don't support block-based V3 on tx-ring */
if (!tx_ring)
prb_shutdown_retire_blk_timer(po, rb_queue);
}
if (pg_vec)
free_pg_vec(pg_vec, order, req->tp_block_nr);
out:
release_sock(sk);
return err;
}
| [[4196, "\t\t req->tp_block_size <=\n"], [4197, "\t\t\t BLK_PLUS_PRIV((u64)req_u->req3.tp_sizeof_priv))\n"]] | [[4195, "if (po->tp_version >= TPACKET_V3 &&\n\t\t req->tp_block_size <=\n\t\t\t BLK_PLUS_PRIV((u64)req_u->req3.tp_sizeof_priv))"]] | [
"CVE-2017-7308"
] | [
"CWE-787"
] | 58 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"BLK_PLUS_PRIV"
],
"Function Argument": [
"req_u"
],
"Globals": [],
"Type Execution Declaration": []
} |
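
The fixed row compares the unsigned values directly and widens tp_sizeof_priv to u64 before it enters BLK_PLUS_PRIV, so the addition inside the macro cannot wrap either. A sketch of the widening idea, assuming a hypothetical HDR_LEN in place of the macro's real header-plus-alignment arithmetic:

#include <stdio.h>
#include <stdint.h>

#define HDR_LEN 80u /* hypothetical stand-in for the block header length */

/* Widening to 64 bits before the addition means a 32-bit priv size near
 * UINT32_MAX cannot wrap the sum back into a small value. */
static uint64_t blk_plus_priv(uint64_t sizeof_priv)
{
	return HDR_LEN + sizeof_priv;
}

int main(void)
{
	uint32_t block_size = 4096;
	uint32_t priv_size = 0xffffffffu;

	/* 32-bit arithmetic: HDR_LEN + priv_size wraps to 79, small enough
	 * to slip past a "block_size <= result" rejection. */
	printf("32-bit sum wraps to %u\n", HDR_LEN + priv_size);

	/* 64-bit arithmetic: the sum is honest and the check holds. */
	if ((uint64_t)block_size <= blk_plus_priv(priv_size))
		printf("rejected: priv area does not fit in the block\n");
	return 0;
}
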
59 | linux | https://github.com/torvalds/linux | drivers/net/usb/catc.c | 2d6a0e9de03ee658a9adc3bfb2f0ca55dff1e478 | catc: Use heap buffer for memory size test
Allocating USB buffers on the stack is not portable, and no longer
works on x86_64 (with VMAP_STACK enabled by default).
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net> | true | 22f939f3ee45d3ab7a523cf5b27ed1f2 | catc_probe | static int catc_probe(struct usb_interface *intf, const struct usb_device_id *id)
{
struct device *dev = &intf->dev;
struct usb_device *usbdev = interface_to_usbdev(intf);
struct net_device *netdev;
struct catc *catc;
u8 broadcast[ETH_ALEN];
int i, pktsz, ret;
if (usb_set_interface(usbdev,
intf->altsetting->desc.bInterfaceNumber, 1)) {
dev_err(dev, "Can't set altsetting 1.\n");
return -EIO;
}
netdev = alloc_etherdev(sizeof(struct catc));
if (!netdev)
return -ENOMEM;
catc = netdev_priv(netdev);
netdev->netdev_ops = &catc_netdev_ops;
netdev->watchdog_timeo = TX_TIMEOUT;
netdev->ethtool_ops = &ops;
catc->usbdev = usbdev;
catc->netdev = netdev;
spin_lock_init(&catc->tx_lock);
spin_lock_init(&catc->ctrl_lock);
init_timer(&catc->timer);
catc->timer.data = (long) catc;
catc->timer.function = catc_stats_timer;
catc->ctrl_urb = usb_alloc_urb(0, GFP_KERNEL);
catc->tx_urb = usb_alloc_urb(0, GFP_KERNEL);
catc->rx_urb = usb_alloc_urb(0, GFP_KERNEL);
catc->irq_urb = usb_alloc_urb(0, GFP_KERNEL);
if ((!catc->ctrl_urb) || (!catc->tx_urb) ||
(!catc->rx_urb) || (!catc->irq_urb)) {
dev_err(&intf->dev, "No free urbs available.\n");
ret = -ENOMEM;
goto fail_free;
}
/* The F5U011 has the same vendor/product as the netmate but a device version of 0x130 */
if (le16_to_cpu(usbdev->descriptor.idVendor) == 0x0423 &&
le16_to_cpu(usbdev->descriptor.idProduct) == 0xa &&
le16_to_cpu(catc->usbdev->descriptor.bcdDevice) == 0x0130) {
dev_dbg(dev, "Testing for f5u011\n");
catc->is_f5u011 = 1;
atomic_set(&catc->recq_sz, 0);
pktsz = RX_PKT_SZ;
} else {
pktsz = RX_MAX_BURST * (PKT_SZ + 2);
}
usb_fill_control_urb(catc->ctrl_urb, usbdev, usb_sndctrlpipe(usbdev, 0),
NULL, NULL, 0, catc_ctrl_done, catc);
usb_fill_bulk_urb(catc->tx_urb, usbdev, usb_sndbulkpipe(usbdev, 1),
NULL, 0, catc_tx_done, catc);
usb_fill_bulk_urb(catc->rx_urb, usbdev, usb_rcvbulkpipe(usbdev, 1),
catc->rx_buf, pktsz, catc_rx_done, catc);
usb_fill_int_urb(catc->irq_urb, usbdev, usb_rcvintpipe(usbdev, 2),
catc->irq_buf, 2, catc_irq_done, catc, 1);
if (!catc->is_f5u011) {
dev_dbg(dev, "Checking memory size\n");
i = 0x12345678;
catc_write_mem(catc, 0x7a80, &i, 4);
i = 0x87654321;
catc_write_mem(catc, 0xfa80, &i, 4);
catc_read_mem(catc, 0x7a80, &i, 4);
switch (i) {
case 0x12345678:
catc_set_reg(catc, TxBufCount, 8);
catc_set_reg(catc, RxBufCount, 32);
dev_dbg(dev, "64k Memory\n");
break;
default:
dev_warn(&intf->dev,
"Couldn't detect memory size, assuming 32k\n");
case 0x87654321:
catc_set_reg(catc, TxBufCount, 4);
catc_set_reg(catc, RxBufCount, 16);
dev_dbg(dev, "32k Memory\n");
break;
}
dev_dbg(dev, "Getting MAC from SEEROM.\n");
catc_get_mac(catc, netdev->dev_addr);
dev_dbg(dev, "Setting MAC into registers.\n");
for (i = 0; i < 6; i++)
catc_set_reg(catc, StationAddr0 - i, netdev->dev_addr[i]);
dev_dbg(dev, "Filling the multicast list.\n");
eth_broadcast_addr(broadcast);
catc_multicast(broadcast, catc->multicast);
catc_multicast(netdev->dev_addr, catc->multicast);
catc_write_mem(catc, 0xfa80, catc->multicast, 64);
dev_dbg(dev, "Clearing error counters.\n");
for (i = 0; i < 8; i++)
catc_set_reg(catc, EthStats + i, 0);
catc->last_stats = jiffies;
dev_dbg(dev, "Enabling.\n");
catc_set_reg(catc, MaxBurst, RX_MAX_BURST);
catc_set_reg(catc, OpModes, OpTxMerge | OpRxMerge | OpLenInclude | Op3MemWaits);
catc_set_reg(catc, LEDCtrl, LEDLink);
catc_set_reg(catc, RxUnit, RxEnable | RxPolarity | RxMultiCast);
} else {
dev_dbg(dev, "Performing reset\n");
catc_reset(catc);
catc_get_mac(catc, netdev->dev_addr);
dev_dbg(dev, "Setting RX Mode\n");
catc->rxmode[0] = RxEnable | RxPolarity | RxMultiCast;
catc->rxmode[1] = 0;
f5u011_rxmode(catc, catc->rxmode);
}
dev_dbg(dev, "Init done.\n");
printk(KERN_INFO "%s: %s USB Ethernet at usb-%s-%s, %pM.\n",
netdev->name, (catc->is_f5u011) ? "Belkin F5U011" : "CATC EL1210A NetMate",
usbdev->bus->bus_name, usbdev->devpath, netdev->dev_addr);
usb_set_intfdata(intf, catc);
SET_NETDEV_DEV(netdev, &intf->dev);
ret = register_netdev(netdev);
if (ret)
goto fail_clear_intfdata;
return 0;
fail_clear_intfdata:
usb_set_intfdata(intf, NULL);
fail_free:
usb_free_urb(catc->ctrl_urb);
usb_free_urb(catc->tx_urb);
usb_free_urb(catc->rx_urb);
usb_free_urb(catc->irq_urb);
free_netdev(netdev);
return ret;
}
| [[779, "\tint i, pktsz, ret;\n"], [845, "\t\ti = 0x12345678;\n"], [846, "\t\tcatc_write_mem(catc, 0x7a80, &i, 4);\n"], [847, "\t\ti = 0x87654321;\t\n"], [848, "\t\tcatc_write_mem(catc, 0xfa80, &i, 4);\n"], [849, "\t\tcatc_read_mem(catc, 0x7a80, &i, 4);\n"], [851, "\t\tswitch (i) {\n"]] | [[779, "int i, pktsz, ret;"], [845, "i = 0x12345678;"], [846, "catc_write_mem(catc, 0x7a80, &i, 4);"], [847, "i = 0x87654321;"], [848, "catc_write_mem(catc, 0xfa80, &i, 4);"], [849, "catc_read_mem(catc, 0x7a80, &i, 4);"], [851, "switch (i) {"]] | [
"CVE-2017-8070"
] | [
"CWE-119"
] | 60 | {
"Execution Environment": [
"CONFIG_VMAP_STACK"
],
"Explanation": null,
"External Function": [
"catc_write_mem",
"catc_read_mem"
],
"Function Argument": [
"struct usb_interface *intf",
"const struct usb_device_id *id"
],
"Globals": [],
"Type Execution Declaration": []
} |
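
In the vulnerable probe above, &i points into the kernel stack, and catc_write_mem()/catc_read_mem() eventually hand that address to USB transfer routines; with CONFIG_VMAP_STACK the stack is vmalloc'ed and not guaranteed to be usable for DMA. A minimal kernel-style sketch of the heap-buffer pattern, where usb_xfer() is a hypothetical stand-in for the driver's accessors (this is the shape of the fix, not a buildable driver on its own):

#include <linux/types.h>
#include <linux/errno.h>
#include <linux/slab.h>

/* Hypothetical stand-in for catc_write_mem()/catc_read_mem(): the only
 * contract that matters here is that @buf must be heap memory, because
 * it may be handed to the USB core for DMA. */
int usb_xfer(void *buf, size_t len);

static int probe_memsize(void)
{
	u32 *buf;
	int ret = 0;

	buf = kmalloc(sizeof(*buf), GFP_KERNEL); /* DMA-safe, unlike &local */
	if (!buf)
		return -ENOMEM; /* mirrors the fix's "goto fail_free" path */

	*buf = 0x12345678;
	if (usb_xfer(buf, sizeof(*buf)))
		ret = -EIO;

	kfree(buf);
	return ret;
}
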
60 | linux | https://github.com/torvalds/linux | drivers/net/usb/catc.c | 2d6a0e9de03ee658a9adc3bfb2f0ca55dff1e478 | catc: Use heap buffer for memory size test
Allocating USB buffers on the stack is not portable, and no longer
works on x86_64 (with VMAP_STACK enabled by default).
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net> | false | f4d160e73f9201211d263585dd4f503c | catc_probe | static int catc_probe(struct usb_interface *intf, const struct usb_device_id *id)
{
struct device *dev = &intf->dev;
struct usb_device *usbdev = interface_to_usbdev(intf);
struct net_device *netdev;
struct catc *catc;
u8 broadcast[ETH_ALEN];
int pktsz, ret;
if (usb_set_interface(usbdev,
intf->altsetting->desc.bInterfaceNumber, 1)) {
dev_err(dev, "Can't set altsetting 1.\n");
return -EIO;
}
netdev = alloc_etherdev(sizeof(struct catc));
if (!netdev)
return -ENOMEM;
catc = netdev_priv(netdev);
netdev->netdev_ops = &catc_netdev_ops;
netdev->watchdog_timeo = TX_TIMEOUT;
netdev->ethtool_ops = &ops;
catc->usbdev = usbdev;
catc->netdev = netdev;
spin_lock_init(&catc->tx_lock);
spin_lock_init(&catc->ctrl_lock);
init_timer(&catc->timer);
catc->timer.data = (long) catc;
catc->timer.function = catc_stats_timer;
catc->ctrl_urb = usb_alloc_urb(0, GFP_KERNEL);
catc->tx_urb = usb_alloc_urb(0, GFP_KERNEL);
catc->rx_urb = usb_alloc_urb(0, GFP_KERNEL);
catc->irq_urb = usb_alloc_urb(0, GFP_KERNEL);
if ((!catc->ctrl_urb) || (!catc->tx_urb) ||
(!catc->rx_urb) || (!catc->irq_urb)) {
dev_err(&intf->dev, "No free urbs available.\n");
ret = -ENOMEM;
goto fail_free;
}
/* The F5U011 has the same vendor/product as the netmate but a device version of 0x130 */
if (le16_to_cpu(usbdev->descriptor.idVendor) == 0x0423 &&
le16_to_cpu(usbdev->descriptor.idProduct) == 0xa &&
le16_to_cpu(catc->usbdev->descriptor.bcdDevice) == 0x0130) {
dev_dbg(dev, "Testing for f5u011\n");
catc->is_f5u011 = 1;
atomic_set(&catc->recq_sz, 0);
pktsz = RX_PKT_SZ;
} else {
pktsz = RX_MAX_BURST * (PKT_SZ + 2);
}
usb_fill_control_urb(catc->ctrl_urb, usbdev, usb_sndctrlpipe(usbdev, 0),
NULL, NULL, 0, catc_ctrl_done, catc);
usb_fill_bulk_urb(catc->tx_urb, usbdev, usb_sndbulkpipe(usbdev, 1),
NULL, 0, catc_tx_done, catc);
usb_fill_bulk_urb(catc->rx_urb, usbdev, usb_rcvbulkpipe(usbdev, 1),
catc->rx_buf, pktsz, catc_rx_done, catc);
usb_fill_int_urb(catc->irq_urb, usbdev, usb_rcvintpipe(usbdev, 2),
catc->irq_buf, 2, catc_irq_done, catc, 1);
if (!catc->is_f5u011) {
u32 *buf;
int i;
dev_dbg(dev, "Checking memory size\n");
buf = kmalloc(4, GFP_KERNEL);
if (!buf) {
ret = -ENOMEM;
goto fail_free;
}
*buf = 0x12345678;
catc_write_mem(catc, 0x7a80, buf, 4);
*buf = 0x87654321;
catc_write_mem(catc, 0xfa80, buf, 4);
catc_read_mem(catc, 0x7a80, buf, 4);
switch (*buf) {
case 0x12345678:
catc_set_reg(catc, TxBufCount, 8);
catc_set_reg(catc, RxBufCount, 32);
dev_dbg(dev, "64k Memory\n");
break;
default:
dev_warn(&intf->dev,
"Couldn't detect memory size, assuming 32k\n");
case 0x87654321:
catc_set_reg(catc, TxBufCount, 4);
catc_set_reg(catc, RxBufCount, 16);
dev_dbg(dev, "32k Memory\n");
break;
}
kfree(buf);
dev_dbg(dev, "Getting MAC from SEEROM.\n");
catc_get_mac(catc, netdev->dev_addr);
dev_dbg(dev, "Setting MAC into registers.\n");
for (i = 0; i < 6; i++)
catc_set_reg(catc, StationAddr0 - i, netdev->dev_addr[i]);
dev_dbg(dev, "Filling the multicast list.\n");
eth_broadcast_addr(broadcast);
catc_multicast(broadcast, catc->multicast);
catc_multicast(netdev->dev_addr, catc->multicast);
catc_write_mem(catc, 0xfa80, catc->multicast, 64);
dev_dbg(dev, "Clearing error counters.\n");
for (i = 0; i < 8; i++)
catc_set_reg(catc, EthStats + i, 0);
catc->last_stats = jiffies;
dev_dbg(dev, "Enabling.\n");
catc_set_reg(catc, MaxBurst, RX_MAX_BURST);
catc_set_reg(catc, OpModes, OpTxMerge | OpRxMerge | OpLenInclude | Op3MemWaits);
catc_set_reg(catc, LEDCtrl, LEDLink);
catc_set_reg(catc, RxUnit, RxEnable | RxPolarity | RxMultiCast);
} else {
dev_dbg(dev, "Performing reset\n");
catc_reset(catc);
catc_get_mac(catc, netdev->dev_addr);
dev_dbg(dev, "Setting RX Mode\n");
catc->rxmode[0] = RxEnable | RxPolarity | RxMultiCast;
catc->rxmode[1] = 0;
f5u011_rxmode(catc, catc->rxmode);
}
dev_dbg(dev, "Init done.\n");
printk(KERN_INFO "%s: %s USB Ethernet at usb-%s-%s, %pM.\n",
netdev->name, (catc->is_f5u011) ? "Belkin F5U011" : "CATC EL1210A NetMate",
usbdev->bus->bus_name, usbdev->devpath, netdev->dev_addr);
usb_set_intfdata(intf, catc);
SET_NETDEV_DEV(netdev, &intf->dev);
ret = register_netdev(netdev);
if (ret)
goto fail_clear_intfdata;
return 0;
fail_clear_intfdata:
usb_set_intfdata(intf, NULL);
fail_free:
usb_free_urb(catc->ctrl_urb);
usb_free_urb(catc->tx_urb);
usb_free_urb(catc->rx_urb);
usb_free_urb(catc->irq_urb);
free_netdev(netdev);
return ret;
}
| [[779, "\tint pktsz, ret;\n"], [843, "\t\tu32 *buf;\n"], [844, "\t\tint i;\n"], [845, "\n"], [848, "\t\tbuf = kmalloc(4, GFP_KERNEL);\n"], [849, "\t\tif (!buf) {\n"], [850, "\t\t\tret = -ENOMEM;\n"], [851, "\t\t\tgoto fail_free;\n"], [852, "\t\t}\n"], [853, "\n"], [854, "\t\t*buf = 0x12345678;\n"], [855, "\t\tcatc_write_mem(catc, 0x7a80, buf, 4);\n"], [856, "\t\t*buf = 0x87654321;\n"], [857, "\t\tcatc_write_mem(catc, 0xfa80, buf, 4);\n"], [858, "\t\tcatc_read_mem(catc, 0x7a80, buf, 4);\n"], [860, "\t\tswitch (*buf) {\n"], [875, "\n"], [876, "\t\tkfree(buf);\n"]] | [[779, "int pktsz, ret;"], [843, "u32 *buf;"], [844, "int i;"], [845, "\n"], [848, "buf = kmalloc(4, GFP_KERNEL);"], [849, "if (!buf)"], [850, "ret = -ENOMEM;"], [851, "goto fail_free;"], [852, "\t\t}\n"], [853, "\n"], [854, "*buf = 0x12345678;"], [855, "catc_write_mem(catc, 0x7a80, buf, 4);"], [856, "*buf = 0x87654321;"], [857, "catc_write_mem(catc, 0xfa80, buf, 4);"], [858, "catc_read_mem(catc, 0x7a80, buf, 4);"], [860, "switch (*buf) {"], [875, "\n"], [876, "kfree(buf);"]] | [
"CVE-2017-8070"
] | [
"CWE-119"
] | 60 | {
"Execution Environment": [
"CONFIG_VMAP_STACK"
],
"Explanation": null,
"External Function": [
"catc_write_mem",
"catc_read_mem"
],
"Function Argument": [
"struct usb_interface *intf",
"const struct usb_device_id *id"
],
"Globals": [],
"Type Execution Declaration": []
} |
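
One detail worth noting in the fixed probe: the new kmalloc() failure path jumps to the existing fail_free label, which releases all four URBs unconditionally; usb_free_urb(NULL) is a no-op, so slots that were never allocated are harmless to free. The same single-label unwinding convention in portable C (free(NULL) is likewise a no-op):

#include <stdio.h>
#include <stdlib.h>

/* Single cleanup label, as in catc_probe(): every resource slot starts
 * NULL and the label frees them all, so slots that were never filled
 * are harmless to release. */
static int setup(void)
{
	char *a = NULL, *b = NULL, *c = NULL;
	int ret = -1;

	a = malloc(16);
	b = malloc(16);
	if (!a || !b)
		goto fail; /* mirrors "ret = -ENOMEM; goto fail_free;" */
	c = malloc(16);
	if (!c)
		goto fail;

	printf("all allocations succeeded\n");
	ret = 0;
	/* the success path deliberately falls through the same frees */
fail:
	free(c);
	free(b);
	free(a);
	return ret;
}

int main(void)
{
	return setup() ? 1 : 0;
}
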
61 | linux | https://github.com/torvalds/linux | drivers/net/wireless/iwlwifi/iwl-agn-sta.c | 2da424b0773cea3db47e1e81db71eeebde8269d4 | iwlwifi: Sanity check for sta_id
In my testing, I saw some strange behavior:
[ 421.739708] iwlwifi 0000:01:00.0: ACTIVATE a non DRIVER active station id 148 addr 00:00:00:00:00:00
[ 421.739719] iwlwifi 0000:01:00.0: iwl_sta_ucode_activate Added STA id 148 addr 00:00:00:00:00:00 to uCode
not sure how it happened, but add the sanity check to prevent memory
corruption
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com> | true | 9ad2deb6b4d8a8bc03cf87ed9776a3b6 | iwl_sta_ucode_activate | static void iwl_sta_ucode_activate(struct iwl_priv *priv, u8 sta_id)
{
if (!(priv->stations[sta_id].used & IWL_STA_DRIVER_ACTIVE))
IWL_ERR(priv, "ACTIVATE a non DRIVER active station id %u "
"addr %pM\n",
sta_id, priv->stations[sta_id].sta.sta.addr);
if (priv->stations[sta_id].used & IWL_STA_UCODE_ACTIVE) {
IWL_DEBUG_ASSOC(priv,
"STA id %u addr %pM already present in uCode "
"(according to driver)\n",
sta_id, priv->stations[sta_id].sta.sta.addr);
} else {
priv->stations[sta_id].used |= IWL_STA_UCODE_ACTIVE;
IWL_DEBUG_ASSOC(priv, "Added STA id %u addr %pM to uCode\n",
sta_id, priv->stations[sta_id].sta.sta.addr);
}
}
| [[38, "static void iwl_sta_ucode_activate(struct iwl_priv *priv, u8 sta_id)\n"], [40, "\n"]] | [[38, "static void iwl_sta_ucode_activate(struct iwl_priv *priv, u8 sta_id)"], [40, "\n"]] | [
"CVE-2012-6712"
] | [
"CWE-119"
] | 62 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [
"IWLAGN_STATION_COUNT"
],
"Type Execution Declaration": [
"struct iwl_priv",
"IWL_STA_DRIVER_ACTIVE",
"IWL_STA_UCODE_ACTIVE"
]
} |
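
The vulnerable iwl_sta_ucode_activate() above indexes priv->stations[sta_id] on faith; the id 148 from the commit's log lies past the end of the station array, so the |= on ->used scribbles over neighbouring memory. The fix's shape, a bounds check before the first indexed access plus an error return, in a standalone sketch (STATION_COUNT stands in for IWLAGN_STATION_COUNT):

#include <stdio.h>

#define STATION_COUNT 16 /* stand-in for IWLAGN_STATION_COUNT */

struct station {
	int used;
};

static struct station stations[STATION_COUNT];

/* Validate the index before the first stations[sta_id] access, and
 * return an error the caller can act on (the fix turned the kernel
 * function from void into int for the same reason). */
static int activate(unsigned char sta_id)
{
	if (sta_id >= STATION_COUNT)
		return -1; /* mirrors the -EINVAL added by the fix */
	stations[sta_id].used = 1;
	return 0;
}

int main(void)
{
	printf("activate(3)   -> %d\n", activate(3));
	printf("activate(148) -> %d\n", activate(148));
	return 0;
}
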
62 | linux | https://github.com/torvalds/linux | drivers/net/wireless/iwlwifi/iwl-agn-sta.c | 2da424b0773cea3db47e1e81db71eeebde8269d4 | iwlwifi: Sanity check for sta_id
In my testing, I saw some strange behavior:
[ 421.739708] iwlwifi 0000:01:00.0: ACTIVATE a non DRIVER active station id 148 addr 00:00:00:00:00:00
[ 421.739719] iwlwifi 0000:01:00.0: iwl_sta_ucode_activate Added STA id 148 addr 00:00:00:00:00:00 to uCode
not sure how it happened, but add the sanity check to prevent memory
corruption
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com> | false | 4fe4f98d31c4ee36521adc11f4aa90ea | iwl_sta_ucode_activate | static int iwl_sta_ucode_activate(struct iwl_priv *priv, u8 sta_id)
{
if (sta_id >= IWLAGN_STATION_COUNT) {
IWL_ERR(priv, "invalid sta_id %u", sta_id);
return -EINVAL;
}
if (!(priv->stations[sta_id].used & IWL_STA_DRIVER_ACTIVE))
IWL_ERR(priv, "ACTIVATE a non DRIVER active station id %u "
"addr %pM\n",
sta_id, priv->stations[sta_id].sta.sta.addr);
if (priv->stations[sta_id].used & IWL_STA_UCODE_ACTIVE) {
IWL_DEBUG_ASSOC(priv,
"STA id %u addr %pM already present in uCode "
"(according to driver)\n",
sta_id, priv->stations[sta_id].sta.sta.addr);
} else {
priv->stations[sta_id].used |= IWL_STA_UCODE_ACTIVE;
IWL_DEBUG_ASSOC(priv, "Added STA id %u addr %pM to uCode\n",
sta_id, priv->stations[sta_id].sta.sta.addr);
}
return 0;
}
| [[38, "static int iwl_sta_ucode_activate(struct iwl_priv *priv, u8 sta_id)\n"], [40, "\tif (sta_id >= IWLAGN_STATION_COUNT) {\n"], [41, "\t\tIWL_ERR(priv, \"invalid sta_id %u\", sta_id);\n"], [42, "\t\treturn -EINVAL;\n"], [43, "\t}\n"], [59, "\treturn 0;\n"]] | [[38, "static int iwl_sta_ucode_activate(struct iwl_priv *priv, u8 sta_id)"], [40, "if (sta_id >= IWLAGN_STATION_COUNT)"], [41, "IWL_ERR(priv, \"invalid sta_id %u\", sta_id);"], [42, "return -EINVAL;"], [43, "\t}\n"], [59, "return 0;"]] | [
"CVE-2012-6712"
] | [
"CWE-119"
] | 62 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [
"IWLAGN_STATION_COUNT"
],
"Type Execution Declaration": [
"struct iwl_priv",
"IWL_STA_DRIVER_ACTIVE",
"IWL_STA_UCODE_ACTIVE"
]
} |
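
A side note on why the added `sta_id >= IWLAGN_STATION_COUNT` guard is live at all: sta_id is a u8, so the comparison only has teeth because the station count (a small constant in this driver) fits well inside the u8 range; against a bound above 255 it could never fire for any u8 value, and gcc's -Wtype-limits flags such comparisons as always false. A small demonstration:

#include <stdio.h>

#define SMALL_COUNT  16
#define LARGE_COUNT 300

int main(void)
{
	unsigned char id = 200;

	/* Against a bound that fits in u8, the guard is meaningful... */
	if (id >= SMALL_COUNT)
		printf("rejected against %d\n", SMALL_COUNT);

	/* ...against a bound above 255 it can never fire for any u8 value;
	 * gcc -Wtype-limits reports the comparison as always false. */
	if (id >= LARGE_COUNT)
		printf("unreachable for a u8 index\n");
	return 0;
}
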
63 | linux | https://github.com/torvalds/linux | drivers/usb/core/message.c | 2e1c42391ff2556387b3cb6308b24f6f65619feb | USB: core: harden cdc_parse_cdc_header
Andrey Konovalov reported a possible out-of-bounds problem for the
cdc_parse_cdc_header function. He writes:
It looks like cdc_parse_cdc_header() doesn't validate buflen
before accessing buffer[1], buffer[2] and so on. The only check
present is while (buflen > 0).
So fix this issue up by properly validating the buffer length matches
what the descriptor says it is.
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> | true | 67cccb2edbdbcaa7b6d975bc930769e3 | cdc_parse_cdc_header | int cdc_parse_cdc_header(struct usb_cdc_parsed_header *hdr,
struct usb_interface *intf,
u8 *buffer,
int buflen)
{
/* duplicates are ignored */
struct usb_cdc_union_desc *union_header = NULL;
/* duplicates are not tolerated */
struct usb_cdc_header_desc *header = NULL;
struct usb_cdc_ether_desc *ether = NULL;
struct usb_cdc_mdlm_detail_desc *detail = NULL;
struct usb_cdc_mdlm_desc *desc = NULL;
unsigned int elength;
int cnt = 0;
memset(hdr, 0x00, sizeof(struct usb_cdc_parsed_header));
hdr->phonet_magic_present = false;
while (buflen > 0) {
elength = buffer[0];
if (!elength) {
dev_err(&intf->dev, "skipping garbage byte\n");
elength = 1;
goto next_desc;
}
if (buffer[1] != USB_DT_CS_INTERFACE) {
dev_err(&intf->dev, "skipping garbage\n");
goto next_desc;
}
switch (buffer[2]) {
case USB_CDC_UNION_TYPE: /* we've found it */
if (elength < sizeof(struct usb_cdc_union_desc))
goto next_desc;
if (union_header) {
dev_err(&intf->dev, "More than one union descriptor, skipping ...\n");
goto next_desc;
}
union_header = (struct usb_cdc_union_desc *)buffer;
break;
case USB_CDC_COUNTRY_TYPE:
if (elength < sizeof(struct usb_cdc_country_functional_desc))
goto next_desc;
hdr->usb_cdc_country_functional_desc =
(struct usb_cdc_country_functional_desc *)buffer;
break;
case USB_CDC_HEADER_TYPE:
if (elength != sizeof(struct usb_cdc_header_desc))
goto next_desc;
if (header)
return -EINVAL;
header = (struct usb_cdc_header_desc *)buffer;
break;
case USB_CDC_ACM_TYPE:
if (elength < sizeof(struct usb_cdc_acm_descriptor))
goto next_desc;
hdr->usb_cdc_acm_descriptor =
(struct usb_cdc_acm_descriptor *)buffer;
break;
case USB_CDC_ETHERNET_TYPE:
if (elength != sizeof(struct usb_cdc_ether_desc))
goto next_desc;
if (ether)
return -EINVAL;
ether = (struct usb_cdc_ether_desc *)buffer;
break;
case USB_CDC_CALL_MANAGEMENT_TYPE:
if (elength < sizeof(struct usb_cdc_call_mgmt_descriptor))
goto next_desc;
hdr->usb_cdc_call_mgmt_descriptor =
(struct usb_cdc_call_mgmt_descriptor *)buffer;
break;
case USB_CDC_DMM_TYPE:
if (elength < sizeof(struct usb_cdc_dmm_desc))
goto next_desc;
hdr->usb_cdc_dmm_desc =
(struct usb_cdc_dmm_desc *)buffer;
break;
case USB_CDC_MDLM_TYPE:
if (elength < sizeof(struct usb_cdc_mdlm_desc *))
goto next_desc;
if (desc)
return -EINVAL;
desc = (struct usb_cdc_mdlm_desc *)buffer;
break;
case USB_CDC_MDLM_DETAIL_TYPE:
if (elength < sizeof(struct usb_cdc_mdlm_detail_desc *))
goto next_desc;
if (detail)
return -EINVAL;
detail = (struct usb_cdc_mdlm_detail_desc *)buffer;
break;
case USB_CDC_NCM_TYPE:
if (elength < sizeof(struct usb_cdc_ncm_desc))
goto next_desc;
hdr->usb_cdc_ncm_desc = (struct usb_cdc_ncm_desc *)buffer;
break;
case USB_CDC_MBIM_TYPE:
if (elength < sizeof(struct usb_cdc_mbim_desc))
goto next_desc;
hdr->usb_cdc_mbim_desc = (struct usb_cdc_mbim_desc *)buffer;
break;
case USB_CDC_MBIM_EXTENDED_TYPE:
if (elength < sizeof(struct usb_cdc_mbim_extended_desc))
break;
hdr->usb_cdc_mbim_extended_desc =
(struct usb_cdc_mbim_extended_desc *)buffer;
break;
case CDC_PHONET_MAGIC_NUMBER:
hdr->phonet_magic_present = true;
break;
default:
/*
* there are LOTS more CDC descriptors that
* could legitimately be found here.
*/
dev_dbg(&intf->dev, "Ignoring descriptor: type %02x, length %ud\n",
buffer[2], elength);
goto next_desc;
}
cnt++;
next_desc:
buflen -= elength;
buffer += elength;
}
hdr->usb_cdc_union_desc = union_header;
hdr->usb_cdc_header_desc = header;
hdr->usb_cdc_mdlm_detail_desc = detail;
hdr->usb_cdc_mdlm_desc = desc;
hdr->usb_cdc_ether_desc = ether;
return cnt;
}
| [] | [] | [
"CVE-2017-16534"
] | [
"CWE-119"
] | 64 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [
"USB_DT_CS_INTERFACE",
"USB_CDC_UNION_TYPE",
"USB_CDC_COUNTRY_TYPE",
"USB_CDC_HEADER_TYPE",
"USB_CDC_ACM_TYPE",
"USB_CDC_ETHERNET_TYPE",
"USB_CDC_CALL_MANAGEMENT_TYPE",
"USB_CDC_DMM_TYPE",
"USB_CDC_MDLM_TYPE",
"USB_CDC_MDLM_DETAIL_TYPE",
"USB_CDC_NCM_TYPE",
"USB_CDC_MBIM_TYPE",
"USB_CDC_MBIM_EXTENDED_TYPE",
"CDC_PHONET_MAGIC_NUMBER"
],
"Type Execution Declaration": [
"struct usb_cdc_union_desc",
"struct usb_cdc_header_desc",
"struct usb_cdc_ether_desc",
"struct usb_cdc_mdlm_detail_desc",
"struct usb_cdc_mdlm_desc",
"struct usb_cdc_country_functional_desc",
"struct usb_cdc_acm_descriptor",
"struct usb_cdc_call_mgmt_descriptor",
"struct usb_cdc_dmm_desc",
"struct usb_cdc_ncm_desc",
"struct usb_cdc_mbim_desc",
"struct usb_cdc_mbim_extended_desc",
"struct usb_cdc_parsed_header"
]
} |
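
The vulnerable parser above trusts elength = buffer[0] and reads buffer[1] and buffer[2] even when fewer than three bytes remain, and an oversized elength steps the cursor past the end of the buffer. The fix's guard, transplanted into a self-contained user-space walker over the same type-length-value layout:

#include <stdio.h>

/* Walk type-length-value descriptors the way cdc_parse_cdc_header()
 * does, but with the fix's guard: a descriptor must fit in what is left
 * of the buffer and be at least 3 bytes long before buffer[1] or
 * buffer[2] is touched. */
static int parse(const unsigned char *buffer, int buflen)
{
	int cnt = 0;

	while (buflen > 0) {
		unsigned int elength = buffer[0];

		if (!elength)
			return -1; /* zero-length descriptor: bail instead of looping */
		if (buflen < (int)elength || elength < 3)
			return -1; /* descriptor would overrun the buffer */
		printf("descriptor type 0x%02x subtype 0x%02x len %u\n",
		       buffer[1], buffer[2], elength);
		cnt++;
		buflen -= elength;
		buffer += elength;
	}
	return cnt;
}

int main(void)
{
	/* The last descriptor claims 9 bytes but only 3 remain: rejected. */
	const unsigned char evil[] = { 0x05, 0x24, 0x00, 0x01, 0x02,
				       0x09, 0x24, 0x06 };
	printf("parse -> %d\n", parse(evil, (int)sizeof(evil)));
	return 0;
}
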
64 | linux | https://github.com/torvalds/linux | drivers/usb/core/message.c | 2e1c42391ff2556387b3cb6308b24f6f65619feb | USB: core: harden cdc_parse_cdc_header
Andrey Konovalov reported a possible out-of-bounds problem for the
cdc_parse_cdc_header function. He writes:
It looks like cdc_parse_cdc_header() doesn't validate buflen
before accessing buffer[1], buffer[2] and so on. The only check
present is while (buflen > 0).
So fix this issue up by properly validating the buffer length matches
what the descriptor says it is.
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> | false | b882c8014dc9c7f1f0b707ef47655e4f | cdc_parse_cdc_header | int cdc_parse_cdc_header(struct usb_cdc_parsed_header *hdr,
struct usb_interface *intf,
u8 *buffer,
int buflen)
{
/* duplicates are ignored */
struct usb_cdc_union_desc *union_header = NULL;
/* duplicates are not tolerated */
struct usb_cdc_header_desc *header = NULL;
struct usb_cdc_ether_desc *ether = NULL;
struct usb_cdc_mdlm_detail_desc *detail = NULL;
struct usb_cdc_mdlm_desc *desc = NULL;
unsigned int elength;
int cnt = 0;
memset(hdr, 0x00, sizeof(struct usb_cdc_parsed_header));
hdr->phonet_magic_present = false;
while (buflen > 0) {
elength = buffer[0];
if (!elength) {
dev_err(&intf->dev, "skipping garbage byte\n");
elength = 1;
goto next_desc;
}
if ((buflen < elength) || (elength < 3)) {
dev_err(&intf->dev, "invalid descriptor buffer length\n");
break;
}
if (buffer[1] != USB_DT_CS_INTERFACE) {
dev_err(&intf->dev, "skipping garbage\n");
goto next_desc;
}
switch (buffer[2]) {
case USB_CDC_UNION_TYPE: /* we've found it */
if (elength < sizeof(struct usb_cdc_union_desc))
goto next_desc;
if (union_header) {
dev_err(&intf->dev, "More than one union descriptor, skipping ...\n");
goto next_desc;
}
union_header = (struct usb_cdc_union_desc *)buffer;
break;
case USB_CDC_COUNTRY_TYPE:
if (elength < sizeof(struct usb_cdc_country_functional_desc))
goto next_desc;
hdr->usb_cdc_country_functional_desc =
(struct usb_cdc_country_functional_desc *)buffer;
break;
case USB_CDC_HEADER_TYPE:
if (elength != sizeof(struct usb_cdc_header_desc))
goto next_desc;
if (header)
return -EINVAL;
header = (struct usb_cdc_header_desc *)buffer;
break;
case USB_CDC_ACM_TYPE:
if (elength < sizeof(struct usb_cdc_acm_descriptor))
goto next_desc;
hdr->usb_cdc_acm_descriptor =
(struct usb_cdc_acm_descriptor *)buffer;
break;
case USB_CDC_ETHERNET_TYPE:
if (elength != sizeof(struct usb_cdc_ether_desc))
goto next_desc;
if (ether)
return -EINVAL;
ether = (struct usb_cdc_ether_desc *)buffer;
break;
case USB_CDC_CALL_MANAGEMENT_TYPE:
if (elength < sizeof(struct usb_cdc_call_mgmt_descriptor))
goto next_desc;
hdr->usb_cdc_call_mgmt_descriptor =
(struct usb_cdc_call_mgmt_descriptor *)buffer;
break;
case USB_CDC_DMM_TYPE:
if (elength < sizeof(struct usb_cdc_dmm_desc))
goto next_desc;
hdr->usb_cdc_dmm_desc =
(struct usb_cdc_dmm_desc *)buffer;
break;
case USB_CDC_MDLM_TYPE:
if (elength < sizeof(struct usb_cdc_mdlm_desc *))
goto next_desc;
if (desc)
return -EINVAL;
desc = (struct usb_cdc_mdlm_desc *)buffer;
break;
case USB_CDC_MDLM_DETAIL_TYPE:
if (elength < sizeof(struct usb_cdc_mdlm_detail_desc *))
goto next_desc;
if (detail)
return -EINVAL;
detail = (struct usb_cdc_mdlm_detail_desc *)buffer;
break;
case USB_CDC_NCM_TYPE:
if (elength < sizeof(struct usb_cdc_ncm_desc))
goto next_desc;
hdr->usb_cdc_ncm_desc = (struct usb_cdc_ncm_desc *)buffer;
break;
case USB_CDC_MBIM_TYPE:
if (elength < sizeof(struct usb_cdc_mbim_desc))
goto next_desc;
hdr->usb_cdc_mbim_desc = (struct usb_cdc_mbim_desc *)buffer;
break;
case USB_CDC_MBIM_EXTENDED_TYPE:
if (elength < sizeof(struct usb_cdc_mbim_extended_desc))
break;
hdr->usb_cdc_mbim_extended_desc =
(struct usb_cdc_mbim_extended_desc *)buffer;
break;
case CDC_PHONET_MAGIC_NUMBER:
hdr->phonet_magic_present = true;
break;
default:
/*
* there are LOTS more CDC descriptors that
* could legitimately be found here.
*/
dev_dbg(&intf->dev, "Ignoring descriptor: type %02x, length %ud\n",
buffer[2], elength);
goto next_desc;
}
cnt++;
next_desc:
buflen -= elength;
buffer += elength;
}
hdr->usb_cdc_union_desc = union_header;
hdr->usb_cdc_header_desc = header;
hdr->usb_cdc_mdlm_detail_desc = detail;
hdr->usb_cdc_mdlm_desc = desc;
hdr->usb_cdc_ether_desc = ether;
return cnt;
}
| [[2072, "\t\tif ((buflen < elength) || (elength < 3)) {\n"], [2073, "\t\t\tdev_err(&intf->dev, \"invalid descriptor buffer length\\n\");\n"], [2074, "\t\t\tbreak;\n"], [2075, "\t\t}\n"]] | [[2072, "if ((buflen < elength) || (elength < 3))"], [2073, "dev_err(&intf->dev, \"invalid descriptor buffer length\\n\");"], [2074, "break;"], [2075, "\t\t}\n"]] | [
"CVE-2017-16534"
] | [
"CWE-119"
] | 64 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [
"USB_DT_CS_INTERFACE",
"USB_CDC_UNION_TYPE",
"USB_CDC_COUNTRY_TYPE",
"USB_CDC_HEADER_TYPE",
"USB_CDC_ACM_TYPE",
"USB_CDC_ETHERNET_TYPE",
"USB_CDC_CALL_MANAGEMENT_TYPE",
"USB_CDC_DMM_TYPE",
"USB_CDC_MDLM_TYPE",
"USB_CDC_MDLM_DETAIL_TYPE",
"USB_CDC_NCM_TYPE",
"USB_CDC_MBIM_TYPE",
"USB_CDC_MBIM_EXTENDED_TYPE",
"CDC_PHONET_MAGIC_NUMBER"
],
"Type Execution Declaration": [
"struct usb_cdc_union_desc",
"struct usb_cdc_header_desc",
"struct usb_cdc_ether_desc",
"struct usb_cdc_mdlm_detail_desc",
"struct usb_cdc_mdlm_desc",
"struct usb_cdc_country_functional_desc",
"struct usb_cdc_acm_descriptor",
"struct usb_cdc_call_mgmt_descriptor",
"struct usb_cdc_dmm_desc",
"struct usb_cdc_ncm_desc",
"struct usb_cdc_mbim_desc",
"struct usb_cdc_mbim_extended_desc",
"struct usb_cdc_parsed_header"
]
} |
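
A detail of the guard that is easy to miss: `elength < 3` guarantees exactly the three bytes the dispatcher reads unconditionally, bLength (buffer[0]), bDescriptorType (buffer[1]) and bDescriptorSubType (buffer[2]); the per-type sizeof checks then cover the remainder of each descriptor. A struct overlay makes that common prefix concrete (an illustration of the layout, not kernel code):

#include <stdio.h>
#include <stdint.h>

/* The common 3-byte prefix every class-specific descriptor begins with;
 * these are the bytes that "elength < 3" protects. */
struct cs_desc_header {
	uint8_t bLength;            /* buffer[0] */
	uint8_t bDescriptorType;    /* buffer[1] */
	uint8_t bDescriptorSubType; /* buffer[2] */
};

int main(void)
{
	const uint8_t raw[] = { 0x05, 0x24, 0x00, 0x01, 0x02 };
	const struct cs_desc_header *h = (const void *)raw;

	printf("len=%u type=0x%02x subtype=0x%02x\n",
	       (unsigned)h->bLength, (unsigned)h->bDescriptorType,
	       (unsigned)h->bDescriptorSubType);
	return 0;
}
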
65 | linux | https://github.com/torvalds/linux | security/apparmor/lsm.c | 30a46a4647fd1df9cf52e43bf467f0d9265096ca | apparmor: fix oops, validate buffer size in apparmor_setprocattr()
When proc_pid_attr_write() was changed to use memdup_user, apparmor's
(interface-violating) assumption that the setprocattr buffer was always
a single page was violated.
The size test is not strictly speaking needed as proc_pid_attr_write()
will reject anything larger, but for the sake of robustness we can keep
it in.
SMACK and SELinux look safe to me, but somebody else should probably
have a look just in case.
Based on original patch from Vegard Nossum <vegard.nossum@oracle.com>
modified for the case that apparmor provides null termination.
Fixes: bb646cdb12e75d82258c2f2e7746d5952d3e321a
Reported-by: Vegard Nossum <vegard.nossum@oracle.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: John Johansen <john.johansen@canonical.com>
Cc: Paul Moore <paul@paul-moore.com>
Cc: Stephen Smalley <sds@tycho.nsa.gov>
Cc: Eric Paris <eparis@parisplace.org>
Cc: Casey Schaufler <casey@schaufler-ca.com>
Cc: stable@kernel.org
Signed-off-by: John Johansen <john.johansen@canonical.com>
Reviewed-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: James Morris <james.l.morris@oracle.com> | true | 2aa205d4471b524745a6ff7b07ec87e5 | apparmor_setprocattr | static int apparmor_setprocattr(struct task_struct *task, char *name,
void *value, size_t size)
{
struct common_audit_data sa;
struct apparmor_audit_data aad = {0,};
char *command, *args = value;
size_t arg_size;
int error;
if (size == 0)
return -EINVAL;
/* args points to a PAGE_SIZE buffer, AppArmor requires that
* the buffer must be null terminated or have size <= PAGE_SIZE -1
* so that AppArmor can null terminate them
*/
if (args[size - 1] != '\0') {
if (size == PAGE_SIZE)
return -EINVAL;
args[size] = '\0';
}
/* task can only write its own attributes */
if (current != task)
return -EACCES;
args = value;
args = strim(args);
command = strsep(&args, " ");
if (!args)
return -EINVAL;
args = skip_spaces(args);
if (!*args)
return -EINVAL;
arg_size = size - (args - (char *) value);
if (strcmp(name, "current") == 0) {
if (strcmp(command, "changehat") == 0) {
error = aa_setprocattr_changehat(args, arg_size,
!AA_DO_TEST);
} else if (strcmp(command, "permhat") == 0) {
error = aa_setprocattr_changehat(args, arg_size,
AA_DO_TEST);
} else if (strcmp(command, "changeprofile") == 0) {
error = aa_setprocattr_changeprofile(args, !AA_ONEXEC,
!AA_DO_TEST);
} else if (strcmp(command, "permprofile") == 0) {
error = aa_setprocattr_changeprofile(args, !AA_ONEXEC,
AA_DO_TEST);
} else
goto fail;
} else if (strcmp(name, "exec") == 0) {
if (strcmp(command, "exec") == 0)
error = aa_setprocattr_changeprofile(args, AA_ONEXEC,
!AA_DO_TEST);
else
goto fail;
} else
/* only support the "current" and "exec" process attributes */
return -EINVAL;
if (!error)
error = size;
return error;
fail:
sa.type = LSM_AUDIT_DATA_NONE;
sa.aad = &aad;
aad.profile = aa_current_profile();
aad.op = OP_SETPROCATTR;
aad.info = name;
aad.error = -EINVAL;
aa_audit_msg(AUDIT_APPARMOR_DENIED, &sa, NULL);
return -EINVAL;
}
| [[503, "\tchar *command, *args = value;\n"], [509, "\t/* args points to a PAGE_SIZE buffer, AppArmor requires that\n"], [510, "\t * the buffer must be null terminated or have size <= PAGE_SIZE -1\n"], [511, "\t * so that AppArmor can null terminate them\n"], [512, "\t */\n"], [513, "\tif (args[size - 1] != '\\0') {\n"], [514, "\t\tif (size == PAGE_SIZE)\n"], [515, "\t\t\treturn -EINVAL;\n"], [516, "\t\targs[size] = '\\0';\n"], [517, "\t}\n"], [518, "\n"], [523, "\targs = value;\n"], [527, "\t\treturn -EINVAL;\n"], [530, "\t\treturn -EINVAL;\n"], [556, "\t\treturn -EINVAL;\n"], [568, "\taad.error = -EINVAL;\n"], [570, "\treturn -EINVAL;\n"]] | [[503, "char *command, *args = value;"], [509, "/* args points to a PAGE_SIZE buffer, AppArmor requires that\n\t * the buffer must be null terminated or have size <= PAGE_SIZE -1\n\t * so that AppArmor can null terminate them\n\t */"], [513, "if (args[size - 1] != '\\0')"], [514, "if (size == PAGE_SIZE)"], [515, "return -EINVAL;"], [516, "args[size] = '\\0';"], [517, "\t}\n"], [518, "\n"], [523, "args = value;"], [527, "return -EINVAL;"], [530, "return -EINVAL;"], [556, "return -EINVAL;"], [568, "aad.error = -EINVAL;"], [570, "return -EINVAL;"]] | [
"CVE-2016-6187"
] | [
"CWE-119"
] | 66 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [
"value",
"size"
],
"Globals": [
"PAGE_SIZE"
],
"Type Execution Declaration": []
} |
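
The vulnerable apparmor_setprocattr() above still assumes the old PAGE_SIZE contract: once proc_pid_attr_write() started handing over an exact-size memdup_user() buffer, args[size] = '\0' became a one-byte heap overflow, and without a terminator strim()/strsep() would walk off the end of the allocation. A user-space illustration of the hazard and of the copy-into-size-plus-one pattern the fix adopts:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	/* memdup_user()-style buffer: exactly `size` bytes, no NUL inside. */
	size_t size = 9;
	char *args = malloc(size);
	if (!args)
		return 1;
	memcpy(args, "changehat", size);  /* 9 bytes of user data, unterminated */

	/* The old code would now do args[size] = '\0', a one-byte heap
	 * overflow on this allocation. The fix instead copies into a
	 * size + 1 buffer it owns and terminates inside it: */
	char *largs = malloc(size + 1);
	if (!largs) {
		free(args);
		return 1;
	}
	memcpy(largs, args, size);
	largs[size] = '\0';               /* in bounds: last index of size + 1 */

	printf("command: %s\n", largs);
	free(largs);
	free(args);
	return 0;
}
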
66 | linux | https://github.com/torvalds/linux | security/apparmor/lsm.c | 30a46a4647fd1df9cf52e43bf467f0d9265096ca | apparmor: fix oops, validate buffer size in apparmor_setprocattr()
When proc_pid_attr_write() was changed to use memdup_user, apparmor's
(interface-violating) assumption that the setprocattr buffer was always
a single page was violated.
The size test is not strictly speaking needed as proc_pid_attr_write()
will reject anything larger, but for the sake of robustness we can keep
it in.
SMACK and SELinux look safe to me, but somebody else should probably
have a look just in case.
Based on original patch from Vegard Nossum <vegard.nossum@oracle.com>
modified for the case that apparmor provides null termination.
Fixes: bb646cdb12e75d82258c2f2e7746d5952d3e321a
Reported-by: Vegard Nossum <vegard.nossum@oracle.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: John Johansen <john.johansen@canonical.com>
Cc: Paul Moore <paul@paul-moore.com>
Cc: Stephen Smalley <sds@tycho.nsa.gov>
Cc: Eric Paris <eparis@parisplace.org>
Cc: Casey Schaufler <casey@schaufler-ca.com>
Cc: stable@kernel.org
Signed-off-by: John Johansen <john.johansen@canonical.com>
Reviewed-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: James Morris <james.l.morris@oracle.com> | false | c9f7da81394639484c3d759435eb4307 | apparmor_setprocattr | static int apparmor_setprocattr(struct task_struct *task, char *name,
void *value, size_t size)
{
struct common_audit_data sa;
struct apparmor_audit_data aad = {0,};
char *command, *largs = NULL, *args = value;
size_t arg_size;
int error;
if (size == 0)
return -EINVAL;
/* task can only write its own attributes */
if (current != task)
return -EACCES;
/* AppArmor requires that the buffer must be null terminated atm */
if (args[size - 1] != '\0') {
/* null terminate */
largs = args = kmalloc(size + 1, GFP_KERNEL);
if (!args)
return -ENOMEM;
memcpy(args, value, size);
args[size] = '\0';
}
error = -EINVAL;
args = strim(args);
command = strsep(&args, " ");
if (!args)
goto out;
args = skip_spaces(args);
if (!*args)
goto out;
arg_size = size - (args - (char *) value);
if (strcmp(name, "current") == 0) {
if (strcmp(command, "changehat") == 0) {
error = aa_setprocattr_changehat(args, arg_size,
!AA_DO_TEST);
} else if (strcmp(command, "permhat") == 0) {
error = aa_setprocattr_changehat(args, arg_size,
AA_DO_TEST);
} else if (strcmp(command, "changeprofile") == 0) {
error = aa_setprocattr_changeprofile(args, !AA_ONEXEC,
!AA_DO_TEST);
} else if (strcmp(command, "permprofile") == 0) {
error = aa_setprocattr_changeprofile(args, !AA_ONEXEC,
AA_DO_TEST);
} else
goto fail;
} else if (strcmp(name, "exec") == 0) {
if (strcmp(command, "exec") == 0)
error = aa_setprocattr_changeprofile(args, AA_ONEXEC,
!AA_DO_TEST);
else
goto fail;
} else
/* only support the "current" and "exec" process attributes */
goto fail;
if (!error)
error = size;
out:
kfree(largs);
return error;
fail:
sa.type = LSM_AUDIT_DATA_NONE;
sa.aad = &aad;
aad.profile = aa_current_profile();
aad.op = OP_SETPROCATTR;
aad.info = name;
aad.error = error = -EINVAL;
aa_audit_msg(AUDIT_APPARMOR_DENIED, &sa, NULL);
goto out;
}
| [[503, "\tchar *command, *largs = NULL, *args = value;\n"], [513, "\t/* AppArmor requires that the buffer must be null terminated atm */\n"], [514, "\tif (args[size - 1] != '\\0') {\n"], [515, "\t\t/* null terminate */\n"], [516, "\t\tlargs = args = kmalloc(size + 1, GFP_KERNEL);\n"], [517, "\t\tif (!args)\n"], [518, "\t\t\treturn -ENOMEM;\n"], [519, "\t\tmemcpy(args, value, size);\n"], [520, "\t\targs[size] = '\\0';\n"], [521, "\t}\n"], [522, "\n"], [523, "\terror = -EINVAL;\n"], [527, "\t\tgoto out;\n"], [530, "\t\tgoto out;\n"], [556, "\t\tgoto fail;\n"], [560, "out:\n"], [561, "\tkfree(largs);\n"], [570, "\taad.error = error = -EINVAL;\n"], [572, "\tgoto out;\n"]] | [[503, "char *command, *largs = NULL, *args = value;"], [513, "/* AppArmor requires that the buffer must be null terminated atm */"], [514, "if (args[size - 1] != '\\0')"], [515, "/* null terminate */"], [516, "largs = args = kmalloc(size + 1, GFP_KERNEL);"], [517, "if (!args)"], [518, "return -ENOMEM;"], [519, "memcpy(args, value, size);"], [520, "args[size] = '\\0';"], [521, "\t}\n"], [522, "\n"], [523, "error = -EINVAL;"], [527, "goto out;"], [530, "goto out;"], [556, "goto fail;"], [560, "out:"], [561, "kfree(largs);"], [570, "aad.error = error = -EINVAL;"], [572, "goto out;"]] | [
"CVE-2016-6187"
] | [
"CWE-119"
] | 66 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [
"value",
"size"
],
"Globals": [
"PAGE_SIZE"
],
"Type Execution Declaration": []
} |
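
Both versions share the same command grammar: trim the buffer, split at the first space, then skip spaces before the argument. The strim()/strsep()/skip_spaces() pipeline has a close user-space analogue, sketched below; strsep() is a BSD/glibc extension and glibc has no strim(), so the trimming is spelled out by hand:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <ctype.h>

int main(void)
{
	char buf[] = "  changehat profile_name \n";
	char *args = buf;
	char *command;

	/* strim() analogue: drop leading and trailing whitespace. */
	while (isspace((unsigned char)*args))
		args++;
	for (char *end = args + strlen(args);
	     end > args && isspace((unsigned char)end[-1]); )
		*--end = '\0';

	/* strsep() cuts at the first space and advances args past it,
	 * leaving args NULL if no separator was found. */
	command = strsep(&args, " ");
	if (!args) {
		fprintf(stderr, "missing argument\n");
		return 1;
	}
	while (isspace((unsigned char)*args)) /* skip_spaces() analogue */
		args++;

	printf("command=%s args=%s\n", command, args);
	return 0;
}
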
67 | linux | https://github.com/torvalds/linux | arch/loongarch/net/bpf_jit.c | 36a87385e31c9343af9a4756598e704741250a67 | LoongArch: BPF: Prevent out-of-bounds memory access
The test_tag test triggers an unhandled page fault:
# ./test_tag
[ 130.640218] CPU 0 Unable to handle kernel paging request at virtual address ffff80001b898004, era == 9000000003137f7c, ra == 9000000003139e70
[ 130.640501] Oops[#3]:
[ 130.640553] CPU: 0 PID: 1326 Comm: test_tag Tainted: G D O 6.7.0-rc4-loong-devel-gb62ab1a397cf #47 61985c1d94084daa2432f771daa45b56b10d8d2a
[ 130.640764] Hardware name: QEMU QEMU Virtual Machine, BIOS unknown 2/2/2022
[ 130.640874] pc 9000000003137f7c ra 9000000003139e70 tp 9000000104cb4000 sp 9000000104cb7a40
[ 130.641001] a0 ffff80001b894000 a1 ffff80001b897ff8 a2 000000006ba210be a3 0000000000000000
[ 130.641128] a4 000000006ba210be a5 00000000000000f1 a6 00000000000000b3 a7 0000000000000000
[ 130.641256] t0 0000000000000000 t1 00000000000007f6 t2 0000000000000000 t3 9000000004091b70
[ 130.641387] t4 000000006ba210be t5 0000000000000004 t6 fffffffffffffff0 t7 90000000040913e0
[ 130.641512] t8 0000000000000005 u0 0000000000000dc0 s9 0000000000000009 s0 9000000104cb7ae0
[ 130.641641] s1 00000000000007f6 s2 0000000000000009 s3 0000000000000095 s4 0000000000000000
[ 130.641771] s5 ffff80001b894000 s6 ffff80001b897fb0 s7 9000000004090c50 s8 0000000000000000
[ 130.641900] ra: 9000000003139e70 build_body+0x1fcc/0x4988
[ 130.642007] ERA: 9000000003137f7c build_body+0xd8/0x4988
[ 130.642112] CRMD: 000000b0 (PLV0 -IE -DA +PG DACF=CC DACM=CC -WE)
[ 130.642261] PRMD: 00000004 (PPLV0 +PIE -PWE)
[ 130.642353] EUEN: 00000003 (+FPE +SXE -ASXE -BTE)
[ 130.642458] ECFG: 00071c1c (LIE=2-4,10-12 VS=7)
[ 130.642554] ESTAT: 00010000 [PIL] (IS= ECode=1 EsubCode=0)
[ 130.642658] BADV: ffff80001b898004
[ 130.642719] PRID: 0014c010 (Loongson-64bit, Loongson-3A5000)
[ 130.642815] Modules linked in: [last unloaded: bpf_testmod(O)]
[ 130.642924] Process test_tag (pid: 1326, threadinfo=00000000f7f4015f, task=000000006499f9fd)
[ 130.643062] Stack : 0000000000000000 9000000003380724 0000000000000000 0000000104cb7be8
[ 130.643213] 0000000000000000 25af8d9b6e600558 9000000106250ea0 9000000104cb7ae0
[ 130.643378] 0000000000000000 0000000000000000 9000000104cb7be8 90000000049f6000
[ 130.643538] 0000000000000090 9000000106250ea0 ffff80001b894000 ffff80001b894000
[ 130.643685] 00007ffffb917790 900000000313ca94 0000000000000000 0000000000000000
[ 130.643831] ffff80001b894000 0000000000000ff7 0000000000000000 9000000100468000
[ 130.643983] 0000000000000000 0000000000000000 0000000000000040 25af8d9b6e600558
[ 130.644131] 0000000000000bb7 ffff80001b894048 0000000000000000 0000000000000000
[ 130.644276] 9000000104cb7be8 90000000049f6000 0000000000000090 9000000104cb7bdc
[ 130.644423] ffff80001b894000 0000000000000000 00007ffffb917790 90000000032acfb0
[ 130.644572] ...
[ 130.644629] Call Trace:
[ 130.644641] [<9000000003137f7c>] build_body+0xd8/0x4988
[ 130.644785] [<900000000313ca94>] bpf_int_jit_compile+0x228/0x4ec
[ 130.644891] [<90000000032acfb0>] bpf_prog_select_runtime+0x158/0x1b0
[ 130.645003] [<90000000032b3504>] bpf_prog_load+0x760/0xb44
[ 130.645089] [<90000000032b6744>] __sys_bpf+0xbb8/0x2588
[ 130.645175] [<90000000032b8388>] sys_bpf+0x20/0x2c
[ 130.645259] [<9000000003f6ab38>] do_syscall+0x7c/0x94
[ 130.645369] [<9000000003121c5c>] handle_syscall+0xbc/0x158
[ 130.645507]
[ 130.645539] Code: 380839f6 380831f9 28412bae <24000ca6> 004081ad 0014cb50 004083e8 02bff34c 58008e91
[ 130.645729]
[ 130.646418] ---[ end trace 0000000000000000 ]---
On my machine, which has CONFIG_PAGE_SIZE_16KB=y, the test failed at
loading a BPF prog with 2039 instructions:
prog = (struct bpf_prog *)ffff80001b894000
insn = (struct bpf_insn *)(prog->insnsi)ffff80001b894048
insn + 2039 = (struct bpf_insn *)ffff80001b898000 <- end of the page
In the build_insn() function, we are trying to access the next instruction
unconditionally, i.e. `(insn + 1)->imm`. The address lies in the next
page and may not be owned by the current process, thus a page fault is
inevitable and is followed by a segfault.
So, let's access the next instruction only in the `dst = imm64` context.
With this fix, we have:
# ./test_tag
test_tag: OK (40945 tests)
Fixes: bbfddb904df6f82 ("LoongArch: BPF: Avoid declare variables in switch-case")
Tested-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> | true | 719fda17f6dc069f936d58796841734c | build_insn | static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool extra_pass)
{
u8 tm = -1;
u64 func_addr;
bool func_addr_fixed, sign_extend;
int i = insn - ctx->prog->insnsi;
int ret, jmp_offset;
const u8 code = insn->code;
const u8 cond = BPF_OP(code);
const u8 t1 = LOONGARCH_GPR_T1;
const u8 t2 = LOONGARCH_GPR_T2;
const u8 src = regmap[insn->src_reg];
const u8 dst = regmap[insn->dst_reg];
const s16 off = insn->off;
const s32 imm = insn->imm;
const u64 imm64 = (u64)(insn + 1)->imm << 32 | (u32)insn->imm;
const bool is32 = BPF_CLASS(insn->code) == BPF_ALU || BPF_CLASS(insn->code) == BPF_JMP32;
switch (code) {
/* dst = src */
case BPF_ALU | BPF_MOV | BPF_X:
case BPF_ALU64 | BPF_MOV | BPF_X:
switch (off) {
case 0:
move_reg(ctx, dst, src);
emit_zext_32(ctx, dst, is32);
break;
case 8:
move_reg(ctx, t1, src);
emit_insn(ctx, extwb, dst, t1);
emit_zext_32(ctx, dst, is32);
break;
case 16:
move_reg(ctx, t1, src);
emit_insn(ctx, extwh, dst, t1);
emit_zext_32(ctx, dst, is32);
break;
case 32:
emit_insn(ctx, addw, dst, src, LOONGARCH_GPR_ZERO);
break;
}
break;
/* dst = imm */
case BPF_ALU | BPF_MOV | BPF_K:
case BPF_ALU64 | BPF_MOV | BPF_K:
move_imm(ctx, dst, imm, is32);
break;
/* dst = dst + src */
case BPF_ALU | BPF_ADD | BPF_X:
case BPF_ALU64 | BPF_ADD | BPF_X:
emit_insn(ctx, addd, dst, dst, src);
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst + imm */
case BPF_ALU | BPF_ADD | BPF_K:
case BPF_ALU64 | BPF_ADD | BPF_K:
if (is_signed_imm12(imm)) {
emit_insn(ctx, addid, dst, dst, imm);
} else {
move_imm(ctx, t1, imm, is32);
emit_insn(ctx, addd, dst, dst, t1);
}
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst - src */
case BPF_ALU | BPF_SUB | BPF_X:
case BPF_ALU64 | BPF_SUB | BPF_X:
emit_insn(ctx, subd, dst, dst, src);
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst - imm */
case BPF_ALU | BPF_SUB | BPF_K:
case BPF_ALU64 | BPF_SUB | BPF_K:
if (is_signed_imm12(-imm)) {
emit_insn(ctx, addid, dst, dst, -imm);
} else {
move_imm(ctx, t1, imm, is32);
emit_insn(ctx, subd, dst, dst, t1);
}
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst * src */
case BPF_ALU | BPF_MUL | BPF_X:
case BPF_ALU64 | BPF_MUL | BPF_X:
emit_insn(ctx, muld, dst, dst, src);
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst * imm */
case BPF_ALU | BPF_MUL | BPF_K:
case BPF_ALU64 | BPF_MUL | BPF_K:
move_imm(ctx, t1, imm, is32);
emit_insn(ctx, muld, dst, dst, t1);
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst / src */
case BPF_ALU | BPF_DIV | BPF_X:
case BPF_ALU64 | BPF_DIV | BPF_X:
if (!off) {
emit_zext_32(ctx, dst, is32);
move_reg(ctx, t1, src);
emit_zext_32(ctx, t1, is32);
emit_insn(ctx, divdu, dst, dst, t1);
emit_zext_32(ctx, dst, is32);
} else {
emit_sext_32(ctx, dst, is32);
move_reg(ctx, t1, src);
emit_sext_32(ctx, t1, is32);
emit_insn(ctx, divd, dst, dst, t1);
emit_sext_32(ctx, dst, is32);
}
break;
/* dst = dst / imm */
case BPF_ALU | BPF_DIV | BPF_K:
case BPF_ALU64 | BPF_DIV | BPF_K:
if (!off) {
move_imm(ctx, t1, imm, is32);
emit_zext_32(ctx, dst, is32);
emit_insn(ctx, divdu, dst, dst, t1);
emit_zext_32(ctx, dst, is32);
} else {
move_imm(ctx, t1, imm, false);
emit_sext_32(ctx, t1, is32);
emit_sext_32(ctx, dst, is32);
emit_insn(ctx, divd, dst, dst, t1);
emit_sext_32(ctx, dst, is32);
}
break;
/* dst = dst % src */
case BPF_ALU | BPF_MOD | BPF_X:
case BPF_ALU64 | BPF_MOD | BPF_X:
if (!off) {
emit_zext_32(ctx, dst, is32);
move_reg(ctx, t1, src);
emit_zext_32(ctx, t1, is32);
emit_insn(ctx, moddu, dst, dst, t1);
emit_zext_32(ctx, dst, is32);
} else {
emit_sext_32(ctx, dst, is32);
move_reg(ctx, t1, src);
emit_sext_32(ctx, t1, is32);
emit_insn(ctx, modd, dst, dst, t1);
emit_sext_32(ctx, dst, is32);
}
break;
/* dst = dst % imm */
case BPF_ALU | BPF_MOD | BPF_K:
case BPF_ALU64 | BPF_MOD | BPF_K:
if (!off) {
move_imm(ctx, t1, imm, is32);
emit_zext_32(ctx, dst, is32);
emit_insn(ctx, moddu, dst, dst, t1);
emit_zext_32(ctx, dst, is32);
} else {
move_imm(ctx, t1, imm, false);
emit_sext_32(ctx, t1, is32);
emit_sext_32(ctx, dst, is32);
emit_insn(ctx, modd, dst, dst, t1);
emit_sext_32(ctx, dst, is32);
}
break;
/* dst = -dst */
case BPF_ALU | BPF_NEG:
case BPF_ALU64 | BPF_NEG:
move_imm(ctx, t1, imm, is32);
emit_insn(ctx, subd, dst, LOONGARCH_GPR_ZERO, dst);
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst & src */
case BPF_ALU | BPF_AND | BPF_X:
case BPF_ALU64 | BPF_AND | BPF_X:
emit_insn(ctx, and, dst, dst, src);
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst & imm */
case BPF_ALU | BPF_AND | BPF_K:
case BPF_ALU64 | BPF_AND | BPF_K:
if (is_unsigned_imm12(imm)) {
emit_insn(ctx, andi, dst, dst, imm);
} else {
move_imm(ctx, t1, imm, is32);
emit_insn(ctx, and, dst, dst, t1);
}
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst | src */
case BPF_ALU | BPF_OR | BPF_X:
case BPF_ALU64 | BPF_OR | BPF_X:
emit_insn(ctx, or, dst, dst, src);
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst | imm */
case BPF_ALU | BPF_OR | BPF_K:
case BPF_ALU64 | BPF_OR | BPF_K:
if (is_unsigned_imm12(imm)) {
emit_insn(ctx, ori, dst, dst, imm);
} else {
move_imm(ctx, t1, imm, is32);
emit_insn(ctx, or, dst, dst, t1);
}
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst ^ src */
case BPF_ALU | BPF_XOR | BPF_X:
case BPF_ALU64 | BPF_XOR | BPF_X:
emit_insn(ctx, xor, dst, dst, src);
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst ^ imm */
case BPF_ALU | BPF_XOR | BPF_K:
case BPF_ALU64 | BPF_XOR | BPF_K:
if (is_unsigned_imm12(imm)) {
emit_insn(ctx, xori, dst, dst, imm);
} else {
move_imm(ctx, t1, imm, is32);
emit_insn(ctx, xor, dst, dst, t1);
}
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst << src (logical) */
case BPF_ALU | BPF_LSH | BPF_X:
emit_insn(ctx, sllw, dst, dst, src);
emit_zext_32(ctx, dst, is32);
break;
case BPF_ALU64 | BPF_LSH | BPF_X:
emit_insn(ctx, slld, dst, dst, src);
break;
/* dst = dst << imm (logical) */
case BPF_ALU | BPF_LSH | BPF_K:
emit_insn(ctx, slliw, dst, dst, imm);
emit_zext_32(ctx, dst, is32);
break;
case BPF_ALU64 | BPF_LSH | BPF_K:
emit_insn(ctx, sllid, dst, dst, imm);
break;
/* dst = dst >> src (logical) */
case BPF_ALU | BPF_RSH | BPF_X:
emit_insn(ctx, srlw, dst, dst, src);
emit_zext_32(ctx, dst, is32);
break;
case BPF_ALU64 | BPF_RSH | BPF_X:
emit_insn(ctx, srld, dst, dst, src);
break;
/* dst = dst >> imm (logical) */
case BPF_ALU | BPF_RSH | BPF_K:
emit_insn(ctx, srliw, dst, dst, imm);
emit_zext_32(ctx, dst, is32);
break;
case BPF_ALU64 | BPF_RSH | BPF_K:
emit_insn(ctx, srlid, dst, dst, imm);
break;
/* dst = dst >> src (arithmetic) */
case BPF_ALU | BPF_ARSH | BPF_X:
emit_insn(ctx, sraw, dst, dst, src);
emit_zext_32(ctx, dst, is32);
break;
case BPF_ALU64 | BPF_ARSH | BPF_X:
emit_insn(ctx, srad, dst, dst, src);
break;
/* dst = dst >> imm (arithmetic) */
case BPF_ALU | BPF_ARSH | BPF_K:
emit_insn(ctx, sraiw, dst, dst, imm);
emit_zext_32(ctx, dst, is32);
break;
case BPF_ALU64 | BPF_ARSH | BPF_K:
emit_insn(ctx, sraid, dst, dst, imm);
break;
/* dst = BSWAP##imm(dst) */
case BPF_ALU | BPF_END | BPF_FROM_LE:
switch (imm) {
case 16:
/* zero-extend 16 bits into 64 bits */
emit_insn(ctx, bstrpickd, dst, dst, 15, 0);
break;
case 32:
/* zero-extend 32 bits into 64 bits */
emit_zext_32(ctx, dst, is32);
break;
case 64:
/* do nothing */
break;
}
break;
case BPF_ALU | BPF_END | BPF_FROM_BE:
case BPF_ALU64 | BPF_END | BPF_FROM_LE:
switch (imm) {
case 16:
emit_insn(ctx, revb2h, dst, dst);
/* zero-extend 16 bits into 64 bits */
emit_insn(ctx, bstrpickd, dst, dst, 15, 0);
break;
case 32:
emit_insn(ctx, revb2w, dst, dst);
/* clear the upper 32 bits */
emit_zext_32(ctx, dst, true);
break;
case 64:
emit_insn(ctx, revbd, dst, dst);
break;
}
break;
/* PC += off if dst cond src */
case BPF_JMP | BPF_JEQ | BPF_X:
case BPF_JMP | BPF_JNE | BPF_X:
case BPF_JMP | BPF_JGT | BPF_X:
case BPF_JMP | BPF_JGE | BPF_X:
case BPF_JMP | BPF_JLT | BPF_X:
case BPF_JMP | BPF_JLE | BPF_X:
case BPF_JMP | BPF_JSGT | BPF_X:
case BPF_JMP | BPF_JSGE | BPF_X:
case BPF_JMP | BPF_JSLT | BPF_X:
case BPF_JMP | BPF_JSLE | BPF_X:
case BPF_JMP32 | BPF_JEQ | BPF_X:
case BPF_JMP32 | BPF_JNE | BPF_X:
case BPF_JMP32 | BPF_JGT | BPF_X:
case BPF_JMP32 | BPF_JGE | BPF_X:
case BPF_JMP32 | BPF_JLT | BPF_X:
case BPF_JMP32 | BPF_JLE | BPF_X:
case BPF_JMP32 | BPF_JSGT | BPF_X:
case BPF_JMP32 | BPF_JSGE | BPF_X:
case BPF_JMP32 | BPF_JSLT | BPF_X:
case BPF_JMP32 | BPF_JSLE | BPF_X:
jmp_offset = bpf2la_offset(i, off, ctx);
move_reg(ctx, t1, dst);
move_reg(ctx, t2, src);
if (is_signed_bpf_cond(BPF_OP(code))) {
emit_sext_32(ctx, t1, is32);
emit_sext_32(ctx, t2, is32);
} else {
emit_zext_32(ctx, t1, is32);
emit_zext_32(ctx, t2, is32);
}
if (emit_cond_jmp(ctx, cond, t1, t2, jmp_offset) < 0)
goto toofar;
break;
/* PC += off if dst cond imm */
case BPF_JMP | BPF_JEQ | BPF_K:
case BPF_JMP | BPF_JNE | BPF_K:
case BPF_JMP | BPF_JGT | BPF_K:
case BPF_JMP | BPF_JGE | BPF_K:
case BPF_JMP | BPF_JLT | BPF_K:
case BPF_JMP | BPF_JLE | BPF_K:
case BPF_JMP | BPF_JSGT | BPF_K:
case BPF_JMP | BPF_JSGE | BPF_K:
case BPF_JMP | BPF_JSLT | BPF_K:
case BPF_JMP | BPF_JSLE | BPF_K:
case BPF_JMP32 | BPF_JEQ | BPF_K:
case BPF_JMP32 | BPF_JNE | BPF_K:
case BPF_JMP32 | BPF_JGT | BPF_K:
case BPF_JMP32 | BPF_JGE | BPF_K:
case BPF_JMP32 | BPF_JLT | BPF_K:
case BPF_JMP32 | BPF_JLE | BPF_K:
case BPF_JMP32 | BPF_JSGT | BPF_K:
case BPF_JMP32 | BPF_JSGE | BPF_K:
case BPF_JMP32 | BPF_JSLT | BPF_K:
case BPF_JMP32 | BPF_JSLE | BPF_K:
jmp_offset = bpf2la_offset(i, off, ctx);
if (imm) {
move_imm(ctx, t1, imm, false);
tm = t1;
} else {
/* If imm is 0, simply use zero register. */
tm = LOONGARCH_GPR_ZERO;
}
move_reg(ctx, t2, dst);
if (is_signed_bpf_cond(BPF_OP(code))) {
emit_sext_32(ctx, tm, is32);
emit_sext_32(ctx, t2, is32);
} else {
emit_zext_32(ctx, tm, is32);
emit_zext_32(ctx, t2, is32);
}
if (emit_cond_jmp(ctx, cond, t2, tm, jmp_offset) < 0)
goto toofar;
break;
/* PC += off if dst & src */
case BPF_JMP | BPF_JSET | BPF_X:
case BPF_JMP32 | BPF_JSET | BPF_X:
jmp_offset = bpf2la_offset(i, off, ctx);
emit_insn(ctx, and, t1, dst, src);
emit_zext_32(ctx, t1, is32);
if (emit_cond_jmp(ctx, cond, t1, LOONGARCH_GPR_ZERO, jmp_offset) < 0)
goto toofar;
break;
/* PC += off if dst & imm */
case BPF_JMP | BPF_JSET | BPF_K:
case BPF_JMP32 | BPF_JSET | BPF_K:
jmp_offset = bpf2la_offset(i, off, ctx);
move_imm(ctx, t1, imm, is32);
emit_insn(ctx, and, t1, dst, t1);
emit_zext_32(ctx, t1, is32);
if (emit_cond_jmp(ctx, cond, t1, LOONGARCH_GPR_ZERO, jmp_offset) < 0)
goto toofar;
break;
/* PC += off */
case BPF_JMP | BPF_JA:
case BPF_JMP32 | BPF_JA:
if (BPF_CLASS(code) == BPF_JMP)
jmp_offset = bpf2la_offset(i, off, ctx);
else
jmp_offset = bpf2la_offset(i, imm, ctx);
if (emit_uncond_jmp(ctx, jmp_offset) < 0)
goto toofar;
break;
/* function call */
case BPF_JMP | BPF_CALL:
mark_call(ctx);
ret = bpf_jit_get_func_addr(ctx->prog, insn, extra_pass,
&func_addr, &func_addr_fixed);
if (ret < 0)
return ret;
move_addr(ctx, t1, func_addr);
emit_insn(ctx, jirl, t1, LOONGARCH_GPR_RA, 0);
move_reg(ctx, regmap[BPF_REG_0], LOONGARCH_GPR_A0);
break;
/* tail call */
case BPF_JMP | BPF_TAIL_CALL:
mark_tail_call(ctx);
if (emit_bpf_tail_call(ctx) < 0)
return -EINVAL;
break;
/* function return */
case BPF_JMP | BPF_EXIT:
if (i == ctx->prog->len - 1)
break;
jmp_offset = epilogue_offset(ctx);
if (emit_uncond_jmp(ctx, jmp_offset) < 0)
goto toofar;
break;
/* dst = imm64 */
case BPF_LD | BPF_IMM | BPF_DW:
move_imm(ctx, dst, imm64, is32);
return 1;
/* dst = *(size *)(src + off) */
case BPF_LDX | BPF_MEM | BPF_B:
case BPF_LDX | BPF_MEM | BPF_H:
case BPF_LDX | BPF_MEM | BPF_W:
case BPF_LDX | BPF_MEM | BPF_DW:
case BPF_LDX | BPF_PROBE_MEM | BPF_DW:
case BPF_LDX | BPF_PROBE_MEM | BPF_W:
case BPF_LDX | BPF_PROBE_MEM | BPF_H:
case BPF_LDX | BPF_PROBE_MEM | BPF_B:
/* dst_reg = (s64)*(signed size *)(src_reg + off) */
case BPF_LDX | BPF_MEMSX | BPF_B:
case BPF_LDX | BPF_MEMSX | BPF_H:
case BPF_LDX | BPF_MEMSX | BPF_W:
case BPF_LDX | BPF_PROBE_MEMSX | BPF_B:
case BPF_LDX | BPF_PROBE_MEMSX | BPF_H:
case BPF_LDX | BPF_PROBE_MEMSX | BPF_W:
sign_extend = BPF_MODE(insn->code) == BPF_MEMSX ||
BPF_MODE(insn->code) == BPF_PROBE_MEMSX;
switch (BPF_SIZE(code)) {
case BPF_B:
if (is_signed_imm12(off)) {
if (sign_extend)
emit_insn(ctx, ldb, dst, src, off);
else
emit_insn(ctx, ldbu, dst, src, off);
} else {
move_imm(ctx, t1, off, is32);
if (sign_extend)
emit_insn(ctx, ldxb, dst, src, t1);
else
emit_insn(ctx, ldxbu, dst, src, t1);
}
break;
case BPF_H:
if (is_signed_imm12(off)) {
if (sign_extend)
emit_insn(ctx, ldh, dst, src, off);
else
emit_insn(ctx, ldhu, dst, src, off);
} else {
move_imm(ctx, t1, off, is32);
if (sign_extend)
emit_insn(ctx, ldxh, dst, src, t1);
else
emit_insn(ctx, ldxhu, dst, src, t1);
}
break;
case BPF_W:
if (is_signed_imm12(off)) {
if (sign_extend)
emit_insn(ctx, ldw, dst, src, off);
else
emit_insn(ctx, ldwu, dst, src, off);
} else {
move_imm(ctx, t1, off, is32);
if (sign_extend)
emit_insn(ctx, ldxw, dst, src, t1);
else
emit_insn(ctx, ldxwu, dst, src, t1);
}
break;
case BPF_DW:
move_imm(ctx, t1, off, is32);
emit_insn(ctx, ldxd, dst, src, t1);
break;
}
ret = add_exception_handler(insn, ctx, dst);
if (ret)
return ret;
break;
/* *(size *)(dst + off) = imm */
case BPF_ST | BPF_MEM | BPF_B:
case BPF_ST | BPF_MEM | BPF_H:
case BPF_ST | BPF_MEM | BPF_W:
case BPF_ST | BPF_MEM | BPF_DW:
switch (BPF_SIZE(code)) {
case BPF_B:
move_imm(ctx, t1, imm, is32);
if (is_signed_imm12(off)) {
emit_insn(ctx, stb, t1, dst, off);
} else {
move_imm(ctx, t2, off, is32);
emit_insn(ctx, stxb, t1, dst, t2);
}
break;
case BPF_H:
move_imm(ctx, t1, imm, is32);
if (is_signed_imm12(off)) {
emit_insn(ctx, sth, t1, dst, off);
} else {
move_imm(ctx, t2, off, is32);
emit_insn(ctx, stxh, t1, dst, t2);
}
break;
case BPF_W:
move_imm(ctx, t1, imm, is32);
if (is_signed_imm12(off)) {
emit_insn(ctx, stw, t1, dst, off);
} else if (is_signed_imm14(off)) {
emit_insn(ctx, stptrw, t1, dst, off);
} else {
move_imm(ctx, t2, off, is32);
emit_insn(ctx, stxw, t1, dst, t2);
}
break;
case BPF_DW:
move_imm(ctx, t1, imm, is32);
if (is_signed_imm12(off)) {
emit_insn(ctx, std, t1, dst, off);
} else if (is_signed_imm14(off)) {
emit_insn(ctx, stptrd, t1, dst, off);
} else {
move_imm(ctx, t2, off, is32);
emit_insn(ctx, stxd, t1, dst, t2);
}
break;
}
break;
/* *(size *)(dst + off) = src */
case BPF_STX | BPF_MEM | BPF_B:
case BPF_STX | BPF_MEM | BPF_H:
case BPF_STX | BPF_MEM | BPF_W:
case BPF_STX | BPF_MEM | BPF_DW:
switch (BPF_SIZE(code)) {
case BPF_B:
if (is_signed_imm12(off)) {
emit_insn(ctx, stb, src, dst, off);
} else {
move_imm(ctx, t1, off, is32);
emit_insn(ctx, stxb, src, dst, t1);
}
break;
case BPF_H:
if (is_signed_imm12(off)) {
emit_insn(ctx, sth, src, dst, off);
} else {
move_imm(ctx, t1, off, is32);
emit_insn(ctx, stxh, src, dst, t1);
}
break;
case BPF_W:
if (is_signed_imm12(off)) {
emit_insn(ctx, stw, src, dst, off);
} else if (is_signed_imm14(off)) {
emit_insn(ctx, stptrw, src, dst, off);
} else {
move_imm(ctx, t1, off, is32);
emit_insn(ctx, stxw, src, dst, t1);
}
break;
case BPF_DW:
if (is_signed_imm12(off)) {
emit_insn(ctx, std, src, dst, off);
} else if (is_signed_imm14(off)) {
emit_insn(ctx, stptrd, src, dst, off);
} else {
move_imm(ctx, t1, off, is32);
emit_insn(ctx, stxd, src, dst, t1);
}
break;
}
break;
case BPF_STX | BPF_ATOMIC | BPF_W:
case BPF_STX | BPF_ATOMIC | BPF_DW:
emit_atomic(insn, ctx);
break;
/* Speculation barrier */
case BPF_ST | BPF_NOSPEC:
break;
default:
pr_err("bpf_jit: unknown opcode %02x\n", code);
return -EINVAL;
}
return 0;
toofar:
pr_info_once("bpf_jit: opcode %02x, jump too far\n", code);
return -E2BIG;
}
| [[473, "\tconst u64 imm64 = (u64)(insn + 1)->imm << 32 | (u32)insn->imm;\n"]] | [[473, "const u64 imm64 = (u64)(insn + 1)->imm << 32 | (u32)insn->imm;"]] | [
"CVE-2024-26588"
] | [
"CWE-125",
"CWE-119"
] | 68 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [
"insn",
"ctx"
],
"Globals": [],
"Type Execution Declaration": []
} |
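A note on the opcode case labels running through the build_insn() body above: each label ORs together an instruction class, an operation, and a source flag inside the 8-bit BPF opcode. A standalone sketch of that decoding (masks and example values mirror the public BPF ISA in uapi/linux/bpf_common.h, reproduced here only so the snippet builds on its own):

#include <stdint.h>
#include <stdio.h>

#define BPF_CLASS(code) ((code) & 0x07)	/* BPF_ALU=0x04, BPF_ALU64=0x07 */
#define BPF_OP(code)    ((code) & 0xf0)	/* BPF_LSH=0x60, BPF_RSH=0x70 */
#define BPF_SRC(code)   ((code) & 0x08)	/* BPF_K=0x00 (imm), BPF_X=0x08 (reg) */

int main(void)
{
	uint8_t code = 0x07 | 0x70 | 0x08;	/* BPF_ALU64 | BPF_RSH | BPF_X */

	/* Prints class=0x7 op=0x70 src=0x8. */
	printf("class=%#x op=%#x src=%#x\n",
	       BPF_CLASS(code), BPF_OP(code), BPF_SRC(code));
	return 0;
}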
68 | linux | https://github.com/torvalds/linux | arch/loongarch/net/bpf_jit.c | 36a87385e31c9343af9a4756598e704741250a67 | LoongArch: BPF: Prevent out-of-bounds memory access
The test_tag test triggers an unhandled page fault:
# ./test_tag
[ 130.640218] CPU 0 Unable to handle kernel paging request at virtual address ffff80001b898004, era == 9000000003137f7c, ra == 9000000003139e70
[ 130.640501] Oops[#3]:
[ 130.640553] CPU: 0 PID: 1326 Comm: test_tag Tainted: G D O 6.7.0-rc4-loong-devel-gb62ab1a397cf #47 61985c1d94084daa2432f771daa45b56b10d8d2a
[ 130.640764] Hardware name: QEMU QEMU Virtual Machine, BIOS unknown 2/2/2022
[ 130.640874] pc 9000000003137f7c ra 9000000003139e70 tp 9000000104cb4000 sp 9000000104cb7a40
[ 130.641001] a0 ffff80001b894000 a1 ffff80001b897ff8 a2 000000006ba210be a3 0000000000000000
[ 130.641128] a4 000000006ba210be a5 00000000000000f1 a6 00000000000000b3 a7 0000000000000000
[ 130.641256] t0 0000000000000000 t1 00000000000007f6 t2 0000000000000000 t3 9000000004091b70
[ 130.641387] t4 000000006ba210be t5 0000000000000004 t6 fffffffffffffff0 t7 90000000040913e0
[ 130.641512] t8 0000000000000005 u0 0000000000000dc0 s9 0000000000000009 s0 9000000104cb7ae0
[ 130.641641] s1 00000000000007f6 s2 0000000000000009 s3 0000000000000095 s4 0000000000000000
[ 130.641771] s5 ffff80001b894000 s6 ffff80001b897fb0 s7 9000000004090c50 s8 0000000000000000
[ 130.641900] ra: 9000000003139e70 build_body+0x1fcc/0x4988
[ 130.642007] ERA: 9000000003137f7c build_body+0xd8/0x4988
[ 130.642112] CRMD: 000000b0 (PLV0 -IE -DA +PG DACF=CC DACM=CC -WE)
[ 130.642261] PRMD: 00000004 (PPLV0 +PIE -PWE)
[ 130.642353] EUEN: 00000003 (+FPE +SXE -ASXE -BTE)
[ 130.642458] ECFG: 00071c1c (LIE=2-4,10-12 VS=7)
[ 130.642554] ESTAT: 00010000 [PIL] (IS= ECode=1 EsubCode=0)
[ 130.642658] BADV: ffff80001b898004
[ 130.642719] PRID: 0014c010 (Loongson-64bit, Loongson-3A5000)
[ 130.642815] Modules linked in: [last unloaded: bpf_testmod(O)]
[ 130.642924] Process test_tag (pid: 1326, threadinfo=00000000f7f4015f, task=000000006499f9fd)
[ 130.643062] Stack : 0000000000000000 9000000003380724 0000000000000000 0000000104cb7be8
[ 130.643213] 0000000000000000 25af8d9b6e600558 9000000106250ea0 9000000104cb7ae0
[ 130.643378] 0000000000000000 0000000000000000 9000000104cb7be8 90000000049f6000
[ 130.643538] 0000000000000090 9000000106250ea0 ffff80001b894000 ffff80001b894000
[ 130.643685] 00007ffffb917790 900000000313ca94 0000000000000000 0000000000000000
[ 130.643831] ffff80001b894000 0000000000000ff7 0000000000000000 9000000100468000
[ 130.643983] 0000000000000000 0000000000000000 0000000000000040 25af8d9b6e600558
[ 130.644131] 0000000000000bb7 ffff80001b894048 0000000000000000 0000000000000000
[ 130.644276] 9000000104cb7be8 90000000049f6000 0000000000000090 9000000104cb7bdc
[ 130.644423] ffff80001b894000 0000000000000000 00007ffffb917790 90000000032acfb0
[ 130.644572] ...
[ 130.644629] Call Trace:
[ 130.644641] [<9000000003137f7c>] build_body+0xd8/0x4988
[ 130.644785] [<900000000313ca94>] bpf_int_jit_compile+0x228/0x4ec
[ 130.644891] [<90000000032acfb0>] bpf_prog_select_runtime+0x158/0x1b0
[ 130.645003] [<90000000032b3504>] bpf_prog_load+0x760/0xb44
[ 130.645089] [<90000000032b6744>] __sys_bpf+0xbb8/0x2588
[ 130.645175] [<90000000032b8388>] sys_bpf+0x20/0x2c
[ 130.645259] [<9000000003f6ab38>] do_syscall+0x7c/0x94
[ 130.645369] [<9000000003121c5c>] handle_syscall+0xbc/0x158
[ 130.645507]
[ 130.645539] Code: 380839f6 380831f9 28412bae <24000ca6> 004081ad 0014cb50 004083e8 02bff34c 58008e91
[ 130.645729]
[ 130.646418] ---[ end trace 0000000000000000 ]---
On my machine, which has CONFIG_PAGE_SIZE_16KB=y, the test failed when
loading a BPF prog with 2039 instructions:
prog = (struct bpf_prog *)ffff80001b894000
insn = (struct bpf_insn *)(prog->insnsi)ffff80001b894048
insn + 2039 = (struct bpf_insn *)ffff80001b898000 <- end of the page
In the build_insn() function, we are trying to access the next instruction
unconditionally, i.e. `(insn + 1)->imm`. The address lies in the next
page and may not be owned by the current process, so a page fault is
inevitable, followed by a segfault.
So, let's access the next instruction only in the `dst = imm64` context.
With this fix, we have:
# ./test_tag
test_tag: OK (40945 tests)
Fixes: bbfddb904df6f82 ("LoongArch: BPF: Avoid declare variables in switch-case")
Tested-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn> | false | d0de32374eb64f45e7c908076a8d3536 | build_insn | static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool extra_pass)
{
u8 tm = -1;
u64 func_addr;
bool func_addr_fixed, sign_extend;
int i = insn - ctx->prog->insnsi;
int ret, jmp_offset;
const u8 code = insn->code;
const u8 cond = BPF_OP(code);
const u8 t1 = LOONGARCH_GPR_T1;
const u8 t2 = LOONGARCH_GPR_T2;
const u8 src = regmap[insn->src_reg];
const u8 dst = regmap[insn->dst_reg];
const s16 off = insn->off;
const s32 imm = insn->imm;
const bool is32 = BPF_CLASS(insn->code) == BPF_ALU || BPF_CLASS(insn->code) == BPF_JMP32;
switch (code) {
/* dst = src */
case BPF_ALU | BPF_MOV | BPF_X:
case BPF_ALU64 | BPF_MOV | BPF_X:
switch (off) {
case 0:
move_reg(ctx, dst, src);
emit_zext_32(ctx, dst, is32);
break;
case 8:
move_reg(ctx, t1, src);
emit_insn(ctx, extwb, dst, t1);
emit_zext_32(ctx, dst, is32);
break;
case 16:
move_reg(ctx, t1, src);
emit_insn(ctx, extwh, dst, t1);
emit_zext_32(ctx, dst, is32);
break;
case 32:
emit_insn(ctx, addw, dst, src, LOONGARCH_GPR_ZERO);
break;
}
break;
/* dst = imm */
case BPF_ALU | BPF_MOV | BPF_K:
case BPF_ALU64 | BPF_MOV | BPF_K:
move_imm(ctx, dst, imm, is32);
break;
/* dst = dst + src */
case BPF_ALU | BPF_ADD | BPF_X:
case BPF_ALU64 | BPF_ADD | BPF_X:
emit_insn(ctx, addd, dst, dst, src);
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst + imm */
case BPF_ALU | BPF_ADD | BPF_K:
case BPF_ALU64 | BPF_ADD | BPF_K:
if (is_signed_imm12(imm)) {
emit_insn(ctx, addid, dst, dst, imm);
} else {
move_imm(ctx, t1, imm, is32);
emit_insn(ctx, addd, dst, dst, t1);
}
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst - src */
case BPF_ALU | BPF_SUB | BPF_X:
case BPF_ALU64 | BPF_SUB | BPF_X:
emit_insn(ctx, subd, dst, dst, src);
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst - imm */
case BPF_ALU | BPF_SUB | BPF_K:
case BPF_ALU64 | BPF_SUB | BPF_K:
if (is_signed_imm12(-imm)) {
emit_insn(ctx, addid, dst, dst, -imm);
} else {
move_imm(ctx, t1, imm, is32);
emit_insn(ctx, subd, dst, dst, t1);
}
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst * src */
case BPF_ALU | BPF_MUL | BPF_X:
case BPF_ALU64 | BPF_MUL | BPF_X:
emit_insn(ctx, muld, dst, dst, src);
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst * imm */
case BPF_ALU | BPF_MUL | BPF_K:
case BPF_ALU64 | BPF_MUL | BPF_K:
move_imm(ctx, t1, imm, is32);
emit_insn(ctx, muld, dst, dst, t1);
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst / src */
case BPF_ALU | BPF_DIV | BPF_X:
case BPF_ALU64 | BPF_DIV | BPF_X:
if (!off) {
emit_zext_32(ctx, dst, is32);
move_reg(ctx, t1, src);
emit_zext_32(ctx, t1, is32);
emit_insn(ctx, divdu, dst, dst, t1);
emit_zext_32(ctx, dst, is32);
} else {
emit_sext_32(ctx, dst, is32);
move_reg(ctx, t1, src);
emit_sext_32(ctx, t1, is32);
emit_insn(ctx, divd, dst, dst, t1);
emit_sext_32(ctx, dst, is32);
}
break;
/* dst = dst / imm */
case BPF_ALU | BPF_DIV | BPF_K:
case BPF_ALU64 | BPF_DIV | BPF_K:
if (!off) {
move_imm(ctx, t1, imm, is32);
emit_zext_32(ctx, dst, is32);
emit_insn(ctx, divdu, dst, dst, t1);
emit_zext_32(ctx, dst, is32);
} else {
move_imm(ctx, t1, imm, false);
emit_sext_32(ctx, t1, is32);
emit_sext_32(ctx, dst, is32);
emit_insn(ctx, divd, dst, dst, t1);
emit_sext_32(ctx, dst, is32);
}
break;
/* dst = dst % src */
case BPF_ALU | BPF_MOD | BPF_X:
case BPF_ALU64 | BPF_MOD | BPF_X:
if (!off) {
emit_zext_32(ctx, dst, is32);
move_reg(ctx, t1, src);
emit_zext_32(ctx, t1, is32);
emit_insn(ctx, moddu, dst, dst, t1);
emit_zext_32(ctx, dst, is32);
} else {
emit_sext_32(ctx, dst, is32);
move_reg(ctx, t1, src);
emit_sext_32(ctx, t1, is32);
emit_insn(ctx, modd, dst, dst, t1);
emit_sext_32(ctx, dst, is32);
}
break;
/* dst = dst % imm */
case BPF_ALU | BPF_MOD | BPF_K:
case BPF_ALU64 | BPF_MOD | BPF_K:
if (!off) {
move_imm(ctx, t1, imm, is32);
emit_zext_32(ctx, dst, is32);
emit_insn(ctx, moddu, dst, dst, t1);
emit_zext_32(ctx, dst, is32);
} else {
move_imm(ctx, t1, imm, false);
emit_sext_32(ctx, t1, is32);
emit_sext_32(ctx, dst, is32);
emit_insn(ctx, modd, dst, dst, t1);
emit_sext_32(ctx, dst, is32);
}
break;
/* dst = -dst */
case BPF_ALU | BPF_NEG:
case BPF_ALU64 | BPF_NEG:
move_imm(ctx, t1, imm, is32);
emit_insn(ctx, subd, dst, LOONGARCH_GPR_ZERO, dst);
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst & src */
case BPF_ALU | BPF_AND | BPF_X:
case BPF_ALU64 | BPF_AND | BPF_X:
emit_insn(ctx, and, dst, dst, src);
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst & imm */
case BPF_ALU | BPF_AND | BPF_K:
case BPF_ALU64 | BPF_AND | BPF_K:
if (is_unsigned_imm12(imm)) {
emit_insn(ctx, andi, dst, dst, imm);
} else {
move_imm(ctx, t1, imm, is32);
emit_insn(ctx, and, dst, dst, t1);
}
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst | src */
case BPF_ALU | BPF_OR | BPF_X:
case BPF_ALU64 | BPF_OR | BPF_X:
emit_insn(ctx, or, dst, dst, src);
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst | imm */
case BPF_ALU | BPF_OR | BPF_K:
case BPF_ALU64 | BPF_OR | BPF_K:
if (is_unsigned_imm12(imm)) {
emit_insn(ctx, ori, dst, dst, imm);
} else {
move_imm(ctx, t1, imm, is32);
emit_insn(ctx, or, dst, dst, t1);
}
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst ^ src */
case BPF_ALU | BPF_XOR | BPF_X:
case BPF_ALU64 | BPF_XOR | BPF_X:
emit_insn(ctx, xor, dst, dst, src);
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst ^ imm */
case BPF_ALU | BPF_XOR | BPF_K:
case BPF_ALU64 | BPF_XOR | BPF_K:
if (is_unsigned_imm12(imm)) {
emit_insn(ctx, xori, dst, dst, imm);
} else {
move_imm(ctx, t1, imm, is32);
emit_insn(ctx, xor, dst, dst, t1);
}
emit_zext_32(ctx, dst, is32);
break;
/* dst = dst << src (logical) */
case BPF_ALU | BPF_LSH | BPF_X:
emit_insn(ctx, sllw, dst, dst, src);
emit_zext_32(ctx, dst, is32);
break;
case BPF_ALU64 | BPF_LSH | BPF_X:
emit_insn(ctx, slld, dst, dst, src);
break;
/* dst = dst << imm (logical) */
case BPF_ALU | BPF_LSH | BPF_K:
emit_insn(ctx, slliw, dst, dst, imm);
emit_zext_32(ctx, dst, is32);
break;
case BPF_ALU64 | BPF_LSH | BPF_K:
emit_insn(ctx, sllid, dst, dst, imm);
break;
/* dst = dst >> src (logical) */
case BPF_ALU | BPF_RSH | BPF_X:
emit_insn(ctx, srlw, dst, dst, src);
emit_zext_32(ctx, dst, is32);
break;
case BPF_ALU64 | BPF_RSH | BPF_X:
emit_insn(ctx, srld, dst, dst, src);
break;
/* dst = dst >> imm (logical) */
case BPF_ALU | BPF_RSH | BPF_K:
emit_insn(ctx, srliw, dst, dst, imm);
emit_zext_32(ctx, dst, is32);
break;
case BPF_ALU64 | BPF_RSH | BPF_K:
emit_insn(ctx, srlid, dst, dst, imm);
break;
/* dst = dst >> src (arithmetic) */
case BPF_ALU | BPF_ARSH | BPF_X:
emit_insn(ctx, sraw, dst, dst, src);
emit_zext_32(ctx, dst, is32);
break;
case BPF_ALU64 | BPF_ARSH | BPF_X:
emit_insn(ctx, srad, dst, dst, src);
break;
/* dst = dst >> imm (arithmetic) */
case BPF_ALU | BPF_ARSH | BPF_K:
emit_insn(ctx, sraiw, dst, dst, imm);
emit_zext_32(ctx, dst, is32);
break;
case BPF_ALU64 | BPF_ARSH | BPF_K:
emit_insn(ctx, sraid, dst, dst, imm);
break;
/* dst = BSWAP##imm(dst) */
case BPF_ALU | BPF_END | BPF_FROM_LE:
switch (imm) {
case 16:
/* zero-extend 16 bits into 64 bits */
emit_insn(ctx, bstrpickd, dst, dst, 15, 0);
break;
case 32:
/* zero-extend 32 bits into 64 bits */
emit_zext_32(ctx, dst, is32);
break;
case 64:
/* do nothing */
break;
}
break;
case BPF_ALU | BPF_END | BPF_FROM_BE:
case BPF_ALU64 | BPF_END | BPF_FROM_LE:
switch (imm) {
case 16:
emit_insn(ctx, revb2h, dst, dst);
/* zero-extend 16 bits into 64 bits */
emit_insn(ctx, bstrpickd, dst, dst, 15, 0);
break;
case 32:
emit_insn(ctx, revb2w, dst, dst);
/* clear the upper 32 bits */
emit_zext_32(ctx, dst, true);
break;
case 64:
emit_insn(ctx, revbd, dst, dst);
break;
}
break;
/* PC += off if dst cond src */
case BPF_JMP | BPF_JEQ | BPF_X:
case BPF_JMP | BPF_JNE | BPF_X:
case BPF_JMP | BPF_JGT | BPF_X:
case BPF_JMP | BPF_JGE | BPF_X:
case BPF_JMP | BPF_JLT | BPF_X:
case BPF_JMP | BPF_JLE | BPF_X:
case BPF_JMP | BPF_JSGT | BPF_X:
case BPF_JMP | BPF_JSGE | BPF_X:
case BPF_JMP | BPF_JSLT | BPF_X:
case BPF_JMP | BPF_JSLE | BPF_X:
case BPF_JMP32 | BPF_JEQ | BPF_X:
case BPF_JMP32 | BPF_JNE | BPF_X:
case BPF_JMP32 | BPF_JGT | BPF_X:
case BPF_JMP32 | BPF_JGE | BPF_X:
case BPF_JMP32 | BPF_JLT | BPF_X:
case BPF_JMP32 | BPF_JLE | BPF_X:
case BPF_JMP32 | BPF_JSGT | BPF_X:
case BPF_JMP32 | BPF_JSGE | BPF_X:
case BPF_JMP32 | BPF_JSLT | BPF_X:
case BPF_JMP32 | BPF_JSLE | BPF_X:
jmp_offset = bpf2la_offset(i, off, ctx);
move_reg(ctx, t1, dst);
move_reg(ctx, t2, src);
if (is_signed_bpf_cond(BPF_OP(code))) {
emit_sext_32(ctx, t1, is32);
emit_sext_32(ctx, t2, is32);
} else {
emit_zext_32(ctx, t1, is32);
emit_zext_32(ctx, t2, is32);
}
if (emit_cond_jmp(ctx, cond, t1, t2, jmp_offset) < 0)
goto toofar;
break;
/* PC += off if dst cond imm */
case BPF_JMP | BPF_JEQ | BPF_K:
case BPF_JMP | BPF_JNE | BPF_K:
case BPF_JMP | BPF_JGT | BPF_K:
case BPF_JMP | BPF_JGE | BPF_K:
case BPF_JMP | BPF_JLT | BPF_K:
case BPF_JMP | BPF_JLE | BPF_K:
case BPF_JMP | BPF_JSGT | BPF_K:
case BPF_JMP | BPF_JSGE | BPF_K:
case BPF_JMP | BPF_JSLT | BPF_K:
case BPF_JMP | BPF_JSLE | BPF_K:
case BPF_JMP32 | BPF_JEQ | BPF_K:
case BPF_JMP32 | BPF_JNE | BPF_K:
case BPF_JMP32 | BPF_JGT | BPF_K:
case BPF_JMP32 | BPF_JGE | BPF_K:
case BPF_JMP32 | BPF_JLT | BPF_K:
case BPF_JMP32 | BPF_JLE | BPF_K:
case BPF_JMP32 | BPF_JSGT | BPF_K:
case BPF_JMP32 | BPF_JSGE | BPF_K:
case BPF_JMP32 | BPF_JSLT | BPF_K:
case BPF_JMP32 | BPF_JSLE | BPF_K:
jmp_offset = bpf2la_offset(i, off, ctx);
if (imm) {
move_imm(ctx, t1, imm, false);
tm = t1;
} else {
/* If imm is 0, simply use zero register. */
tm = LOONGARCH_GPR_ZERO;
}
move_reg(ctx, t2, dst);
if (is_signed_bpf_cond(BPF_OP(code))) {
emit_sext_32(ctx, tm, is32);
emit_sext_32(ctx, t2, is32);
} else {
emit_zext_32(ctx, tm, is32);
emit_zext_32(ctx, t2, is32);
}
if (emit_cond_jmp(ctx, cond, t2, tm, jmp_offset) < 0)
goto toofar;
break;
/* PC += off if dst & src */
case BPF_JMP | BPF_JSET | BPF_X:
case BPF_JMP32 | BPF_JSET | BPF_X:
jmp_offset = bpf2la_offset(i, off, ctx);
emit_insn(ctx, and, t1, dst, src);
emit_zext_32(ctx, t1, is32);
if (emit_cond_jmp(ctx, cond, t1, LOONGARCH_GPR_ZERO, jmp_offset) < 0)
goto toofar;
break;
/* PC += off if dst & imm */
case BPF_JMP | BPF_JSET | BPF_K:
case BPF_JMP32 | BPF_JSET | BPF_K:
jmp_offset = bpf2la_offset(i, off, ctx);
move_imm(ctx, t1, imm, is32);
emit_insn(ctx, and, t1, dst, t1);
emit_zext_32(ctx, t1, is32);
if (emit_cond_jmp(ctx, cond, t1, LOONGARCH_GPR_ZERO, jmp_offset) < 0)
goto toofar;
break;
/* PC += off */
case BPF_JMP | BPF_JA:
case BPF_JMP32 | BPF_JA:
if (BPF_CLASS(code) == BPF_JMP)
jmp_offset = bpf2la_offset(i, off, ctx);
else
jmp_offset = bpf2la_offset(i, imm, ctx);
if (emit_uncond_jmp(ctx, jmp_offset) < 0)
goto toofar;
break;
/* function call */
case BPF_JMP | BPF_CALL:
mark_call(ctx);
ret = bpf_jit_get_func_addr(ctx->prog, insn, extra_pass,
&func_addr, &func_addr_fixed);
if (ret < 0)
return ret;
move_addr(ctx, t1, func_addr);
emit_insn(ctx, jirl, t1, LOONGARCH_GPR_RA, 0);
move_reg(ctx, regmap[BPF_REG_0], LOONGARCH_GPR_A0);
break;
/* tail call */
case BPF_JMP | BPF_TAIL_CALL:
mark_tail_call(ctx);
if (emit_bpf_tail_call(ctx) < 0)
return -EINVAL;
break;
/* function return */
case BPF_JMP | BPF_EXIT:
if (i == ctx->prog->len - 1)
break;
jmp_offset = epilogue_offset(ctx);
if (emit_uncond_jmp(ctx, jmp_offset) < 0)
goto toofar;
break;
/* dst = imm64 */
case BPF_LD | BPF_IMM | BPF_DW:
{
const u64 imm64 = (u64)(insn + 1)->imm << 32 | (u32)insn->imm;
move_imm(ctx, dst, imm64, is32);
return 1;
}
/* dst = *(size *)(src + off) */
case BPF_LDX | BPF_MEM | BPF_B:
case BPF_LDX | BPF_MEM | BPF_H:
case BPF_LDX | BPF_MEM | BPF_W:
case BPF_LDX | BPF_MEM | BPF_DW:
case BPF_LDX | BPF_PROBE_MEM | BPF_DW:
case BPF_LDX | BPF_PROBE_MEM | BPF_W:
case BPF_LDX | BPF_PROBE_MEM | BPF_H:
case BPF_LDX | BPF_PROBE_MEM | BPF_B:
/* dst_reg = (s64)*(signed size *)(src_reg + off) */
case BPF_LDX | BPF_MEMSX | BPF_B:
case BPF_LDX | BPF_MEMSX | BPF_H:
case BPF_LDX | BPF_MEMSX | BPF_W:
case BPF_LDX | BPF_PROBE_MEMSX | BPF_B:
case BPF_LDX | BPF_PROBE_MEMSX | BPF_H:
case BPF_LDX | BPF_PROBE_MEMSX | BPF_W:
sign_extend = BPF_MODE(insn->code) == BPF_MEMSX ||
BPF_MODE(insn->code) == BPF_PROBE_MEMSX;
switch (BPF_SIZE(code)) {
case BPF_B:
if (is_signed_imm12(off)) {
if (sign_extend)
emit_insn(ctx, ldb, dst, src, off);
else
emit_insn(ctx, ldbu, dst, src, off);
} else {
move_imm(ctx, t1, off, is32);
if (sign_extend)
emit_insn(ctx, ldxb, dst, src, t1);
else
emit_insn(ctx, ldxbu, dst, src, t1);
}
break;
case BPF_H:
if (is_signed_imm12(off)) {
if (sign_extend)
emit_insn(ctx, ldh, dst, src, off);
else
emit_insn(ctx, ldhu, dst, src, off);
} else {
move_imm(ctx, t1, off, is32);
if (sign_extend)
emit_insn(ctx, ldxh, dst, src, t1);
else
emit_insn(ctx, ldxhu, dst, src, t1);
}
break;
case BPF_W:
if (is_signed_imm12(off)) {
if (sign_extend)
emit_insn(ctx, ldw, dst, src, off);
else
emit_insn(ctx, ldwu, dst, src, off);
} else {
move_imm(ctx, t1, off, is32);
if (sign_extend)
emit_insn(ctx, ldxw, dst, src, t1);
else
emit_insn(ctx, ldxwu, dst, src, t1);
}
break;
case BPF_DW:
move_imm(ctx, t1, off, is32);
emit_insn(ctx, ldxd, dst, src, t1);
break;
}
ret = add_exception_handler(insn, ctx, dst);
if (ret)
return ret;
break;
/* *(size *)(dst + off) = imm */
case BPF_ST | BPF_MEM | BPF_B:
case BPF_ST | BPF_MEM | BPF_H:
case BPF_ST | BPF_MEM | BPF_W:
case BPF_ST | BPF_MEM | BPF_DW:
switch (BPF_SIZE(code)) {
case BPF_B:
move_imm(ctx, t1, imm, is32);
if (is_signed_imm12(off)) {
emit_insn(ctx, stb, t1, dst, off);
} else {
move_imm(ctx, t2, off, is32);
emit_insn(ctx, stxb, t1, dst, t2);
}
break;
case BPF_H:
move_imm(ctx, t1, imm, is32);
if (is_signed_imm12(off)) {
emit_insn(ctx, sth, t1, dst, off);
} else {
move_imm(ctx, t2, off, is32);
emit_insn(ctx, stxh, t1, dst, t2);
}
break;
case BPF_W:
move_imm(ctx, t1, imm, is32);
if (is_signed_imm12(off)) {
emit_insn(ctx, stw, t1, dst, off);
} else if (is_signed_imm14(off)) {
emit_insn(ctx, stptrw, t1, dst, off);
} else {
move_imm(ctx, t2, off, is32);
emit_insn(ctx, stxw, t1, dst, t2);
}
break;
case BPF_DW:
move_imm(ctx, t1, imm, is32);
if (is_signed_imm12(off)) {
emit_insn(ctx, std, t1, dst, off);
} else if (is_signed_imm14(off)) {
emit_insn(ctx, stptrd, t1, dst, off);
} else {
move_imm(ctx, t2, off, is32);
emit_insn(ctx, stxd, t1, dst, t2);
}
break;
}
break;
/* *(size *)(dst + off) = src */
case BPF_STX | BPF_MEM | BPF_B:
case BPF_STX | BPF_MEM | BPF_H:
case BPF_STX | BPF_MEM | BPF_W:
case BPF_STX | BPF_MEM | BPF_DW:
switch (BPF_SIZE(code)) {
case BPF_B:
if (is_signed_imm12(off)) {
emit_insn(ctx, stb, src, dst, off);
} else {
move_imm(ctx, t1, off, is32);
emit_insn(ctx, stxb, src, dst, t1);
}
break;
case BPF_H:
if (is_signed_imm12(off)) {
emit_insn(ctx, sth, src, dst, off);
} else {
move_imm(ctx, t1, off, is32);
emit_insn(ctx, stxh, src, dst, t1);
}
break;
case BPF_W:
if (is_signed_imm12(off)) {
emit_insn(ctx, stw, src, dst, off);
} else if (is_signed_imm14(off)) {
emit_insn(ctx, stptrw, src, dst, off);
} else {
move_imm(ctx, t1, off, is32);
emit_insn(ctx, stxw, src, dst, t1);
}
break;
case BPF_DW:
if (is_signed_imm12(off)) {
emit_insn(ctx, std, src, dst, off);
} else if (is_signed_imm14(off)) {
emit_insn(ctx, stptrd, src, dst, off);
} else {
move_imm(ctx, t1, off, is32);
emit_insn(ctx, stxd, src, dst, t1);
}
break;
}
break;
case BPF_STX | BPF_ATOMIC | BPF_W:
case BPF_STX | BPF_ATOMIC | BPF_DW:
emit_atomic(insn, ctx);
break;
/* Speculation barrier */
case BPF_ST | BPF_NOSPEC:
break;
default:
pr_err("bpf_jit: unknown opcode %02x\n", code);
return -EINVAL;
}
return 0;
toofar:
pr_info_once("bpf_jit: opcode %02x, jump too far\n", code);
return -E2BIG;
}
| [[930, "\t{\n"], [931, "\t\tconst u64 imm64 = (u64)(insn + 1)->imm << 32 | (u32)insn->imm;\n"], [932, "\n"], [935, "\t}\n"]] | [[930, "\t{\n"], [931, "const u64 imm64 = (u64)(insn + 1)->imm << 32 | (u32)insn->imm;"], [932, "\n"], [935, "\t}\n"]] | [
"CVE-2024-26588"
] | [
"CWE-125",
"CWE-119"
] | 68 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [
"insn",
"ctx"
],
"Globals": [],
"Type Execution Declaration": []
} |
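A note on the fix shown in the record pair above: the vulnerable build_insn() evaluated `(insn + 1)->imm` unconditionally at function entry, while the fixed version dereferences the next slot only inside the BPF_LD | BPF_IMM | BPF_DW case, the one opcode guaranteed to occupy two instruction slots. A minimal user-space sketch of the safe pattern (the struct layout is simplified and is not the kernel's struct bpf_insn; for any other opcode, insn can be the last element of insnsi[], so insn + 1 points past the allocation, which is exactly the page fault in the oops quoted above):

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for the kernel's struct bpf_insn. */
struct bpf_insn { uint8_t code; int32_t imm; };

#define BPF_LD_IMM_DW 0x18	/* BPF_LD | BPF_IMM | BPF_DW */

static uint64_t load_imm64(const struct bpf_insn *insn)
{
	/* Safe here: this opcode is defined as a two-slot instruction. */
	return (uint64_t)(uint32_t)insn[1].imm << 32 | (uint32_t)insn[0].imm;
}

int main(void)
{
	struct bpf_insn prog[2] = {
		{ .code = BPF_LD_IMM_DW, .imm = 0x55667788 },	/* low half */
		{ .code = 0,             .imm = 0x11223344 },	/* high half */
	};

	/* Guard before touching insn + 1. */
	if (prog[0].code == BPF_LD_IMM_DW)
		printf("imm64 = 0x%016llx\n",
		       (unsigned long long)load_imm64(prog)); /* 0x1122334455667788 */
	return 0;
}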
69 | linux | https://github.com/torvalds/linux | crypto/ccm.c | 3b30460c5b0ed762be75a004e924ec3f8711e032 | crypto: ccm - move cbcmac input off the stack
Commit f15f05b0a5de ("crypto: ccm - switch to separate cbcmac driver")
refactored the CCM driver to allow separate implementations of the
underlying MAC to be provided by a platform. However, in doing so, it
moved some data from the linear region to the stack, which violates the
SG constraints when the stack is virtually mapped.
So move idata/odata back to the request ctx struct, which we can
reasonably expect to have been allocated using kmalloc() et al.
Reported-by: Johannes Berg <johannes@sipsolutions.net>
Fixes: f15f05b0a5de ("crypto: ccm - switch to separate cbcmac driver")
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> | true | 8be0a748ac85cea79753205e3e8bda1f | crypto_ccm_auth | static int crypto_ccm_auth(struct aead_request *req, struct scatterlist *plain,
unsigned int cryptlen)
{
struct crypto_ccm_req_priv_ctx *pctx = crypto_ccm_reqctx(req);
struct crypto_aead *aead = crypto_aead_reqtfm(req);
struct crypto_ccm_ctx *ctx = crypto_aead_ctx(aead);
AHASH_REQUEST_ON_STACK(ahreq, ctx->mac);
unsigned int assoclen = req->assoclen;
struct scatterlist sg[3];
u8 odata[16];
u8 idata[16];
int ilen, err;
/* format control data for input */
err = format_input(odata, req, cryptlen);
if (err)
goto out;
sg_init_table(sg, 3);
sg_set_buf(&sg[0], odata, 16);
/* format associated data and compute into mac */
if (assoclen) {
ilen = format_adata(idata, assoclen);
sg_set_buf(&sg[1], idata, ilen);
sg_chain(sg, 3, req->src);
} else {
ilen = 0;
sg_chain(sg, 2, req->src);
}
ahash_request_set_tfm(ahreq, ctx->mac);
ahash_request_set_callback(ahreq, pctx->flags, NULL, NULL);
ahash_request_set_crypt(ahreq, sg, NULL, assoclen + ilen + 16);
err = crypto_ahash_init(ahreq);
if (err)
goto out;
err = crypto_ahash_update(ahreq);
if (err)
goto out;
/* we need to pad the MAC input to a round multiple of the block size */
ilen = 16 - (assoclen + ilen) % 16;
if (ilen < 16) {
memset(idata, 0, ilen);
sg_init_table(sg, 2);
sg_set_buf(&sg[0], idata, ilen);
if (plain)
sg_chain(sg, 2, plain);
plain = sg;
cryptlen += ilen;
}
ahash_request_set_crypt(ahreq, plain, pctx->odata, cryptlen);
err = crypto_ahash_finup(ahreq);
out:
return err;
}
| [[186, "\tu8 odata[16];\n"], [187, "\tu8 idata[16];\n"]] | [[186, "u8 odata[16];"], [187, "u8 idata[16];"]] | [
"CVE-2017-8065"
] | [
"CWE-119"
] | 70 | {
"Execution Environment": [
"CONFIG_VMAP_STACK"
],
"Explanation": null,
"External Function": [
"crypto_ccm_reqctx"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": [
"struct crypto_ccm_req_priv_ctx"
]
} |
70 | linux | https://github.com/torvalds/linux | crypto/ccm.c | 3b30460c5b0ed762be75a004e924ec3f8711e032 | crypto: ccm - move cbcmac input off the stack
Commit f15f05b0a5de ("crypto: ccm - switch to separate cbcmac driver")
refactored the CCM driver to allow separate implementations of the
underlying MAC to be provided by a platform. However, in doing so, it
moved some data from the linear region to the stack, which violates the
SG constraints when the stack is virtually mapped.
So move idata/odata back to the request ctx struct, which we can
reasonably expect to have been allocated using kmalloc() et al.
Reported-by: Johannes Berg <johannes@sipsolutions.net>
Fixes: f15f05b0a5de ("crypto: ccm - switch to separate cbcmac driver")
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> | false | fc74134563bc6b3f9a0492a49919c45c | crypto_ccm_auth | static int crypto_ccm_auth(struct aead_request *req, struct scatterlist *plain,
unsigned int cryptlen)
{
struct crypto_ccm_req_priv_ctx *pctx = crypto_ccm_reqctx(req);
struct crypto_aead *aead = crypto_aead_reqtfm(req);
struct crypto_ccm_ctx *ctx = crypto_aead_ctx(aead);
AHASH_REQUEST_ON_STACK(ahreq, ctx->mac);
unsigned int assoclen = req->assoclen;
struct scatterlist sg[3];
u8 *odata = pctx->odata;
u8 *idata = pctx->idata;
int ilen, err;
/* format control data for input */
err = format_input(odata, req, cryptlen);
if (err)
goto out;
sg_init_table(sg, 3);
sg_set_buf(&sg[0], odata, 16);
/* format associated data and compute into mac */
if (assoclen) {
ilen = format_adata(idata, assoclen);
sg_set_buf(&sg[1], idata, ilen);
sg_chain(sg, 3, req->src);
} else {
ilen = 0;
sg_chain(sg, 2, req->src);
}
ahash_request_set_tfm(ahreq, ctx->mac);
ahash_request_set_callback(ahreq, pctx->flags, NULL, NULL);
ahash_request_set_crypt(ahreq, sg, NULL, assoclen + ilen + 16);
err = crypto_ahash_init(ahreq);
if (err)
goto out;
err = crypto_ahash_update(ahreq);
if (err)
goto out;
/* we need to pad the MAC input to a round multiple of the block size */
ilen = 16 - (assoclen + ilen) % 16;
if (ilen < 16) {
memset(idata, 0, ilen);
sg_init_table(sg, 2);
sg_set_buf(&sg[0], idata, ilen);
if (plain)
sg_chain(sg, 2, plain);
plain = sg;
cryptlen += ilen;
}
ahash_request_set_crypt(ahreq, plain, pctx->odata, cryptlen);
err = crypto_ahash_finup(ahreq);
out:
return err;
}
| [[187, "\tu8 *odata = pctx->odata;\n"], [188, "\tu8 *idata = pctx->idata;\n"]] | [[187, "u8 *odata = pctx->odata;"], [188, "u8 *idata = pctx->idata;"]] | [
"CVE-2017-8065"
] | [
"CWE-119"
] | 70 | {
"Execution Environment": [
"CONFIG_VMAP_STACK"
],
"Explanation": null,
"External Function": [
"crypto_ccm_reqctx"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": [
"struct crypto_ccm_req_priv_ctx"
]
} |
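Both crypto_ccm_auth() records above hinge on a scatterlist rule rather than on CCM itself: sg_set_buf() resolves the buffer's virtual address to a struct page via virt_to_page(), which is only defined for linear-map (kmalloc/page-allocator) memory, and with CONFIG_VMAP_STACK=y the kernel stack lives in vmalloc space, so an sg entry pointing at a stack array yields a bogus page. A kernel-style sketch of a defensive check (set_mac_buf is a hypothetical helper, not driver code; with CONFIG_DEBUG_SG enabled, sg_set_buf() itself BUG()s on an invalid address):

#include <linux/mm.h>
#include <linux/scatterlist.h>

/* Hypothetical wrapper: reject buffers the linear map cannot resolve,
 * e.g. arrays on a vmalloc'ed (CONFIG_VMAP_STACK) kernel stack. */
static void set_mac_buf(struct scatterlist *sg, void *buf, unsigned int len)
{
	WARN_ON(is_vmalloc_addr(buf) || !virt_addr_valid(buf));
	sg_set_buf(sg, buf, len);
}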
71 | linux | https://github.com/torvalds/linux | virt/kvm/iommu.c | 3d32e4dbe71374a6780eaf51d719d76f9a9bf22f | kvm: fix excessive pages un-pinning in kvm_iommu_map error path.
The third parameter of kvm_unpin_pages() when called from
kvm_iommu_map_pages() is wrong, it should be the number of pages to un-pin
and not the page size.
This error was facilitated by an inconsistent API: kvm_pin_pages() takes
a size, but kvm_unpin_pages() takes a number of pages, so fix the problem
by matching the two.
This was introduced by commit 350b8bd ("kvm: iommu: fix the third parameter
of kvm_iommu_put_pages (CVE-2014-3601)"), which fixed the lack of
un-pinning for pages intended to be un-pinned (i.e. a memory leak) but
unfortunately potentially increased the number of pages we un-pin that
should have stayed pinned. As far as I understand though, the same
practical mitigations apply.
This issue was found during review of Red Hat 6.6 patches to prepare
Ksplice rebootless updates.
Thanks to Vegard for his time on a late Friday evening to help me in
understanding this code.
Fixes: 350b8bd ("kvm: iommu: fix the third parameter of... (CVE-2014-3601)")
Cc: stable@vger.kernel.org
Signed-off-by: Quentin Casasnovas <quentin.casasnovas@oracle.com>
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Signed-off-by: Jamie Iles <jamie.iles@oracle.com>
Reviewed-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> | true | 025055e371ed3d112a45d2a56ee88c2a | kvm_pin_pages | static pfn_t kvm_pin_pages(struct kvm_memory_slot *slot, gfn_t gfn,
unsigned long size)
{
gfn_t end_gfn;
pfn_t pfn;
pfn = gfn_to_pfn_memslot(slot, gfn);
end_gfn = gfn + (size >> PAGE_SHIFT);
gfn += 1;
if (is_error_noslot_pfn(pfn))
return pfn;
while (gfn < end_gfn)
gfn_to_pfn_memslot(slot, gfn++);
return pfn;
}
| [[46, "\t\t\t unsigned long size)\n"], [52, "\tend_gfn = gfn + (size >> PAGE_SHIFT);\n"]] | [[45, "static pfn_t kvm_pin_pages(struct kvm_memory_slot *slot, gfn_t gfn,\n\t\t\t unsigned long size)"], [52, "end_gfn = gfn + (size >> PAGE_SHIFT);"]] | [
"CVE-2014-8369"
] | [
"CWE-119"
] | 73 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"gfn_to_pfn_memslot",
"is_error_noslot_pfn"
],
"Function Argument": [],
"Globals": [
"PAGE_SHIFT"
],
"Type Execution Declaration": [
"gfn_t",
"pfn_t"
]
} |
72 | linux | https://github.com/torvalds/linux | virt/kvm/iommu.c | 3d32e4dbe71374a6780eaf51d719d76f9a9bf22f | kvm: fix excessive pages un-pinning in kvm_iommu_map error path.
The third parameter of kvm_unpin_pages() when called from
kvm_iommu_map_pages() is wrong, it should be the number of pages to un-pin
and not the page size.
This error was facilitated by an inconsistent API: kvm_pin_pages() takes
a size, but kvm_unpin_pages() takes a number of pages, so fix the problem
by matching the two.
This was introduced by commit 350b8bd ("kvm: iommu: fix the third parameter
of kvm_iommu_put_pages (CVE-2014-3601)"), which fixed the lack of
un-pinning for pages intended to be un-pinned (i.e. a memory leak) but
unfortunately potentially increased the number of pages we un-pin that
should have stayed pinned. As far as I understand though, the same
practical mitigations apply.
This issue was found during review of Red Hat 6.6 patches to prepare
Ksplice rebootless updates.
Thanks to Vegard for his time on a late Friday evening to help me in
understanding this code.
Fixes: 350b8bd ("kvm: iommu: fix the third parameter of... (CVE-2014-3601)")
Cc: stable@vger.kernel.org
Signed-off-by: Quentin Casasnovas <quentin.casasnovas@oracle.com>
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Signed-off-by: Jamie Iles <jamie.iles@oracle.com>
Reviewed-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> | true | a0427e269f02ecb7a6151653d483872c | kvm_iommu_map_pages | int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot)
{
gfn_t gfn, end_gfn;
pfn_t pfn;
int r = 0;
struct iommu_domain *domain = kvm->arch.iommu_domain;
int flags;
/* check if iommu exists and in use */
if (!domain)
return 0;
gfn = slot->base_gfn;
end_gfn = gfn + slot->npages;
flags = IOMMU_READ;
if (!(slot->flags & KVM_MEM_READONLY))
flags |= IOMMU_WRITE;
if (!kvm->arch.iommu_noncoherent)
flags |= IOMMU_CACHE;
while (gfn < end_gfn) {
unsigned long page_size;
/* Check if already mapped */
if (iommu_iova_to_phys(domain, gfn_to_gpa(gfn))) {
gfn += 1;
continue;
}
/* Get the page size we could use to map */
page_size = kvm_host_page_size(kvm, gfn);
/* Make sure the page_size does not exceed the memslot */
while ((gfn + (page_size >> PAGE_SHIFT)) > end_gfn)
page_size >>= 1;
/* Make sure gfn is aligned to the page size we want to map */
while ((gfn << PAGE_SHIFT) & (page_size - 1))
page_size >>= 1;
/* Make sure hva is aligned to the page size we want to map */
while (__gfn_to_hva_memslot(slot, gfn) & (page_size - 1))
page_size >>= 1;
/*
* Pin all pages we are about to map in memory. This is
* important because we unmap and unpin in 4kb steps later.
*/
pfn = kvm_pin_pages(slot, gfn, page_size);
if (is_error_noslot_pfn(pfn)) {
gfn += 1;
continue;
}
/* Map into IO address space */
r = iommu_map(domain, gfn_to_gpa(gfn), pfn_to_hpa(pfn),
page_size, flags);
if (r) {
printk(KERN_ERR "kvm_iommu_map_address:"
"iommu failed to map pfn=%llx\n", pfn);
kvm_unpin_pages(kvm, pfn, page_size);
goto unmap_pages;
}
gfn += page_size >> PAGE_SHIFT;
}
return 0;
unmap_pages:
kvm_iommu_put_pages(kvm, slot->base_gfn, gfn - slot->base_gfn);
return r;
}
| [[122, "\t\tpfn = kvm_pin_pages(slot, gfn, page_size);\n"], [134, "\t\t\tkvm_unpin_pages(kvm, pfn, page_size);\n"]] | [[122, "pfn = kvm_pin_pages(slot, gfn, page_size);"], [134, "kvm_unpin_pages(kvm, pfn, page_size);"]] | [
"CVE-2014-8369"
] | [
"CWE-119"
] | 74 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"kvm_pin_pages",
"kvm_unpin_pages"
],
"Function Argument": [],
"Globals": [
"PAGE_SHIFT"
],
"Type Execution Declaration": []
} |
73 | linux | https://github.com/torvalds/linux | virt/kvm/iommu.c | 3d32e4dbe71374a6780eaf51d719d76f9a9bf22f | kvm: fix excessive pages un-pinning in kvm_iommu_map error path.
The third parameter of kvm_unpin_pages() when called from
kvm_iommu_map_pages() is wrong, it should be the number of pages to un-pin
and not the page size.
This error was facilitated by an inconsistent API: kvm_pin_pages() takes
a size, but kvm_unpin_pages() takes a number of pages, so fix the problem
by matching the two.
This was introduced by commit 350b8bd ("kvm: iommu: fix the third parameter
of kvm_iommu_put_pages (CVE-2014-3601)"), which fixed the lack of
un-pinning for pages intended to be un-pinned (i.e. a memory leak) but
unfortunately potentially increased the number of pages we un-pin that
should have stayed pinned. As far as I understand though, the same
practical mitigations apply.
This issue was found during review of Red Hat 6.6 patches to prepare
Ksplice rebootless updates.
Thanks to Vegard for his time on a late Friday evening to help me in
understanding this code.
Fixes: 350b8bd ("kvm: iommu: fix the third parameter of... (CVE-2014-3601)")
Cc: stable@vger.kernel.org
Signed-off-by: Quentin Casasnovas <quentin.casasnovas@oracle.com>
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Signed-off-by: Jamie Iles <jamie.iles@oracle.com>
Reviewed-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> | false | ba8be0d9d9f3ec211da15b296457e183 | kvm_pin_pages | static pfn_t kvm_pin_pages(struct kvm_memory_slot *slot, gfn_t gfn,
unsigned long npages)
{
gfn_t end_gfn;
pfn_t pfn;
pfn = gfn_to_pfn_memslot(slot, gfn);
end_gfn = gfn + npages;
gfn += 1;
if (is_error_noslot_pfn(pfn))
return pfn;
while (gfn < end_gfn)
gfn_to_pfn_memslot(slot, gfn++);
return pfn;
}
| [[46, "\t\t\t unsigned long npages)\n"], [52, "\tend_gfn = gfn + npages;\n"]] | [[45, "static pfn_t kvm_pin_pages(struct kvm_memory_slot *slot, gfn_t gfn,\n\t\t\t unsigned long npages)"], [52, "end_gfn = gfn + npages;"]] | [
"CVE-2014-8369"
] | [
"CWE-119"
] | 73 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"gfn_to_pfn_memslot",
"is_error_noslot_pfn"
],
"Function Argument": [],
"Globals": [
"PAGE_SHIFT"
],
"Type Execution Declaration": [
"gfn_t",
"pfn_t"
]
} |
74 | linux | https://github.com/torvalds/linux | virt/kvm/iommu.c | 3d32e4dbe71374a6780eaf51d719d76f9a9bf22f | kvm: fix excessive pages un-pinning in kvm_iommu_map error path.
The third parameter of kvm_unpin_pages() when called from
kvm_iommu_map_pages() is wrong, it should be the number of pages to un-pin
and not the page size.
This error was facilitated by an inconsistent API: kvm_pin_pages() takes
a size, but kvm_unpin_pages() takes a number of pages, so fix the problem
by matching the two.
This was introduced by commit 350b8bd ("kvm: iommu: fix the third parameter
of kvm_iommu_put_pages (CVE-2014-3601)"), which fixed the lack of
un-pinning for pages intended to be un-pinned (i.e. a memory leak) but
unfortunately potentially increased the number of pages we un-pin that
should have stayed pinned. As far as I understand though, the same
practical mitigations apply.
This issue was found during review of Red Hat 6.6 patches to prepare
Ksplice rebootless updates.
Thanks to Vegard for his time on a late Friday evening to help me in
understanding this code.
Fixes: 350b8bd ("kvm: iommu: fix the third parameter of... (CVE-2014-3601)")
Cc: stable@vger.kernel.org
Signed-off-by: Quentin Casasnovas <quentin.casasnovas@oracle.com>
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Signed-off-by: Jamie Iles <jamie.iles@oracle.com>
Reviewed-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> | false | 39fe2d1da52c0b34e9ae57deacceb7b5 | kvm_iommu_map_pages | int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot)
{
gfn_t gfn, end_gfn;
pfn_t pfn;
int r = 0;
struct iommu_domain *domain = kvm->arch.iommu_domain;
int flags;
/* check if iommu exists and in use */
if (!domain)
return 0;
gfn = slot->base_gfn;
end_gfn = gfn + slot->npages;
flags = IOMMU_READ;
if (!(slot->flags & KVM_MEM_READONLY))
flags |= IOMMU_WRITE;
if (!kvm->arch.iommu_noncoherent)
flags |= IOMMU_CACHE;
while (gfn < end_gfn) {
unsigned long page_size;
/* Check if already mapped */
if (iommu_iova_to_phys(domain, gfn_to_gpa(gfn))) {
gfn += 1;
continue;
}
/* Get the page size we could use to map */
page_size = kvm_host_page_size(kvm, gfn);
/* Make sure the page_size does not exceed the memslot */
while ((gfn + (page_size >> PAGE_SHIFT)) > end_gfn)
page_size >>= 1;
/* Make sure gfn is aligned to the page size we want to map */
while ((gfn << PAGE_SHIFT) & (page_size - 1))
page_size >>= 1;
/* Make sure hva is aligned to the page size we want to map */
while (__gfn_to_hva_memslot(slot, gfn) & (page_size - 1))
page_size >>= 1;
/*
* Pin all pages we are about to map in memory. This is
* important because we unmap and unpin in 4kb steps later.
*/
pfn = kvm_pin_pages(slot, gfn, page_size >> PAGE_SHIFT);
if (is_error_noslot_pfn(pfn)) {
gfn += 1;
continue;
}
/* Map into IO address space */
r = iommu_map(domain, gfn_to_gpa(gfn), pfn_to_hpa(pfn),
page_size, flags);
if (r) {
printk(KERN_ERR "kvm_iommu_map_address:"
"iommu failed to map pfn=%llx\n", pfn);
kvm_unpin_pages(kvm, pfn, page_size >> PAGE_SHIFT);
goto unmap_pages;
}
gfn += page_size >> PAGE_SHIFT;
}
return 0;
unmap_pages:
kvm_iommu_put_pages(kvm, slot->base_gfn, gfn - slot->base_gfn);
return r;
}
| [[122, "\t\tpfn = kvm_pin_pages(slot, gfn, page_size >> PAGE_SHIFT);\n"], [134, "\t\t\tkvm_unpin_pages(kvm, pfn, page_size >> PAGE_SHIFT);\n"]] | [[122, "pfn = kvm_pin_pages(slot, gfn, page_size >> PAGE_SHIFT);"], [134, "kvm_unpin_pages(kvm, pfn, page_size >> PAGE_SHIFT);"]] | [
"CVE-2014-8369"
] | [
"CWE-119"
] | 74 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"kvm_pin_pages",
"kvm_unpin_pages"
],
"Function Argument": [],
"Globals": [
"PAGE_SHIFT"
],
"Type Execution Declaration": []
} |
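The four kvm records above reduce to a unit mismatch: kvm_pin_pages() counted pages while the map error path handed kvm_unpin_pages() a byte size. A worked example of the magnitude, assuming PAGE_SHIFT = 12 (4 KiB base pages): one failed 2 MiB mapping would drop 2097152 page references instead of the 512 that were actually pinned.

#include <stdio.h>

#define PAGE_SHIFT 12	/* assumed 4 KiB base pages */

int main(void)
{
	unsigned long page_size = 2UL << 20;	/* one 2 MiB huge mapping */

	/* Buggy error path passed the byte size:
	 *   kvm_unpin_pages(kvm, pfn, page_size);               */
	printf("buggy unpin count: %lu pages\n", page_size);
	/* Fixed error path passes the page count:
	 *   kvm_unpin_pages(kvm, pfn, page_size >> PAGE_SHIFT); */
	printf("fixed unpin count: %lu pages\n", page_size >> PAGE_SHIFT);
	return 0;
}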
75 | linux | https://github.com/torvalds/linux | drivers/media/usb/dvb-usb/cxusb.c | 3f190e3aec212fc8c61e202c51400afa7384d4bc | [media] cxusb: Use a dma capable buffer also for reading
Commit 17ce039b4e54 ("[media] cxusb: don't do DMA on stack")
added a kmalloc'ed bounce buffer for writes, but failed to do the same
for reads. As the read only happens after the write is finished, we can
reuse the same buffer.
As dvb_usb_generic_rw handles a read length of 0 by itself, avoid calling
it through the dvb_usb_generic_read wrapper function.
Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com> | true | 77bd1536c3fe6c3a9de9751f3f548310 | cxusb_ctrl_msg | static int cxusb_ctrl_msg(struct dvb_usb_device *d,
u8 cmd, u8 *wbuf, int wlen, u8 *rbuf, int rlen)
{
struct cxusb_state *st = d->priv;
int ret, wo;
if (1 + wlen > MAX_XFER_SIZE) {
warn("i2c wr: len=%d is too big!\n", wlen);
return -EOPNOTSUPP;
}
wo = (rbuf == NULL || rlen == 0); /* write-only */
mutex_lock(&d->data_mutex);
st->data[0] = cmd;
memcpy(&st->data[1], wbuf, wlen);
if (wo)
ret = dvb_usb_generic_write(d, st->data, 1 + wlen);
else
ret = dvb_usb_generic_rw(d, st->data, 1 + wlen,
rbuf, rlen, 0);
mutex_unlock(&d->data_mutex);
return ret;
}
| [[62, "\tint ret, wo;\n"], [69, "\two = (rbuf == NULL || rlen == 0); /* write-only */\n"], [74, "\tif (wo)\n"], [75, "\t\tret = dvb_usb_generic_write(d, st->data, 1 + wlen);\n"], [76, "\telse\n"], [77, "\t\tret = dvb_usb_generic_rw(d, st->data, 1 + wlen,\n"], [78, "\t\t\t\t\t rbuf, rlen, 0);\n"]] | [[62, "int ret, wo;"], [69, "\two = (rbuf == NULL || rlen == 0); /* write-only */\n"], [74, "if (wo)"], [75, "ret = dvb_usb_generic_write(d, st->data, 1 + wlen);"], [76, "else"], [77, "ret = dvb_usb_generic_rw(d, st->data, 1 + wlen,\n\t\t\t\t\t rbuf, rlen, 0);"]] | [
"CVE-2017-8063"
] | [
"CWE-119"
] | 76 | {
"Execution Environment": [
"CONFIG_VMAP_STACK"
],
"Explanation": null,
"External Function": [
"dvb_usb_generic_write",
"dvb_usb_generic_rw"
],
"Function Argument": [
"d"
],
"Globals": [
"MAX_XFER_SIZE"
],
"Type Execution Declaration": [
"struct dvb_usb_device",
"struct cxusb_state"
]
} |
76 | linux | https://github.com/torvalds/linux | drivers/media/usb/dvb-usb/cxusb.c | 3f190e3aec212fc8c61e202c51400afa7384d4bc | [media] cxusb: Use a dma capable buffer also for reading
Commit 17ce039b4e54 ("[media] cxusb: don't do DMA on stack")
added a kmalloc'ed bounce buffer for writes, but failed to do the same
for reads. As the read only happens after the write is finished, we can
reuse the same buffer.
As dvb_usb_generic_rw handles a read length of 0 by itself, avoid calling
it through the dvb_usb_generic_read wrapper function.
Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com> | false | 5e2dd1cfc640cb73671b7b85a6699623 | cxusb_ctrl_msg | static int cxusb_ctrl_msg(struct dvb_usb_device *d,
u8 cmd, u8 *wbuf, int wlen, u8 *rbuf, int rlen)
{
struct cxusb_state *st = d->priv;
int ret;
if (1 + wlen > MAX_XFER_SIZE) {
warn("i2c wr: len=%d is too big!\n", wlen);
return -EOPNOTSUPP;
}
if (rlen > MAX_XFER_SIZE) {
warn("i2c rd: len=%d is too big!\n", rlen);
return -EOPNOTSUPP;
}
mutex_lock(&d->data_mutex);
st->data[0] = cmd;
memcpy(&st->data[1], wbuf, wlen);
ret = dvb_usb_generic_rw(d, st->data, 1 + wlen, st->data, rlen, 0);
if (!ret && rbuf && rlen)
memcpy(rbuf, st->data, rlen);
mutex_unlock(&d->data_mutex);
return ret;
}
| [[62, "\tint ret;\n"], [69, "\tif (rlen > MAX_XFER_SIZE) {\n"], [70, "\t\twarn(\"i2c rd: len=%d is too big!\\n\", rlen);\n"], [71, "\t\treturn -EOPNOTSUPP;\n"], [72, "\t}\n"], [77, "\tret = dvb_usb_generic_rw(d, st->data, 1 + wlen, st->data, rlen, 0);\n"], [78, "\tif (!ret && rbuf && rlen)\n"], [79, "\t\tmemcpy(rbuf, st->data, rlen);\n"]] | [[62, "int ret;"], [69, "if (rlen > MAX_XFER_SIZE)"], [70, "warn(\"i2c rd: len=%d is too big!\\n\", rlen);"], [71, "return -EOPNOTSUPP;"], [72, "\t}\n"], [77, "ret = dvb_usb_generic_rw(d, st->data, 1 + wlen, st->data, rlen, 0);"], [78, "if (!ret && rbuf && rlen)"], [79, "memcpy(rbuf, st->data, rlen);"]] | [
"CVE-2017-8063"
] | [
"CWE-119"
] | 76 | {
"Execution Environment": [
"CONFIG_VMAP_STACK"
],
"Explanation": null,
"External Function": [
"dvb_usb_generic_write",
"dvb_usb_generic_rw"
],
"Function Argument": [
"d"
],
"Globals": [
"MAX_XFER_SIZE"
],
"Type Execution Declaration": [
"struct dvb_usb_device",
"struct cxusb_state"
]
} |
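The two cxusb_ctrl_msg() records above follow one bounce-buffer discipline: caller-supplied wbuf/rbuf may sit on a vmalloc'ed stack, so they are never handed to USB/DMA directly, and since the read only starts after the write completes, a single kmalloc'ed buffer can serve both directions. A condensed sketch of the pattern with hypothetical names (dev_state, BOUNCE_SIZE and do_usb_rw stand in for the driver's real state, size limit and transport):

#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/string.h>
#include <linux/types.h>

#define BOUNCE_SIZE 80	/* assumption, in the spirit of MAX_XFER_SIZE */

struct dev_state {			/* hypothetical device state */
	struct mutex lock;
	u8 bounce[BOUNCE_SIZE];		/* allocated with the device, DMA-safe */
};

/* Assumed transport primitive standing in for dvb_usb_generic_rw(). */
int do_usb_rw(struct dev_state *st, u8 *wbuf, int wlen, u8 *rbuf, int rlen);

static int ctrl_xfer(struct dev_state *st, u8 cmd,
		     const u8 *wbuf, int wlen, u8 *rbuf, int rlen)
{
	int ret;

	if (1 + wlen > BOUNCE_SIZE || rlen > BOUNCE_SIZE)
		return -EOPNOTSUPP;	/* bound-check both directions */

	mutex_lock(&st->lock);
	st->bounce[0] = cmd;			/* copy in: caller -> bounce */
	memcpy(&st->bounce[1], wbuf, wlen);
	ret = do_usb_rw(st, st->bounce, 1 + wlen, st->bounce, rlen);
	if (!ret && rbuf && rlen)
		memcpy(rbuf, st->bounce, rlen);	/* copy out: bounce -> caller */
	mutex_unlock(&st->lock);

	return ret;
}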
77 | linux | https://github.com/torvalds/linux | drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c | 412b65d15a7f8a93794653968308fc100f2aa87c | net: hns: fix ethtool_get_strings overflow in hns driver
hns_get_sset_count() returns HNS_NET_STATS_CNT and the data space allocated
is not enough for ethtool_get_strings(), which will cause random memory
corruption.
When SLAB and DEBUG_SLAB are both enabled, memory corruptions like
the following can be observed without this patch:
[ 43.115200] Slab corruption (Not tainted): Acpi-ParseExt start=ffff801fb0b69030, len=80
[ 43.115206] Redzone: 0x9f911029d006462/0x5f78745f31657070.
[ 43.115208] Last user: [<5f7272655f746b70>](0x5f7272655f746b70)
[ 43.115214] 010: 70 70 65 31 5f 74 78 5f 70 6b 74 00 6b 6b 6b 6b ppe1_tx_pkt.kkkk
[ 43.115217] 030: 70 70 65 31 5f 74 78 5f 70 6b 74 5f 6f 6b 00 6b ppe1_tx_pkt_ok.k
[ 43.115218] Next obj: start=ffff801fb0b69098, len=80
[ 43.115220] Redzone: 0x706d655f6f666966/0x9f911029d74e35b.
[ 43.115229] Last user: [<ffff0000084b11b0>](acpi_os_release_object+0x28/0x38)
[ 43.115231] 000: 74 79 00 6b 6b 6b 6b 6b 70 70 65 31 5f 74 78 5f ty.kkkkkppe1_tx_
[ 43.115232] 010: 70 6b 74 5f 65 72 72 5f 63 73 75 6d 5f 66 61 69 pkt_err_csum_fai
Signed-off-by: Timmy Li <lixiaoping3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | true | 81976ee8c5029d5374310f4f35fc5cf2 | hns_gmac_get_sset_count | static int hns_gmac_get_sset_count(int stringset)
{
if (stringset == ETH_SS_STATS)
return ARRAY_SIZE(g_gmac_stats_string);
return 0;
}
| [[669, "\tif (stringset == ETH_SS_STATS)\n"]] | [[669, "if (stringset == ETH_SS_STATS)"]] | [
"CVE-2017-18222"
] | [
"CWE-119"
] | 78 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [
"g_gmac_stats_string"
],
"Type Execution Declaration": [
"ETH_SS_STATS",
"ETH_SS_PRIV_FLAGS"
]
} |
78 | linux | https://github.com/torvalds/linux | drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c | 412b65d15a7f8a93794653968308fc100f2aa87c | net: hns: fix ethtool_get_strings overflow in hns driver
hns_get_sset_count() returns HNS_NET_STATS_CNT and the data space allocated
is not enough for ethtool_get_strings(), which will cause random memory
corruption.
When SLAB and DEBUG_SLAB are both enabled, memory corruptions like
the following can be observed without this patch:
[ 43.115200] Slab corruption (Not tainted): Acpi-ParseExt start=ffff801fb0b69030, len=80
[ 43.115206] Redzone: 0x9f911029d006462/0x5f78745f31657070.
[ 43.115208] Last user: [<5f7272655f746b70>](0x5f7272655f746b70)
[ 43.115214] 010: 70 70 65 31 5f 74 78 5f 70 6b 74 00 6b 6b 6b 6b ppe1_tx_pkt.kkkk
[ 43.115217] 030: 70 70 65 31 5f 74 78 5f 70 6b 74 5f 6f 6b 00 6b ppe1_tx_pkt_ok.k
[ 43.115218] Next obj: start=ffff801fb0b69098, len=80
[ 43.115220] Redzone: 0x706d655f6f666966/0x9f911029d74e35b.
[ 43.115229] Last user: [<ffff0000084b11b0>](acpi_os_release_object+0x28/0x38)
[ 43.115231] 000: 74 79 00 6b 6b 6b 6b 6b 70 70 65 31 5f 74 78 5f ty.kkkkkppe1_tx_
[ 43.115232] 010: 70 6b 74 5f 65 72 72 5f 63 73 75 6d 5f 66 61 69 pkt_err_csum_fai
Signed-off-by: Timmy Li <lixiaoping3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | false | 79daacf53ed59201efb9d51805defa72 | hns_gmac_get_sset_count | static int hns_gmac_get_sset_count(int stringset)
{
if (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)
return ARRAY_SIZE(g_gmac_stats_string);
return 0;
}
| [[669, "\tif (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)\n"]] | [[669, "if (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)"]] | [
"CVE-2017-18222"
] | [
"CWE-119"
] | 78 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [
"g_gmac_stats_string"
],
"Type Execution Declaration": [
"ETH_SS_STATS",
"ETH_SS_PRIV_FLAGS"
]
} |
79 | linux | https://github.com/torvalds/linux | drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c | 412b65d15a7f8a93794653968308fc100f2aa87c | net: hns: fix ethtool_get_strings overflow in hns driver
hns_get_sset_count() returns HNS_NET_STATS_CNT and the data space allocated
is not enough for ethtool_get_strings(), which will cause random memory
corruption.
When SLAB and DEBUG_SLAB are both enabled, memory corruptions like
the following can be observed without this patch:
[ 43.115200] Slab corruption (Not tainted): Acpi-ParseExt start=ffff801fb0b69030, len=80
[ 43.115206] Redzone: 0x9f911029d006462/0x5f78745f31657070.
[ 43.115208] Last user: [<5f7272655f746b70>](0x5f7272655f746b70)
[ 43.115214] 010: 70 70 65 31 5f 74 78 5f 70 6b 74 00 6b 6b 6b 6b ppe1_tx_pkt.kkkk
[ 43.115217] 030: 70 70 65 31 5f 74 78 5f 70 6b 74 5f 6f 6b 00 6b ppe1_tx_pkt_ok.k
[ 43.115218] Next obj: start=ffff801fb0b69098, len=80
[ 43.115220] Redzone: 0x706d655f6f666966/0x9f911029d74e35b.
[ 43.115229] Last user: [<ffff0000084b11b0>](acpi_os_release_object+0x28/0x38)
[ 43.115231] 000: 74 79 00 6b 6b 6b 6b 6b 70 70 65 31 5f 74 78 5f ty.kkkkkppe1_tx_
[ 43.115232] 010: 70 6b 74 5f 65 72 72 5f 63 73 75 6d 5f 66 61 69 pkt_err_csum_fai
Signed-off-by: Timmy Li <lixiaoping3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | true | b59067e2d17a1f9970488c10efed433f | hns_ppe_get_sset_count | int hns_ppe_get_sset_count(int stringset)
{
if (stringset == ETH_SS_STATS)
return ETH_PPE_STATIC_NUM;
return 0;
}
| [[425, "\tif (stringset == ETH_SS_STATS)\n"]] | [[425, "if (stringset == ETH_SS_STATS)"]] | [
"CVE-2017-18222"
] | [
"CWE-119"
] | 80 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [
"ETH_SS_STATS",
"ETH_PPE_STATIC_NUM",
"ETH_SS_PRIV_FLAGS"
],
"Type Execution Declaration": []
} |
80 | linux | https://github.com/torvalds/linux | drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c | 412b65d15a7f8a93794653968308fc100f2aa87c | net: hns: fix ethtool_get_strings overflow in hns driver
hns_get_sset_count() returns HNS_NET_STATS_CNT and the data space allocated
is not enough for ethtool_get_strings(), which will cause random memory
corruption.
When SLAB and DEBUG_SLAB are both enabled, memory corruptions like
the following can be observed without this patch:
[ 43.115200] Slab corruption (Not tainted): Acpi-ParseExt start=ffff801fb0b69030, len=80
[ 43.115206] Redzone: 0x9f911029d006462/0x5f78745f31657070.
[ 43.115208] Last user: [<5f7272655f746b70>](0x5f7272655f746b70)
[ 43.115214] 010: 70 70 65 31 5f 74 78 5f 70 6b 74 00 6b 6b 6b 6b ppe1_tx_pkt.kkkk
[ 43.115217] 030: 70 70 65 31 5f 74 78 5f 70 6b 74 5f 6f 6b 00 6b ppe1_tx_pkt_ok.k
[ 43.115218] Next obj: start=ffff801fb0b69098, len=80
[ 43.115220] Redzone: 0x706d655f6f666966/0x9f911029d74e35b.
[ 43.115229] Last user: [<ffff0000084b11b0>](acpi_os_release_object+0x28/0x38)
[ 43.115231] 000: 74 79 00 6b 6b 6b 6b 6b 70 70 65 31 5f 74 78 5f ty.kkkkkppe1_tx_
[ 43.115232] 010: 70 6b 74 5f 65 72 72 5f 63 73 75 6d 5f 66 61 69 pkt_err_csum_fai
Signed-off-by: Timmy Li <lixiaoping3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | false | 9053b2b06696eb0116dd4c9dafae5f6d | hns_ppe_get_sset_count | int hns_ppe_get_sset_count(int stringset)
{
if (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)
return ETH_PPE_STATIC_NUM;
return 0;
}
| [[425, "\tif (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)\n"]] | [[425, "if (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)"]] | [
"CVE-2017-18222"
] | [
"CWE-119"
] | 80 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [
"ETH_SS_STATS",
"ETH_PPE_STATIC_NUM",
"ETH_SS_PRIV_FLAGS"
],
"Type Execution Declaration": []
} |
81 | linux | https://github.com/torvalds/linux | drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c | 412b65d15a7f8a93794653968308fc100f2aa87c | net: hns: fix ethtool_get_strings overflow in hns driver
hns_get_sset_count() returns HNS_NET_STATS_CNT and the data space allocated
is not enough for ethtool_get_strings(), which will cause random memory
corruption.
When SLAB and DEBUG_SLAB are both enabled, memory corruptions like
the following can be observed without this patch:
[ 43.115200] Slab corruption (Not tainted): Acpi-ParseExt start=ffff801fb0b69030, len=80
[ 43.115206] Redzone: 0x9f911029d006462/0x5f78745f31657070.
[ 43.115208] Last user: [<5f7272655f746b70>](0x5f7272655f746b70)
[ 43.115214] 010: 70 70 65 31 5f 74 78 5f 70 6b 74 00 6b 6b 6b 6b ppe1_tx_pkt.kkkk
[ 43.115217] 030: 70 70 65 31 5f 74 78 5f 70 6b 74 5f 6f 6b 00 6b ppe1_tx_pkt_ok.k
[ 43.115218] Next obj: start=ffff801fb0b69098, len=80
[ 43.115220] Redzone: 0x706d655f6f666966/0x9f911029d74e35b.
[ 43.115229] Last user: [<ffff0000084b11b0>](acpi_os_release_object+0x28/0x38)
[ 43.115231] 000: 74 79 00 6b 6b 6b 6b 6b 70 70 65 31 5f 74 78 5f ty.kkkkkppe1_tx_
[ 43.115232] 010: 70 6b 74 5f 65 72 72 5f 63 73 75 6d 5f 66 61 69 pkt_err_csum_fai
Signed-off-by: Timmy Li <lixiaoping3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | true | 35ff2b49ab41e4c4f4abee26d65269ca | hns_rcb_get_ring_sset_count | int hns_rcb_get_ring_sset_count(int stringset)
{
if (stringset == ETH_SS_STATS)
return HNS_RING_STATIC_REG_NUM;
return 0;
}
| [[879, "\tif (stringset == ETH_SS_STATS)\n"]] | [[879, "if (stringset == ETH_SS_STATS)"]] | [
"CVE-2017-18222"
] | [
"CWE-119"
] | 82 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [
"ETH_SS_STATS",
"HNS_RING_STATIC_REG_NUM",
"ETH_SS_PRIV_FLAGS"
],
"Type Execution Declaration": []
} |
82 | linux | https://github.com/torvalds/linux | drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c | 412b65d15a7f8a93794653968308fc100f2aa87c | net: hns: fix ethtool_get_strings overflow in hns driver
hns_get_sset_count() returns HNS_NET_STATS_CNT and the data space allocated
is not enough for ethtool_get_strings(), which will cause random memory
corruption.
When SLAB and DEBUG_SLAB are both enabled, memory corruptions like
the following can be observed without this patch:
[ 43.115200] Slab corruption (Not tainted): Acpi-ParseExt start=ffff801fb0b69030, len=80
[ 43.115206] Redzone: 0x9f911029d006462/0x5f78745f31657070.
[ 43.115208] Last user: [<5f7272655f746b70>](0x5f7272655f746b70)
[ 43.115214] 010: 70 70 65 31 5f 74 78 5f 70 6b 74 00 6b 6b 6b 6b ppe1_tx_pkt.kkkk
[ 43.115217] 030: 70 70 65 31 5f 74 78 5f 70 6b 74 5f 6f 6b 00 6b ppe1_tx_pkt_ok.k
[ 43.115218] Next obj: start=ffff801fb0b69098, len=80
[ 43.115220] Redzone: 0x706d655f6f666966/0x9f911029d74e35b.
[ 43.115229] Last user: [<ffff0000084b11b0>](acpi_os_release_object+0x28/0x38)
[ 43.115231] 000: 74 79 00 6b 6b 6b 6b 6b 70 70 65 31 5f 74 78 5f ty.kkkkkppe1_tx_
[ 43.115232] 010: 70 6b 74 5f 65 72 72 5f 63 73 75 6d 5f 66 61 69 pkt_err_csum_fai
Signed-off-by: Timmy Li <lixiaoping3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | false | f7b732abd3325dd0a1cc35264c521e37 | hns_rcb_get_ring_sset_count | int hns_rcb_get_ring_sset_count(int stringset)
{
if (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)
return HNS_RING_STATIC_REG_NUM;
return 0;
}
| [[879, "\tif (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)\n"]] | [[879, "if (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)"]] | [
"CVE-2017-18222"
] | [
"CWE-119"
] | 82 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [
"ETH_SS_STATS",
"HNS_RING_STATIC_REG_NUM",
"ETH_SS_PRIV_FLAGS"
],
"Type Execution Declaration": []
} |
83 | linux | https://github.com/torvalds/linux | drivers/net/ethernet/hisilicon/hns/hns_dsaf_xgmac.c | 412b65d15a7f8a93794653968308fc100f2aa87c | net: hns: fix ethtool_get_strings overflow in hns driver
hns_get_sset_count() returns HNS_NET_STATS_CNT and the data space allocated
is not enough for ethtool_get_strings(), which will cause random memory
corruption.
When SLAB and DEBUG_SLAB are both enabled, memory corruptions like
the following can be observed without this patch:
[ 43.115200] Slab corruption (Not tainted): Acpi-ParseExt start=ffff801fb0b69030, len=80
[ 43.115206] Redzone: 0x9f911029d006462/0x5f78745f31657070.
[ 43.115208] Last user: [<5f7272655f746b70>](0x5f7272655f746b70)
[ 43.115214] 010: 70 70 65 31 5f 74 78 5f 70 6b 74 00 6b 6b 6b 6b ppe1_tx_pkt.kkkk
[ 43.115217] 030: 70 70 65 31 5f 74 78 5f 70 6b 74 5f 6f 6b 00 6b ppe1_tx_pkt_ok.k
[ 43.115218] Next obj: start=ffff801fb0b69098, len=80
[ 43.115220] Redzone: 0x706d655f6f666966/0x9f911029d74e35b.
[ 43.115229] Last user: [<ffff0000084b11b0>](acpi_os_release_object+0x28/0x38)
[ 43.115231] 000: 74 79 00 6b 6b 6b 6b 6b 70 70 65 31 5f 74 78 5f ty.kkkkkppe1_tx_
[ 43.115232] 010: 70 6b 74 5f 65 72 72 5f 63 73 75 6d 5f 66 61 69 pkt_err_csum_fai
Signed-off-by: Timmy Li <lixiaoping3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | true | 5db2f03c13fc70c46b700867894a85b8 | hns_xgmac_get_sset_count | static int hns_xgmac_get_sset_count(int stringset)
{
if (stringset == ETH_SS_STATS)
return ARRAY_SIZE(g_xgmac_stats_string);
return 0;
}
| [[784, "\tif (stringset == ETH_SS_STATS)\n"]] | [[784, "if (stringset == ETH_SS_STATS)"]] | [
"CVE-2017-18222"
] | [
"CWE-119"
] | 84 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [
"ETH_SS_STATS",
"ETH_SS_PRIV_FLAGS"
],
"Type Execution Declaration": [
"g_xgmac_stats_string"
]
} |
84 | linux | https://github.com/torvalds/linux | drivers/net/ethernet/hisilicon/hns/hns_dsaf_xgmac.c | 412b65d15a7f8a93794653968308fc100f2aa87c | net: hns: fix ethtool_get_strings overflow in hns driver
hns_get_sset_count() returns HNS_NET_STATS_CNT and the data space allocated
is not enough for ethtool_get_strings(), which will cause random memory
corruption.
When SLAB and DEBUG_SLAB are both enabled, memory corruptions like
the following can be observed without this patch:
[ 43.115200] Slab corruption (Not tainted): Acpi-ParseExt start=ffff801fb0b69030, len=80
[ 43.115206] Redzone: 0x9f911029d006462/0x5f78745f31657070.
[ 43.115208] Last user: [<5f7272655f746b70>](0x5f7272655f746b70)
[ 43.115214] 010: 70 70 65 31 5f 74 78 5f 70 6b 74 00 6b 6b 6b 6b ppe1_tx_pkt.kkkk
[ 43.115217] 030: 70 70 65 31 5f 74 78 5f 70 6b 74 5f 6f 6b 00 6b ppe1_tx_pkt_ok.k
[ 43.115218] Next obj: start=ffff801fb0b69098, len=80
[ 43.115220] Redzone: 0x706d655f6f666966/0x9f911029d74e35b.
[ 43.115229] Last user: [<ffff0000084b11b0>](acpi_os_release_object+0x28/0x38)
[ 43.115231] 000: 74 79 00 6b 6b 6b 6b 6b 70 70 65 31 5f 74 78 5f ty.kkkkkppe1_tx_
[ 43.115232] 010: 70 6b 74 5f 65 72 72 5f 63 73 75 6d 5f 66 61 69 pkt_err_csum_fai
Signed-off-by: Timmy Li <lixiaoping3@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | false | 504fc68b1907b2c93c7bf4b86676bf57 | hns_xgmac_get_sset_count | static int hns_xgmac_get_sset_count(int stringset)
{
if (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)
return ARRAY_SIZE(g_xgmac_stats_string);
return 0;
}
| [[784, "\tif (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)\n"]] | [[784, "if (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)"]] | [
"CVE-2017-18222"
] | [
"CWE-119"
] | 84 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [
"ETH_SS_STATS",
"ETH_SS_PRIV_FLAGS"
],
"Type Execution Declaration": [
"g_xgmac_stats_string"
]
} |
85 | linux | https://github.com/torvalds/linux | kernel/bpf/verifier.c | 4374f256ce8182019353c0c639bb8d0695b4c941 | bpf/verifier: fix bounds calculation on BPF_RSH
Incorrect signed bounds were being computed.
If the old upper signed bound was positive and the old lower signed bound was
negative, this could cause the new upper signed bound to be too low,
leading to security issues.
Fixes: b03c9f9fdc37 ("bpf/verifier: track signed and unsigned min/max values")
Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Edward Cree <ecree@solarflare.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
[jannh@google.com: changed description to reflect bug impact]
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> | true | 968bc8c291b749db2a3e05e97a26a40f | adjust_scalar_min_max_vals | static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
struct bpf_insn *insn,
struct bpf_reg_state *dst_reg,
struct bpf_reg_state src_reg)
{
struct bpf_reg_state *regs = cur_regs(env);
u8 opcode = BPF_OP(insn->code);
bool src_known, dst_known;
s64 smin_val, smax_val;
u64 umin_val, umax_val;
if (BPF_CLASS(insn->code) != BPF_ALU64) {
/* 32-bit ALU ops are (32,32)->64 */
coerce_reg_to_32(dst_reg);
coerce_reg_to_32(&src_reg);
}
smin_val = src_reg.smin_value;
smax_val = src_reg.smax_value;
umin_val = src_reg.umin_value;
umax_val = src_reg.umax_value;
src_known = tnum_is_const(src_reg.var_off);
dst_known = tnum_is_const(dst_reg->var_off);
switch (opcode) {
case BPF_ADD:
if (signed_add_overflows(dst_reg->smin_value, smin_val) ||
signed_add_overflows(dst_reg->smax_value, smax_val)) {
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
dst_reg->smin_value += smin_val;
dst_reg->smax_value += smax_val;
}
if (dst_reg->umin_value + umin_val < umin_val ||
dst_reg->umax_value + umax_val < umax_val) {
dst_reg->umin_value = 0;
dst_reg->umax_value = U64_MAX;
} else {
dst_reg->umin_value += umin_val;
dst_reg->umax_value += umax_val;
}
dst_reg->var_off = tnum_add(dst_reg->var_off, src_reg.var_off);
break;
case BPF_SUB:
if (signed_sub_overflows(dst_reg->smin_value, smax_val) ||
signed_sub_overflows(dst_reg->smax_value, smin_val)) {
/* Overflow possible, we know nothing */
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
dst_reg->smin_value -= smax_val;
dst_reg->smax_value -= smin_val;
}
if (dst_reg->umin_value < umax_val) {
/* Overflow possible, we know nothing */
dst_reg->umin_value = 0;
dst_reg->umax_value = U64_MAX;
} else {
/* Cannot overflow (as long as bounds are consistent) */
dst_reg->umin_value -= umax_val;
dst_reg->umax_value -= umin_val;
}
dst_reg->var_off = tnum_sub(dst_reg->var_off, src_reg.var_off);
break;
case BPF_MUL:
dst_reg->var_off = tnum_mul(dst_reg->var_off, src_reg.var_off);
if (smin_val < 0 || dst_reg->smin_value < 0) {
/* Ain't nobody got time to multiply that sign */
__mark_reg_unbounded(dst_reg);
__update_reg_bounds(dst_reg);
break;
}
/* Both values are positive, so we can work with unsigned and
* copy the result to signed (unless it exceeds S64_MAX).
*/
if (umax_val > U32_MAX || dst_reg->umax_value > U32_MAX) {
/* Potential overflow, we know nothing */
__mark_reg_unbounded(dst_reg);
/* (except what we can learn from the var_off) */
__update_reg_bounds(dst_reg);
break;
}
dst_reg->umin_value *= umin_val;
dst_reg->umax_value *= umax_val;
if (dst_reg->umax_value > S64_MAX) {
/* Overflow possible, we know nothing */
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
dst_reg->smin_value = dst_reg->umin_value;
dst_reg->smax_value = dst_reg->umax_value;
}
break;
case BPF_AND:
if (src_known && dst_known) {
__mark_reg_known(dst_reg, dst_reg->var_off.value &
src_reg.var_off.value);
break;
}
/* We get our minimum from the var_off, since that's inherently
* bitwise. Our maximum is the minimum of the operands' maxima.
*/
dst_reg->var_off = tnum_and(dst_reg->var_off, src_reg.var_off);
dst_reg->umin_value = dst_reg->var_off.value;
dst_reg->umax_value = min(dst_reg->umax_value, umax_val);
if (dst_reg->smin_value < 0 || smin_val < 0) {
/* Lose signed bounds when ANDing negative numbers,
* ain't nobody got time for that.
*/
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
/* ANDing two positives gives a positive, so safe to
* cast result into s64.
*/
dst_reg->smin_value = dst_reg->umin_value;
dst_reg->smax_value = dst_reg->umax_value;
}
/* We may learn something more from the var_off */
__update_reg_bounds(dst_reg);
break;
case BPF_OR:
if (src_known && dst_known) {
__mark_reg_known(dst_reg, dst_reg->var_off.value |
src_reg.var_off.value);
break;
}
/* We get our maximum from the var_off, and our minimum is the
* maximum of the operands' minima
*/
dst_reg->var_off = tnum_or(dst_reg->var_off, src_reg.var_off);
dst_reg->umin_value = max(dst_reg->umin_value, umin_val);
dst_reg->umax_value = dst_reg->var_off.value |
dst_reg->var_off.mask;
if (dst_reg->smin_value < 0 || smin_val < 0) {
/* Lose signed bounds when ORing negative numbers,
* ain't nobody got time for that.
*/
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
/* ORing two positives gives a positive, so safe to
* cast result into s64.
*/
dst_reg->smin_value = dst_reg->umin_value;
dst_reg->smax_value = dst_reg->umax_value;
}
/* We may learn something more from the var_off */
__update_reg_bounds(dst_reg);
break;
case BPF_LSH:
if (umax_val > 63) {
/* Shifts greater than 63 are undefined. This includes
* shifts by a negative number.
*/
mark_reg_unknown(env, regs, insn->dst_reg);
break;
}
/* We lose all sign bit information (except what we can pick
* up from var_off)
*/
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
/* If we might shift our top bit out, then we know nothing */
if (dst_reg->umax_value > 1ULL << (63 - umax_val)) {
dst_reg->umin_value = 0;
dst_reg->umax_value = U64_MAX;
} else {
dst_reg->umin_value <<= umin_val;
dst_reg->umax_value <<= umax_val;
}
if (src_known)
dst_reg->var_off = tnum_lshift(dst_reg->var_off, umin_val);
else
dst_reg->var_off = tnum_lshift(tnum_unknown, umin_val);
/* We may learn something more from the var_off */
__update_reg_bounds(dst_reg);
break;
case BPF_RSH:
if (umax_val > 63) {
/* Shifts greater than 63 are undefined. This includes
* shifts by a negative number.
*/
mark_reg_unknown(env, regs, insn->dst_reg);
break;
}
/* BPF_RSH is an unsigned shift, so make the appropriate casts */
if (dst_reg->smin_value < 0) {
if (umin_val) {
/* Sign bit will be cleared */
dst_reg->smin_value = 0;
} else {
/* Lost sign bit information */
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
}
} else {
dst_reg->smin_value =
(u64)(dst_reg->smin_value) >> umax_val;
}
if (src_known)
dst_reg->var_off = tnum_rshift(dst_reg->var_off,
umin_val);
else
dst_reg->var_off = tnum_rshift(tnum_unknown, umin_val);
dst_reg->umin_value >>= umax_val;
dst_reg->umax_value >>= umin_val;
/* We may learn something more from the var_off */
__update_reg_bounds(dst_reg);
break;
default:
mark_reg_unknown(env, regs, insn->dst_reg);
break;
}
__reg_deduce_bounds(dst_reg);
__reg_bound_offset(dst_reg);
return 0;
}
| [[2193, "\t\t/* BPF_RSH is an unsigned shift, so make the appropriate casts */\n"], [2194, "\t\tif (dst_reg->smin_value < 0) {\n"], [2195, "\t\t\tif (umin_val) {\n"], [2196, "\t\t\t\t/* Sign bit will be cleared */\n"], [2197, "\t\t\t\tdst_reg->smin_value = 0;\n"], [2198, "\t\t\t} else {\n"], [2199, "\t\t\t\t/* Lost sign bit information */\n"], [2200, "\t\t\t\tdst_reg->smin_value = S64_MIN;\n"], [2201, "\t\t\t\tdst_reg->smax_value = S64_MAX;\n"], [2202, "\t\t\t}\n"], [2203, "\t\t} else {\n"], [2204, "\t\t\tdst_reg->smin_value =\n"], [2205, "\t\t\t\t(u64)(dst_reg->smin_value) >> umax_val;\n"], [2206, "\t\t}\n"]] | [[2193, "/* BPF_RSH is an unsigned shift, so make the appropriate casts */"], [2194, "if (dst_reg->smin_value < 0)"], [2195, "if (umin_val)"], [2196, "/* Sign bit will be cleared */"], [2197, "dst_reg->smin_value = 0;"], [2198, "else"], [2199, "/* Lost sign bit information */"], [2200, "dst_reg->smin_value = S64_MIN;"], [2201, "dst_reg->smax_value = S64_MAX;"], [2202, "\t\t\t}\n"], [2203, "else"], [2204, "dst_reg->smin_value =\n\t\t\t\t(u64)(dst_reg->smin_value) >> umax_val;"], [2206, "\t\t}\n"]] | [
"CVE-2017-17853"
] | [
"CWE-119"
] | 86 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"cur_regs"
],
"Function Argument": [],
"Globals": [
"S64_MIN",
"S64_MAX",
"U64_MAX"
],
"Type Execution Declaration": [
"struct bpf_reg_state",
"struct bpf_verifier_env",
"struct bpf_insn"
]
} |
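
The unsoundness in the BPF_RSH branch of the vulnerable function is easiest to see with concrete numbers. Take signed bounds that cross zero, say [-1, 5], and a known shift of 1: the old code clears smin_value to 0 but keeps smax_value at 5, yet the value -1, which the old bounds permit, right-shifts unsigned to 0x7fffffffffffffff. A standalone check of that arithmetic, a sketch rather than verifier code:

/* Concrete instance of the pre-fix BPF_RSH unsoundness: -1 lies inside
 * the old signed bounds [-1, 5], but shifted right as unsigned it
 * becomes S64_MAX, while the old code kept smax_value = 5. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	int64_t val = -1;			/* allowed by old bounds [-1, 5] */
	uint64_t shifted = (uint64_t)val >> 1;	/* what BPF_RSH computes */

	printf("(u64)-1 >> 1 = %#llx\n", (unsigned long long)shifted);
	/* Prints 0x7fffffffffffffff: any bounds check derived from the
	 * stale smax_value of 5 is bypassed by this value. */
	return 0;
}
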
86 | linux | https://github.com/torvalds/linux | kernel/bpf/verifier.c | 4374f256ce8182019353c0c639bb8d0695b4c941 | bpf/verifier: fix bounds calculation on BPF_RSH
Incorrect signed bounds were being computed.
If the old upper signed bound was positive and the old lower signed bound was
negative, this could cause the new upper signed bound to be too low,
leading to security issues.
Fixes: b03c9f9fdc37 ("bpf/verifier: track signed and unsigned min/max values")
Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Edward Cree <ecree@solarflare.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
[jannh@google.com: changed description to reflect bug impact]
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> | false | e6c61340dd642843d207cbd96de86c35 | adjust_scalar_min_max_vals | static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
struct bpf_insn *insn,
struct bpf_reg_state *dst_reg,
struct bpf_reg_state src_reg)
{
struct bpf_reg_state *regs = cur_regs(env);
u8 opcode = BPF_OP(insn->code);
bool src_known, dst_known;
s64 smin_val, smax_val;
u64 umin_val, umax_val;
if (BPF_CLASS(insn->code) != BPF_ALU64) {
/* 32-bit ALU ops are (32,32)->64 */
coerce_reg_to_32(dst_reg);
coerce_reg_to_32(&src_reg);
}
smin_val = src_reg.smin_value;
smax_val = src_reg.smax_value;
umin_val = src_reg.umin_value;
umax_val = src_reg.umax_value;
src_known = tnum_is_const(src_reg.var_off);
dst_known = tnum_is_const(dst_reg->var_off);
switch (opcode) {
case BPF_ADD:
if (signed_add_overflows(dst_reg->smin_value, smin_val) ||
signed_add_overflows(dst_reg->smax_value, smax_val)) {
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
dst_reg->smin_value += smin_val;
dst_reg->smax_value += smax_val;
}
if (dst_reg->umin_value + umin_val < umin_val ||
dst_reg->umax_value + umax_val < umax_val) {
dst_reg->umin_value = 0;
dst_reg->umax_value = U64_MAX;
} else {
dst_reg->umin_value += umin_val;
dst_reg->umax_value += umax_val;
}
dst_reg->var_off = tnum_add(dst_reg->var_off, src_reg.var_off);
break;
case BPF_SUB:
if (signed_sub_overflows(dst_reg->smin_value, smax_val) ||
signed_sub_overflows(dst_reg->smax_value, smin_val)) {
/* Overflow possible, we know nothing */
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
dst_reg->smin_value -= smax_val;
dst_reg->smax_value -= smin_val;
}
if (dst_reg->umin_value < umax_val) {
/* Overflow possible, we know nothing */
dst_reg->umin_value = 0;
dst_reg->umax_value = U64_MAX;
} else {
/* Cannot overflow (as long as bounds are consistent) */
dst_reg->umin_value -= umax_val;
dst_reg->umax_value -= umin_val;
}
dst_reg->var_off = tnum_sub(dst_reg->var_off, src_reg.var_off);
break;
case BPF_MUL:
dst_reg->var_off = tnum_mul(dst_reg->var_off, src_reg.var_off);
if (smin_val < 0 || dst_reg->smin_value < 0) {
/* Ain't nobody got time to multiply that sign */
__mark_reg_unbounded(dst_reg);
__update_reg_bounds(dst_reg);
break;
}
/* Both values are positive, so we can work with unsigned and
* copy the result to signed (unless it exceeds S64_MAX).
*/
if (umax_val > U32_MAX || dst_reg->umax_value > U32_MAX) {
/* Potential overflow, we know nothing */
__mark_reg_unbounded(dst_reg);
/* (except what we can learn from the var_off) */
__update_reg_bounds(dst_reg);
break;
}
dst_reg->umin_value *= umin_val;
dst_reg->umax_value *= umax_val;
if (dst_reg->umax_value > S64_MAX) {
/* Overflow possible, we know nothing */
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
dst_reg->smin_value = dst_reg->umin_value;
dst_reg->smax_value = dst_reg->umax_value;
}
break;
case BPF_AND:
if (src_known && dst_known) {
__mark_reg_known(dst_reg, dst_reg->var_off.value &
src_reg.var_off.value);
break;
}
/* We get our minimum from the var_off, since that's inherently
* bitwise. Our maximum is the minimum of the operands' maxima.
*/
dst_reg->var_off = tnum_and(dst_reg->var_off, src_reg.var_off);
dst_reg->umin_value = dst_reg->var_off.value;
dst_reg->umax_value = min(dst_reg->umax_value, umax_val);
if (dst_reg->smin_value < 0 || smin_val < 0) {
/* Lose signed bounds when ANDing negative numbers,
* ain't nobody got time for that.
*/
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
/* ANDing two positives gives a positive, so safe to
* cast result into s64.
*/
dst_reg->smin_value = dst_reg->umin_value;
dst_reg->smax_value = dst_reg->umax_value;
}
/* We may learn something more from the var_off */
__update_reg_bounds(dst_reg);
break;
case BPF_OR:
if (src_known && dst_known) {
__mark_reg_known(dst_reg, dst_reg->var_off.value |
src_reg.var_off.value);
break;
}
/* We get our maximum from the var_off, and our minimum is the
* maximum of the operands' minima
*/
dst_reg->var_off = tnum_or(dst_reg->var_off, src_reg.var_off);
dst_reg->umin_value = max(dst_reg->umin_value, umin_val);
dst_reg->umax_value = dst_reg->var_off.value |
dst_reg->var_off.mask;
if (dst_reg->smin_value < 0 || smin_val < 0) {
/* Lose signed bounds when ORing negative numbers,
* ain't nobody got time for that.
*/
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
/* ORing two positives gives a positive, so safe to
* cast result into s64.
*/
dst_reg->smin_value = dst_reg->umin_value;
dst_reg->smax_value = dst_reg->umax_value;
}
/* We may learn something more from the var_off */
__update_reg_bounds(dst_reg);
break;
case BPF_LSH:
if (umax_val > 63) {
/* Shifts greater than 63 are undefined. This includes
* shifts by a negative number.
*/
mark_reg_unknown(env, regs, insn->dst_reg);
break;
}
/* We lose all sign bit information (except what we can pick
* up from var_off)
*/
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
/* If we might shift our top bit out, then we know nothing */
if (dst_reg->umax_value > 1ULL << (63 - umax_val)) {
dst_reg->umin_value = 0;
dst_reg->umax_value = U64_MAX;
} else {
dst_reg->umin_value <<= umin_val;
dst_reg->umax_value <<= umax_val;
}
if (src_known)
dst_reg->var_off = tnum_lshift(dst_reg->var_off, umin_val);
else
dst_reg->var_off = tnum_lshift(tnum_unknown, umin_val);
/* We may learn something more from the var_off */
__update_reg_bounds(dst_reg);
break;
case BPF_RSH:
if (umax_val > 63) {
/* Shifts greater than 63 are undefined. This includes
* shifts by a negative number.
*/
mark_reg_unknown(env, regs, insn->dst_reg);
break;
}
/* BPF_RSH is an unsigned shift. If the value in dst_reg might
* be negative, then either:
* 1) src_reg might be zero, so the sign bit of the result is
* unknown, so we lose our signed bounds
* 2) it's known negative, thus the unsigned bounds capture the
* signed bounds
* 3) the signed bounds cross zero, so they tell us nothing
* about the result
* If the value in dst_reg is known nonnegative, then again the
* unsigned bounds capture the signed bounds.
* Thus, in all cases it suffices to blow away our signed bounds
* and rely on inferring new ones from the unsigned bounds and
* var_off of the result.
*/
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
if (src_known)
dst_reg->var_off = tnum_rshift(dst_reg->var_off,
umin_val);
else
dst_reg->var_off = tnum_rshift(tnum_unknown, umin_val);
dst_reg->umin_value >>= umax_val;
dst_reg->umax_value >>= umin_val;
/* We may learn something more from the var_off */
__update_reg_bounds(dst_reg);
break;
default:
mark_reg_unknown(env, regs, insn->dst_reg);
break;
}
__reg_deduce_bounds(dst_reg);
__reg_bound_offset(dst_reg);
return 0;
}
| [[2193, "\t\t/* BPF_RSH is an unsigned shift. If the value in dst_reg might\n"], [2194, "\t\t * be negative, then either:\n"], [2195, "\t\t * 1) src_reg might be zero, so the sign bit of the result is\n"], [2196, "\t\t * unknown, so we lose our signed bounds\n"], [2197, "\t\t * 2) it's known negative, thus the unsigned bounds capture the\n"], [2198, "\t\t * signed bounds\n"], [2199, "\t\t * 3) the signed bounds cross zero, so they tell us nothing\n"], [2200, "\t\t * about the result\n"], [2201, "\t\t * If the value in dst_reg is known nonnegative, then again the\n"], [2202, "\t\t * unsigned bounds capture the signed bounds.\n"], [2203, "\t\t * Thus, in all cases it suffices to blow away our signed bounds\n"], [2204, "\t\t * and rely on inferring new ones from the unsigned bounds and\n"], [2205, "\t\t * var_off of the result.\n"], [2206, "\t\t */\n"], [2207, "\t\tdst_reg->smin_value = S64_MIN;\n"], [2208, "\t\tdst_reg->smax_value = S64_MAX;\n"]] | [[2193, "/* BPF_RSH is an unsigned shift. If the value in dst_reg might\n\t\t * be negative, then either:\n\t\t * 1) src_reg might be zero, so the sign bit of the result is\n\t\t * unknown, so we lose our signed bounds\n\t\t * 2) it's known negative, thus the unsigned bounds capture the\n\t\t * signed bounds\n\t\t * 3) the signed bounds cross zero, so they tell us nothing\n\t\t * about the result\n\t\t * If the value in dst_reg is known nonnegative, then again the\n\t\t * unsigned bounds capture the signed bounds.\n\t\t * Thus, in all cases it suffices to blow away our signed bounds\n\t\t * and rely on inferring new ones from the unsigned bounds and\n\t\t * var_off of the result.\n\t\t */"], [2207, "dst_reg->smin_value = S64_MIN;"], [2208, "dst_reg->smax_value = S64_MAX;"]] | [
"CVE-2017-17853"
] | [
"CWE-119"
] | 86 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"cur_regs"
],
"Function Argument": [],
"Globals": [
"S64_MIN",
"S64_MAX",
"U64_MAX"
],
"Type Execution Declaration": [
"struct bpf_reg_state",
"struct bpf_verifier_env",
"struct bpf_insn"
]
} |
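
The fixed branch above discards the signed bounds and relies on rederiving them later (via __update_reg_bounds()) from the unsigned bounds, which themselves stay sound because an unsigned right shift is monotone in both operands: the smallest value with the largest shift gives the floor, the largest value with the smallest shift gives the ceiling. A brute-force sanity check of that rule over small example ranges, assuming the tracked ranges are exact:

/* Brute-force check of the unsigned-bound rule kept by the fixed code:
 * for every value in [umin, umax] and shift in [sh_min, sh_max],
 * v >> s stays inside [umin >> sh_max, umax >> sh_min].  Small ranges
 * only; a sanity check of the argument, not a proof. */
#include <assert.h>
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	const uint64_t umin = 3, umax = 200;	/* example value range */
	const unsigned sh_min = 1, sh_max = 5;	/* example shift range */
	const uint64_t lo = umin >> sh_max, hi = umax >> sh_min;

	for (uint64_t v = umin; v <= umax; v++)
		for (unsigned s = sh_min; s <= sh_max; s++)
			assert(lo <= (v >> s) && (v >> s) <= hi);
	printf("all results in [%llu, %llu]\n",
	       (unsigned long long)lo, (unsigned long long)hi);
	return 0;
}
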
87 | linux | https://github.com/torvalds/linux | kernel/bpf/verifier.c | 468f6eafa6c44cb2c5d8aad35e12f06c240a812a | bpf: fix 32-bit ALU op verification
32-bit ALU ops operate on 32-bit values and have 32-bit outputs.
Adjust the verifier accordingly.
Fixes: f1174f77b50c ("bpf/verifier: rework value tracking")
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> | true | ec2babadd00a8fdcf6292cc95449f524 | adjust_scalar_min_max_vals | static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
struct bpf_insn *insn,
struct bpf_reg_state *dst_reg,
struct bpf_reg_state src_reg)
{
struct bpf_reg_state *regs = cur_regs(env);
u8 opcode = BPF_OP(insn->code);
bool src_known, dst_known;
s64 smin_val, smax_val;
u64 umin_val, umax_val;
if (BPF_CLASS(insn->code) != BPF_ALU64) {
/* 32-bit ALU ops are (32,32)->64 */
coerce_reg_to_size(dst_reg, 4);
coerce_reg_to_size(&src_reg, 4);
}
smin_val = src_reg.smin_value;
smax_val = src_reg.smax_value;
umin_val = src_reg.umin_value;
umax_val = src_reg.umax_value;
src_known = tnum_is_const(src_reg.var_off);
dst_known = tnum_is_const(dst_reg->var_off);
switch (opcode) {
case BPF_ADD:
if (signed_add_overflows(dst_reg->smin_value, smin_val) ||
signed_add_overflows(dst_reg->smax_value, smax_val)) {
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
dst_reg->smin_value += smin_val;
dst_reg->smax_value += smax_val;
}
if (dst_reg->umin_value + umin_val < umin_val ||
dst_reg->umax_value + umax_val < umax_val) {
dst_reg->umin_value = 0;
dst_reg->umax_value = U64_MAX;
} else {
dst_reg->umin_value += umin_val;
dst_reg->umax_value += umax_val;
}
dst_reg->var_off = tnum_add(dst_reg->var_off, src_reg.var_off);
break;
case BPF_SUB:
if (signed_sub_overflows(dst_reg->smin_value, smax_val) ||
signed_sub_overflows(dst_reg->smax_value, smin_val)) {
/* Overflow possible, we know nothing */
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
dst_reg->smin_value -= smax_val;
dst_reg->smax_value -= smin_val;
}
if (dst_reg->umin_value < umax_val) {
/* Overflow possible, we know nothing */
dst_reg->umin_value = 0;
dst_reg->umax_value = U64_MAX;
} else {
/* Cannot overflow (as long as bounds are consistent) */
dst_reg->umin_value -= umax_val;
dst_reg->umax_value -= umin_val;
}
dst_reg->var_off = tnum_sub(dst_reg->var_off, src_reg.var_off);
break;
case BPF_MUL:
dst_reg->var_off = tnum_mul(dst_reg->var_off, src_reg.var_off);
if (smin_val < 0 || dst_reg->smin_value < 0) {
/* Ain't nobody got time to multiply that sign */
__mark_reg_unbounded(dst_reg);
__update_reg_bounds(dst_reg);
break;
}
/* Both values are positive, so we can work with unsigned and
* copy the result to signed (unless it exceeds S64_MAX).
*/
if (umax_val > U32_MAX || dst_reg->umax_value > U32_MAX) {
/* Potential overflow, we know nothing */
__mark_reg_unbounded(dst_reg);
/* (except what we can learn from the var_off) */
__update_reg_bounds(dst_reg);
break;
}
dst_reg->umin_value *= umin_val;
dst_reg->umax_value *= umax_val;
if (dst_reg->umax_value > S64_MAX) {
/* Overflow possible, we know nothing */
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
dst_reg->smin_value = dst_reg->umin_value;
dst_reg->smax_value = dst_reg->umax_value;
}
break;
case BPF_AND:
if (src_known && dst_known) {
__mark_reg_known(dst_reg, dst_reg->var_off.value &
src_reg.var_off.value);
break;
}
/* We get our minimum from the var_off, since that's inherently
* bitwise. Our maximum is the minimum of the operands' maxima.
*/
dst_reg->var_off = tnum_and(dst_reg->var_off, src_reg.var_off);
dst_reg->umin_value = dst_reg->var_off.value;
dst_reg->umax_value = min(dst_reg->umax_value, umax_val);
if (dst_reg->smin_value < 0 || smin_val < 0) {
/* Lose signed bounds when ANDing negative numbers,
* ain't nobody got time for that.
*/
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
/* ANDing two positives gives a positive, so safe to
* cast result into s64.
*/
dst_reg->smin_value = dst_reg->umin_value;
dst_reg->smax_value = dst_reg->umax_value;
}
/* We may learn something more from the var_off */
__update_reg_bounds(dst_reg);
break;
case BPF_OR:
if (src_known && dst_known) {
__mark_reg_known(dst_reg, dst_reg->var_off.value |
src_reg.var_off.value);
break;
}
/* We get our maximum from the var_off, and our minimum is the
* maximum of the operands' minima
*/
dst_reg->var_off = tnum_or(dst_reg->var_off, src_reg.var_off);
dst_reg->umin_value = max(dst_reg->umin_value, umin_val);
dst_reg->umax_value = dst_reg->var_off.value |
dst_reg->var_off.mask;
if (dst_reg->smin_value < 0 || smin_val < 0) {
/* Lose signed bounds when ORing negative numbers,
* ain't nobody got time for that.
*/
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
/* ORing two positives gives a positive, so safe to
* cast result into s64.
*/
dst_reg->smin_value = dst_reg->umin_value;
dst_reg->smax_value = dst_reg->umax_value;
}
/* We may learn something more from the var_off */
__update_reg_bounds(dst_reg);
break;
case BPF_LSH:
if (umax_val > 63) {
/* Shifts greater than 63 are undefined. This includes
* shifts by a negative number.
*/
mark_reg_unknown(env, regs, insn->dst_reg);
break;
}
/* We lose all sign bit information (except what we can pick
* up from var_off)
*/
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
/* If we might shift our top bit out, then we know nothing */
if (dst_reg->umax_value > 1ULL << (63 - umax_val)) {
dst_reg->umin_value = 0;
dst_reg->umax_value = U64_MAX;
} else {
dst_reg->umin_value <<= umin_val;
dst_reg->umax_value <<= umax_val;
}
if (src_known)
dst_reg->var_off = tnum_lshift(dst_reg->var_off, umin_val);
else
dst_reg->var_off = tnum_lshift(tnum_unknown, umin_val);
/* We may learn something more from the var_off */
__update_reg_bounds(dst_reg);
break;
case BPF_RSH:
if (umax_val > 63) {
/* Shifts greater than 63 are undefined. This includes
* shifts by a negative number.
*/
mark_reg_unknown(env, regs, insn->dst_reg);
break;
}
/* BPF_RSH is an unsigned shift. If the value in dst_reg might
* be negative, then either:
* 1) src_reg might be zero, so the sign bit of the result is
* unknown, so we lose our signed bounds
* 2) it's known negative, thus the unsigned bounds capture the
* signed bounds
* 3) the signed bounds cross zero, so they tell us nothing
* about the result
* If the value in dst_reg is known nonnegative, then again the
* unsigned bounds capture the signed bounds.
* Thus, in all cases it suffices to blow away our signed bounds
* and rely on inferring new ones from the unsigned bounds and
* var_off of the result.
*/
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
if (src_known)
dst_reg->var_off = tnum_rshift(dst_reg->var_off,
umin_val);
else
dst_reg->var_off = tnum_rshift(tnum_unknown, umin_val);
dst_reg->umin_value >>= umax_val;
dst_reg->umax_value >>= umin_val;
/* We may learn something more from the var_off */
__update_reg_bounds(dst_reg);
break;
default:
mark_reg_unknown(env, regs, insn->dst_reg);
break;
}
__reg_deduce_bounds(dst_reg);
__reg_bound_offset(dst_reg);
return 0;
}
| [[2031, "\tif (BPF_CLASS(insn->code) != BPF_ALU64) {\n"], [2032, "\t\t/* 32-bit ALU ops are (32,32)->64 */\n"], [2033, "\t\tcoerce_reg_to_size(dst_reg, 4);\n"], [2034, "\t\tcoerce_reg_to_size(&src_reg, 4);\n"], [2035, "\t}\n"], [2171, "\t\tif (umax_val > 63) {\n"], [2172, "\t\t\t/* Shifts greater than 63 are undefined. This includes\n"], [2173, "\t\t\t * shifts by a negative number.\n"], [2199, "\t\tif (umax_val > 63) {\n"], [2200, "\t\t\t/* Shifts greater than 63 are undefined. This includes\n"], [2201, "\t\t\t * shifts by a negative number.\n"]] | [[2031, "if (BPF_CLASS(insn->code) != BPF_ALU64)"], [2032, "/* 32-bit ALU ops are (32,32)->64 */"], [2033, "coerce_reg_to_size(dst_reg, 4);"], [2034, "coerce_reg_to_size(&src_reg, 4);"], [2035, "\t}\n"], [2171, "if (umax_val > 63)"], [2172, "/* Shifts greater than 63 are undefined. This includes\n\t\t\t * shifts by a negative number.\n\t\t\t */"], [2199, "if (umax_val > 63)"], [2200, "/* Shifts greater than 63 are undefined. This includes\n\t\t\t * shifts by a negative number.\n\t\t\t */"]] | [
"CVE-2017-17852"
] | [
"CWE-119"
] | 88 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"cur_regs",
"BPF_CLASS",
"BPF_OP",
"coerce_reg_to_size",
"signed_add_overflows",
"signed_sub_overflows",
"tnum_is_const",
"tnum_add",
"tnum_sub",
"tnum_mul",
"tnum_and",
"tnum_or",
"tnum_lshift",
"tnum_rshift",
"mark_reg_unknown",
"__mark_reg_unbounded",
"__mark_reg_known",
"__update_reg_bounds",
"__reg_deduce_bounds",
"__reg_bound_offset",
"min",
"max"
],
"Function Argument": [
"env",
"insn",
"dst_reg",
"src_reg"
],
"Globals": [
"BPF_ALU64",
"BPF_ADD",
"BPF_SUB",
"BPF_MUL",
"BPF_AND",
"BPF_OR",
"BPF_LSH",
"BPF_RSH",
"S64_MIN",
"S64_MAX",
"U64_MAX"
],
"Type Execution Declaration": [
"struct bpf_verifier_env",
"struct bpf_insn",
"struct bpf_reg_state"
]
} |
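
The "(32,32)->64" comment in the vulnerable function above is the heart of the bug: a 32-bit BPF ALU op truncates its result to 32 bits, so coercing only the inputs and then tracking the full 64-bit result credits the register with bits the interpreter discards. A plain C illustration:

/* Why "(32,32)->64" is the wrong model for 32-bit BPF ALU ops: the
 * interpreter truncates the result to 32 bits, so tracking the 64-bit
 * sum of the coerced inputs overstates what the register can hold. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t a = 0xffffffffu, b = 1;		/* both fit in 32 bits */
	uint64_t tracked = (uint64_t)a + (uint64_t)b;	/* old model: 2^32 */
	uint32_t actual = a + b;			/* BPF_ALU result: 0 */

	printf("tracked = %#llx, actual = %#x\n",
	       (unsigned long long)tracked, actual);
	/* The verifier believed the register could be 0x100000000 while
	 * the runtime value is 0; bounds built on that are unsound. */
	return 0;
}
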
88 | linux | https://github.com/torvalds/linux | kernel/bpf/verifier.c | 468f6eafa6c44cb2c5d8aad35e12f06c240a812a | bpf: fix 32-bit ALU op verification
32-bit ALU ops operate on 32-bit values and have 32-bit outputs.
Adjust the verifier accordingly.
Fixes: f1174f77b50c ("bpf/verifier: rework value tracking")
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> | false | e9b0ca2149ac8dcfaabfdeb364bddd26 | adjust_scalar_min_max_vals | static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
struct bpf_insn *insn,
struct bpf_reg_state *dst_reg,
struct bpf_reg_state src_reg)
{
struct bpf_reg_state *regs = cur_regs(env);
u8 opcode = BPF_OP(insn->code);
bool src_known, dst_known;
s64 smin_val, smax_val;
u64 umin_val, umax_val;
u64 insn_bitness = (BPF_CLASS(insn->code) == BPF_ALU64) ? 64 : 32;
smin_val = src_reg.smin_value;
smax_val = src_reg.smax_value;
umin_val = src_reg.umin_value;
umax_val = src_reg.umax_value;
src_known = tnum_is_const(src_reg.var_off);
dst_known = tnum_is_const(dst_reg->var_off);
switch (opcode) {
case BPF_ADD:
if (signed_add_overflows(dst_reg->smin_value, smin_val) ||
signed_add_overflows(dst_reg->smax_value, smax_val)) {
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
dst_reg->smin_value += smin_val;
dst_reg->smax_value += smax_val;
}
if (dst_reg->umin_value + umin_val < umin_val ||
dst_reg->umax_value + umax_val < umax_val) {
dst_reg->umin_value = 0;
dst_reg->umax_value = U64_MAX;
} else {
dst_reg->umin_value += umin_val;
dst_reg->umax_value += umax_val;
}
dst_reg->var_off = tnum_add(dst_reg->var_off, src_reg.var_off);
break;
case BPF_SUB:
if (signed_sub_overflows(dst_reg->smin_value, smax_val) ||
signed_sub_overflows(dst_reg->smax_value, smin_val)) {
/* Overflow possible, we know nothing */
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
dst_reg->smin_value -= smax_val;
dst_reg->smax_value -= smin_val;
}
if (dst_reg->umin_value < umax_val) {
/* Overflow possible, we know nothing */
dst_reg->umin_value = 0;
dst_reg->umax_value = U64_MAX;
} else {
/* Cannot overflow (as long as bounds are consistent) */
dst_reg->umin_value -= umax_val;
dst_reg->umax_value -= umin_val;
}
dst_reg->var_off = tnum_sub(dst_reg->var_off, src_reg.var_off);
break;
case BPF_MUL:
dst_reg->var_off = tnum_mul(dst_reg->var_off, src_reg.var_off);
if (smin_val < 0 || dst_reg->smin_value < 0) {
/* Ain't nobody got time to multiply that sign */
__mark_reg_unbounded(dst_reg);
__update_reg_bounds(dst_reg);
break;
}
/* Both values are positive, so we can work with unsigned and
* copy the result to signed (unless it exceeds S64_MAX).
*/
if (umax_val > U32_MAX || dst_reg->umax_value > U32_MAX) {
/* Potential overflow, we know nothing */
__mark_reg_unbounded(dst_reg);
/* (except what we can learn from the var_off) */
__update_reg_bounds(dst_reg);
break;
}
dst_reg->umin_value *= umin_val;
dst_reg->umax_value *= umax_val;
if (dst_reg->umax_value > S64_MAX) {
/* Overflow possible, we know nothing */
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
dst_reg->smin_value = dst_reg->umin_value;
dst_reg->smax_value = dst_reg->umax_value;
}
break;
case BPF_AND:
if (src_known && dst_known) {
__mark_reg_known(dst_reg, dst_reg->var_off.value &
src_reg.var_off.value);
break;
}
/* We get our minimum from the var_off, since that's inherently
* bitwise. Our maximum is the minimum of the operands' maxima.
*/
dst_reg->var_off = tnum_and(dst_reg->var_off, src_reg.var_off);
dst_reg->umin_value = dst_reg->var_off.value;
dst_reg->umax_value = min(dst_reg->umax_value, umax_val);
if (dst_reg->smin_value < 0 || smin_val < 0) {
/* Lose signed bounds when ANDing negative numbers,
* ain't nobody got time for that.
*/
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
/* ANDing two positives gives a positive, so safe to
* cast result into s64.
*/
dst_reg->smin_value = dst_reg->umin_value;
dst_reg->smax_value = dst_reg->umax_value;
}
/* We may learn something more from the var_off */
__update_reg_bounds(dst_reg);
break;
case BPF_OR:
if (src_known && dst_known) {
__mark_reg_known(dst_reg, dst_reg->var_off.value |
src_reg.var_off.value);
break;
}
/* We get our maximum from the var_off, and our minimum is the
* maximum of the operands' minima
*/
dst_reg->var_off = tnum_or(dst_reg->var_off, src_reg.var_off);
dst_reg->umin_value = max(dst_reg->umin_value, umin_val);
dst_reg->umax_value = dst_reg->var_off.value |
dst_reg->var_off.mask;
if (dst_reg->smin_value < 0 || smin_val < 0) {
/* Lose signed bounds when ORing negative numbers,
* ain't nobody got time for that.
*/
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
/* ORing two positives gives a positive, so safe to
* cast result into s64.
*/
dst_reg->smin_value = dst_reg->umin_value;
dst_reg->smax_value = dst_reg->umax_value;
}
/* We may learn something more from the var_off */
__update_reg_bounds(dst_reg);
break;
case BPF_LSH:
if (umax_val >= insn_bitness) {
/* Shifts greater than 31 or 63 are undefined.
* This includes shifts by a negative number.
*/
mark_reg_unknown(env, regs, insn->dst_reg);
break;
}
/* We lose all sign bit information (except what we can pick
* up from var_off)
*/
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
/* If we might shift our top bit out, then we know nothing */
if (dst_reg->umax_value > 1ULL << (63 - umax_val)) {
dst_reg->umin_value = 0;
dst_reg->umax_value = U64_MAX;
} else {
dst_reg->umin_value <<= umin_val;
dst_reg->umax_value <<= umax_val;
}
if (src_known)
dst_reg->var_off = tnum_lshift(dst_reg->var_off, umin_val);
else
dst_reg->var_off = tnum_lshift(tnum_unknown, umin_val);
/* We may learn something more from the var_off */
__update_reg_bounds(dst_reg);
break;
case BPF_RSH:
if (umax_val >= insn_bitness) {
/* Shifts greater than 31 or 63 are undefined.
* This includes shifts by a negative number.
*/
mark_reg_unknown(env, regs, insn->dst_reg);
break;
}
/* BPF_RSH is an unsigned shift. If the value in dst_reg might
* be negative, then either:
* 1) src_reg might be zero, so the sign bit of the result is
* unknown, so we lose our signed bounds
* 2) it's known negative, thus the unsigned bounds capture the
* signed bounds
* 3) the signed bounds cross zero, so they tell us nothing
* about the result
* If the value in dst_reg is known nonnegative, then again the
* unsigned bounds capture the signed bounds.
* Thus, in all cases it suffices to blow away our signed bounds
* and rely on inferring new ones from the unsigned bounds and
* var_off of the result.
*/
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
if (src_known)
dst_reg->var_off = tnum_rshift(dst_reg->var_off,
umin_val);
else
dst_reg->var_off = tnum_rshift(tnum_unknown, umin_val);
dst_reg->umin_value >>= umax_val;
dst_reg->umax_value >>= umin_val;
/* We may learn something more from the var_off */
__update_reg_bounds(dst_reg);
break;
default:
mark_reg_unknown(env, regs, insn->dst_reg);
break;
}
if (BPF_CLASS(insn->code) != BPF_ALU64) {
/* 32-bit ALU ops are (32,32)->32 */
coerce_reg_to_size(dst_reg, 4);
coerce_reg_to_size(&src_reg, 4);
}
__reg_deduce_bounds(dst_reg);
__reg_bound_offset(dst_reg);
return 0;
}
| [[2034, "\tu64 insn_bitness = (BPF_CLASS(insn->code) == BPF_ALU64) ? 64 : 32;\n"], [2171, "\t\tif (umax_val >= insn_bitness) {\n"], [2172, "\t\t\t/* Shifts greater than 31 or 63 are undefined.\n"], [2173, "\t\t\t * This includes shifts by a negative number.\n"], [2199, "\t\tif (umax_val >= insn_bitness) {\n"], [2200, "\t\t\t/* Shifts greater than 31 or 63 are undefined.\n"], [2201, "\t\t\t * This includes shifts by a negative number.\n"], [2237, "\tif (BPF_CLASS(insn->code) != BPF_ALU64) {\n"], [2238, "\t\t/* 32-bit ALU ops are (32,32)->32 */\n"], [2239, "\t\tcoerce_reg_to_size(dst_reg, 4);\n"], [2240, "\t\tcoerce_reg_to_size(&src_reg, 4);\n"], [2241, "\t}\n"], [2242, "\n"]] | [[2034, "u64 insn_bitness = (BPF_CLASS(insn->code) == BPF_ALU64) ? 64 : 32;"], [2171, "if (umax_val >= insn_bitness)"], [2172, "/* Shifts greater than 31 or 63 are undefined.\n\t\t\t * This includes shifts by a negative number.\n\t\t\t */"], [2199, "if (umax_val >= insn_bitness)"], [2200, "/* Shifts greater than 31 or 63 are undefined.\n\t\t\t * This includes shifts by a negative number.\n\t\t\t */"], [2237, "if (BPF_CLASS(insn->code) != BPF_ALU64)"], [2238, "/* 32-bit ALU ops are (32,32)->32 */"], [2239, "coerce_reg_to_size(dst_reg, 4);"], [2240, "coerce_reg_to_size(&src_reg, 4);"], [2241, "\t}\n"], [2242, "\n"]] | [
"CVE-2017-17852"
] | [
"CWE-119"
] | 88 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"cur_regs",
"BPF_CLASS",
"BPF_OP",
"coerce_reg_to_size",
"signed_add_overflows",
"signed_sub_overflows",
"tnum_is_const",
"tnum_add",
"tnum_sub",
"tnum_mul",
"tnum_and",
"tnum_or",
"tnum_lshift",
"tnum_rshift",
"mark_reg_unknown",
"__mark_reg_unbounded",
"__mark_reg_known",
"__update_reg_bounds",
"__reg_deduce_bounds",
"__reg_bound_offset",
"min",
"max"
],
"Function Argument": [
"env",
"insn",
"dst_reg",
"src_reg"
],
"Globals": [
"BPF_ALU64",
"BPF_ADD",
"BPF_SUB",
"BPF_MUL",
"BPF_AND",
"BPF_OR",
"BPF_LSH",
"BPF_RSH",
"S64_MIN",
"S64_MAX",
"U64_MAX"
],
"Type Execution Declaration": [
"struct bpf_verifier_env",
"struct bpf_insn",
"struct bpf_reg_state"
]
} |
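
Coercing after the operation, as the fixed function does, means truncating 64-bit tracked bounds to 32 bits, and simple masking is only sound when the range does not cross a multiple of 2^32; otherwise the low 32 bits wrap. The sketch below captures that rule; trunc32() is a hypothetical helper, not the kernel's coerce_reg_to_size():

/* Sketch of truncating tracked 64-bit unsigned bounds to 32 bits after
 * the operation.  When the range crosses a multiple of 2^32 the low
 * 32 bits wrap, so the only sound 32-bit bounds are [0, U32_MAX]. */
#include <stdio.h>
#include <stdint.h>

static void trunc32(uint64_t *umin, uint64_t *umax)
{
	if ((*umin >> 32) != (*umax >> 32)) {	/* range wraps mod 2^32 */
		*umin = 0;
		*umax = 0xffffffffull;
	} else {				/* plain masking is sound */
		*umin &= 0xffffffffull;
		*umax &= 0xffffffffull;
	}
}

int main(void)
{
	uint64_t lo = 0xfffffffeull, hi = 0x100000001ull;

	trunc32(&lo, &hi);
	printf("[%#llx, %#llx]\n",
	       (unsigned long long)lo, (unsigned long long)hi);
	return 0;	/* prints [0, 0xffffffff] */
}
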
89 | linux | https://github.com/torvalds/linux | drivers/edac/thunderx_edac.c | 475c58e1a471e9b873e3e39958c64a2d278275c8 | EDAC/thunderx: Fix possible out-of-bounds string access
Enabling -Wstringop-overflow globally exposes a warning for a common bug
in the usage of strncat():
drivers/edac/thunderx_edac.c: In function 'thunderx_ocx_com_threaded_isr':
drivers/edac/thunderx_edac.c:1136:17: error: 'strncat' specified bound 1024 equals destination size [-Werror=stringop-overflow=]
1136 | strncat(msg, other, OCX_MESSAGE_SIZE);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
...
1145 | strncat(msg, other, OCX_MESSAGE_SIZE);
...
1150 | strncat(msg, other, OCX_MESSAGE_SIZE);
...
Apparently the author of this driver expected strncat() to behave the
way that strlcat() does, which uses the size of the destination buffer
as its third argument rather than the length of the source buffer. The
result is that there is no check on the size of the allocated buffer.
Change it to strlcat().
[ bp: Trim compiler output, fixup commit message. ]
Fixes: 41003396f932 ("EDAC, thunderx: Add Cavium ThunderX EDAC driver")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Link: https://lore.kernel.org/r/20231122222007.3199885-1-arnd@kernel.org | true | 613329b4c8f761ef99c76a284ecf07de | thunderx_ocx_com_threaded_isr | static irqreturn_t thunderx_ocx_com_threaded_isr(int irq, void *irq_id)
{
struct msix_entry *msix = irq_id;
struct thunderx_ocx *ocx = container_of(msix, struct thunderx_ocx,
msix_ent[msix->entry]);
irqreturn_t ret = IRQ_NONE;
unsigned long tail;
struct ocx_com_err_ctx *ctx;
int lane;
char *msg;
char *other;
msg = kmalloc(OCX_MESSAGE_SIZE, GFP_KERNEL);
other = kmalloc(OCX_OTHER_SIZE, GFP_KERNEL);
if (!msg || !other)
goto err_free;
while (CIRC_CNT(ocx->com_ring_head, ocx->com_ring_tail,
ARRAY_SIZE(ocx->com_err_ctx))) {
tail = ring_pos(ocx->com_ring_tail,
ARRAY_SIZE(ocx->com_err_ctx));
ctx = &ocx->com_err_ctx[tail];
snprintf(msg, OCX_MESSAGE_SIZE, "%s: OCX_COM_INT: %016llx",
ocx->edac_dev->ctl_name, ctx->reg_com_int);
decode_register(other, OCX_OTHER_SIZE,
ocx_com_errors, ctx->reg_com_int);
strncat(msg, other, OCX_MESSAGE_SIZE);
for (lane = 0; lane < OCX_RX_LANES; lane++)
if (ctx->reg_com_int & BIT(lane)) {
snprintf(other, OCX_OTHER_SIZE,
"\n\tOCX_LNE_INT[%02d]: %016llx OCX_LNE_STAT11[%02d]: %016llx",
lane, ctx->reg_lane_int[lane],
lane, ctx->reg_lane_stat11[lane]);
strncat(msg, other, OCX_MESSAGE_SIZE);
decode_register(other, OCX_OTHER_SIZE,
ocx_lane_errors,
ctx->reg_lane_int[lane]);
strncat(msg, other, OCX_MESSAGE_SIZE);
}
if (ctx->reg_com_int & OCX_COM_INT_CE)
edac_device_handle_ce(ocx->edac_dev, 0, 0, msg);
ocx->com_ring_tail++;
}
ret = IRQ_HANDLED;
err_free:
kfree(other);
kfree(msg);
return ret;
}
| [[1136, "\t\tstrncat(msg, other, OCX_MESSAGE_SIZE);\n"], [1145, "\t\t\t\tstrncat(msg, other, OCX_MESSAGE_SIZE);\n"], [1150, "\t\t\t\tstrncat(msg, other, OCX_MESSAGE_SIZE);\n"]] | [[1136, "strncat(msg, other, OCX_MESSAGE_SIZE);"], [1145, "strncat(msg, other, OCX_MESSAGE_SIZE);"], [1150, "strncat(msg, other, OCX_MESSAGE_SIZE);"]] | [
"CVE-2023-52464"
] | [
"CWE-119"
] | 90 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"decode_register"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": [
"OCX_MESSAGE_SIZE",
"OCX_OTHER_SIZE"
]
} |
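
The confusion this commit describes comes down to the third argument: strncat(dst, src, n) appends up to n bytes of src plus a terminating NUL regardless of how large dst is, while strlcat(dst, src, size) bounds the total size of dst. A userspace contrast follows; glibc ships no strlcat(), so a minimal stand-in is defined here as an assumption, not libbsd's or the kernel's implementation:

/* Contrast of strncat() vs strlcat() semantics behind this fix.
 * my_strlcat() is a simplified illustrative stand-in. */
#include <stdio.h>
#include <string.h>

static size_t my_strlcat(char *dst, const char *src, size_t size)
{
	size_t dlen = strlen(dst);

	if (size > dlen + 1)			/* room left in dst? */
		strncat(dst, src, size - dlen - 1);
	return dlen + strlen(src);	/* length it tried to create */
}

int main(void)
{
	char buf[8] = "abc";

	/* strncat(buf, "0123456789", sizeof(buf)) would append up to 8
	 * bytes plus a NUL into the 8-byte buffer already holding
	 * "abc", the overflow pattern the commit removes, so it is
	 * deliberately not called here. */
	my_strlcat(buf, "0123456789", sizeof(buf));
	printf("strlcat result: \"%s\" (still inside %zu bytes)\n",
	       buf, sizeof(buf));
	return 0;
}
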
90 | linux | https://github.com/torvalds/linux | drivers/edac/thunderx_edac.c | 475c58e1a471e9b873e3e39958c64a2d278275c8 | EDAC/thunderx: Fix possible out-of-bounds string access
Enabling -Wstringop-overflow globally exposes a warning for a common bug
in the usage of strncat():
drivers/edac/thunderx_edac.c: In function 'thunderx_ocx_com_threaded_isr':
drivers/edac/thunderx_edac.c:1136:17: error: 'strncat' specified bound 1024 equals destination size [-Werror=stringop-overflow=]
1136 | strncat(msg, other, OCX_MESSAGE_SIZE);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
...
1145 | strncat(msg, other, OCX_MESSAGE_SIZE);
...
1150 | strncat(msg, other, OCX_MESSAGE_SIZE);
...
Apparently the author of this driver expected strncat() to behave the
way that strlcat() does, which uses the size of the destination buffer
as its third argument rather than the length of the source buffer. The
result is that there is no check on the size of the allocated buffer.
Change it to strlcat().
[ bp: Trim compiler output, fixup commit message. ]
Fixes: 41003396f932 ("EDAC, thunderx: Add Cavium ThunderX EDAC driver")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Link: https://lore.kernel.org/r/20231122222007.3199885-1-arnd@kernel.org | false | 820b4c2c3b4ba8e47d2e7ae7274444c4 | thunderx_ocx_com_threaded_isr | static irqreturn_t thunderx_ocx_com_threaded_isr(int irq, void *irq_id)
{
struct msix_entry *msix = irq_id;
struct thunderx_ocx *ocx = container_of(msix, struct thunderx_ocx,
msix_ent[msix->entry]);
irqreturn_t ret = IRQ_NONE;
unsigned long tail;
struct ocx_com_err_ctx *ctx;
int lane;
char *msg;
char *other;
msg = kmalloc(OCX_MESSAGE_SIZE, GFP_KERNEL);
other = kmalloc(OCX_OTHER_SIZE, GFP_KERNEL);
if (!msg || !other)
goto err_free;
while (CIRC_CNT(ocx->com_ring_head, ocx->com_ring_tail,
ARRAY_SIZE(ocx->com_err_ctx))) {
tail = ring_pos(ocx->com_ring_tail,
ARRAY_SIZE(ocx->com_err_ctx));
ctx = &ocx->com_err_ctx[tail];
snprintf(msg, OCX_MESSAGE_SIZE, "%s: OCX_COM_INT: %016llx",
ocx->edac_dev->ctl_name, ctx->reg_com_int);
decode_register(other, OCX_OTHER_SIZE,
ocx_com_errors, ctx->reg_com_int);
strlcat(msg, other, OCX_MESSAGE_SIZE);
for (lane = 0; lane < OCX_RX_LANES; lane++)
if (ctx->reg_com_int & BIT(lane)) {
snprintf(other, OCX_OTHER_SIZE,
"\n\tOCX_LNE_INT[%02d]: %016llx OCX_LNE_STAT11[%02d]: %016llx",
lane, ctx->reg_lane_int[lane],
lane, ctx->reg_lane_stat11[lane]);
strlcat(msg, other, OCX_MESSAGE_SIZE);
decode_register(other, OCX_OTHER_SIZE,
ocx_lane_errors,
ctx->reg_lane_int[lane]);
strlcat(msg, other, OCX_MESSAGE_SIZE);
}
if (ctx->reg_com_int & OCX_COM_INT_CE)
edac_device_handle_ce(ocx->edac_dev, 0, 0, msg);
ocx->com_ring_tail++;
}
ret = IRQ_HANDLED;
err_free:
kfree(other);
kfree(msg);
return ret;
}
| [[1136, "\t\tstrlcat(msg, other, OCX_MESSAGE_SIZE);\n"], [1145, "\t\t\t\tstrlcat(msg, other, OCX_MESSAGE_SIZE);\n"], [1150, "\t\t\t\tstrlcat(msg, other, OCX_MESSAGE_SIZE);\n"]] | [[1136, "strlcat(msg, other, OCX_MESSAGE_SIZE);"], [1145, "strlcat(msg, other, OCX_MESSAGE_SIZE);"], [1150, "strlcat(msg, other, OCX_MESSAGE_SIZE);"]] | [
"CVE-2023-52464"
] | [
"CWE-119"
] | 90 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"decode_register"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": [
"OCX_MESSAGE_SIZE",
"OCX_OTHER_SIZE"
]
} |
91 | linux | https://github.com/torvalds/linux | drivers/net/virtio_net.c | 48900cb6af4282fa0fb6ff4d72a81aa3dadb5c39 | virtio-net: drop NETIF_F_FRAGLIST
virtio declares support for NETIF_F_FRAGLIST, but assumes
that there are at most MAX_SKB_FRAGS + 2 fragments, which isn't
always true with a fraglist.
A longer fraglist in the skb will make the call to skb_to_sgvec overflow
the sg array, leading to memory corruption.
Drop NETIF_F_FRAGLIST so we only get what we can handle.
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | true | 4ee5395c83acadd472a234db87a1f0ca | virtnet_probe | static int virtnet_probe(struct virtio_device *vdev)
{
int i, err;
struct net_device *dev;
struct virtnet_info *vi;
u16 max_queue_pairs;
if (!vdev->config->get) {
dev_err(&vdev->dev, "%s failure: config access disabled\n",
__func__);
return -EINVAL;
}
if (!virtnet_validate_features(vdev))
return -EINVAL;
/* Find if host supports multiqueue virtio_net device */
err = virtio_cread_feature(vdev, VIRTIO_NET_F_MQ,
struct virtio_net_config,
max_virtqueue_pairs, &max_queue_pairs);
/* We need at least 2 queue's */
if (err || max_queue_pairs < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN ||
max_queue_pairs > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX ||
!virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ))
max_queue_pairs = 1;
/* Allocate ourselves a network device with room for our info */
dev = alloc_etherdev_mq(sizeof(struct virtnet_info), max_queue_pairs);
if (!dev)
return -ENOMEM;
/* Set up network device as normal. */
dev->priv_flags |= IFF_UNICAST_FLT | IFF_LIVE_ADDR_CHANGE;
dev->netdev_ops = &virtnet_netdev;
dev->features = NETIF_F_HIGHDMA;
dev->ethtool_ops = &virtnet_ethtool_ops;
SET_NETDEV_DEV(dev, &vdev->dev);
/* Do we support "hardware" checksums? */
if (virtio_has_feature(vdev, VIRTIO_NET_F_CSUM)) {
/* This opens up the world of extra features. */
dev->hw_features |= NETIF_F_HW_CSUM|NETIF_F_SG|NETIF_F_FRAGLIST;
if (csum)
dev->features |= NETIF_F_HW_CSUM|NETIF_F_SG|NETIF_F_FRAGLIST;
if (virtio_has_feature(vdev, VIRTIO_NET_F_GSO)) {
dev->hw_features |= NETIF_F_TSO | NETIF_F_UFO
| NETIF_F_TSO_ECN | NETIF_F_TSO6;
}
/* Individual feature bits: what can host handle? */
if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_TSO4))
dev->hw_features |= NETIF_F_TSO;
if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_TSO6))
dev->hw_features |= NETIF_F_TSO6;
if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_ECN))
dev->hw_features |= NETIF_F_TSO_ECN;
if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_UFO))
dev->hw_features |= NETIF_F_UFO;
dev->features |= NETIF_F_GSO_ROBUST;
if (gso)
dev->features |= dev->hw_features & (NETIF_F_ALL_TSO|NETIF_F_UFO);
/* (!csum && gso) case will be fixed by register_netdev() */
}
if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_CSUM))
dev->features |= NETIF_F_RXCSUM;
dev->vlan_features = dev->features;
/* Configuration may specify what MAC to use. Otherwise random. */
if (virtio_has_feature(vdev, VIRTIO_NET_F_MAC))
virtio_cread_bytes(vdev,
offsetof(struct virtio_net_config, mac),
dev->dev_addr, dev->addr_len);
else
eth_hw_addr_random(dev);
/* Set up our device-specific information */
vi = netdev_priv(dev);
vi->dev = dev;
vi->vdev = vdev;
vdev->priv = vi;
vi->stats = alloc_percpu(struct virtnet_stats);
err = -ENOMEM;
if (vi->stats == NULL)
goto free;
for_each_possible_cpu(i) {
struct virtnet_stats *virtnet_stats;
virtnet_stats = per_cpu_ptr(vi->stats, i);
u64_stats_init(&virtnet_stats->tx_syncp);
u64_stats_init(&virtnet_stats->rx_syncp);
}
INIT_WORK(&vi->config_work, virtnet_config_changed_work);
/* If we can receive ANY GSO packets, we must allocate large ones. */
if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO4) ||
virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO6) ||
virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_ECN) ||
virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_UFO))
vi->big_packets = true;
if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
vi->mergeable_rx_bufs = true;
if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF) ||
virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
vi->hdr_len = sizeof(struct virtio_net_hdr_mrg_rxbuf);
else
vi->hdr_len = sizeof(struct virtio_net_hdr);
if (virtio_has_feature(vdev, VIRTIO_F_ANY_LAYOUT) ||
virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
vi->any_header_sg = true;
if (virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ))
vi->has_cvq = true;
if (vi->any_header_sg)
dev->needed_headroom = vi->hdr_len;
/* Use single tx/rx queue pair as default */
vi->curr_queue_pairs = 1;
vi->max_queue_pairs = max_queue_pairs;
/* Allocate/initialize the rx/tx queues, and invoke find_vqs */
err = init_vqs(vi);
if (err)
goto free_stats;
#ifdef CONFIG_SYSFS
if (vi->mergeable_rx_bufs)
dev->sysfs_rx_queue_group = &virtio_net_mrg_rx_group;
#endif
netif_set_real_num_tx_queues(dev, vi->curr_queue_pairs);
netif_set_real_num_rx_queues(dev, vi->curr_queue_pairs);
err = register_netdev(dev);
if (err) {
pr_debug("virtio_net: registering device failed\n");
goto free_vqs;
}
virtio_device_ready(vdev);
/* Last of all, set up some receive buffers. */
for (i = 0; i < vi->curr_queue_pairs; i++) {
try_fill_recv(vi, &vi->rq[i], GFP_KERNEL);
/* If we didn't even get one input buffer, we're useless. */
if (vi->rq[i].vq->num_free ==
virtqueue_get_vring_size(vi->rq[i].vq)) {
free_unused_bufs(vi);
err = -ENOMEM;
goto free_recv_bufs;
}
}
vi->nb.notifier_call = &virtnet_cpu_callback;
err = register_hotcpu_notifier(&vi->nb);
if (err) {
pr_debug("virtio_net: registering cpu notifier failed\n");
goto free_recv_bufs;
}
/* Assume link up if device can't report link status,
otherwise get link status from config. */
if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_STATUS)) {
netif_carrier_off(dev);
schedule_work(&vi->config_work);
} else {
vi->status = VIRTIO_NET_S_LINK_UP;
netif_carrier_on(dev);
}
pr_debug("virtnet: registered device %s with %d RX and TX vq's\n",
dev->name, max_queue_pairs);
return 0;
free_recv_bufs:
vi->vdev->config->reset(vdev);
free_receive_bufs(vi);
unregister_netdev(dev);
free_vqs:
cancel_delayed_work_sync(&vi->refill);
free_receive_page_frags(vi);
virtnet_del_vqs(vi);
free_stats:
free_percpu(vi->stats);
free:
free_netdev(dev);
return err;
}
| [[1759, "\t\tdev->hw_features |= NETIF_F_HW_CSUM|NETIF_F_SG|NETIF_F_FRAGLIST;\n"], [1761, "\t\t\tdev->features |= NETIF_F_HW_CSUM|NETIF_F_SG|NETIF_F_FRAGLIST;\n"]] | [[1759, "dev->hw_features |= NETIF_F_HW_CSUM|NETIF_F_SG|NETIF_F_FRAGLIST;"], [1761, "dev->features |= NETIF_F_HW_CSUM|NETIF_F_SG|NETIF_F_FRAGLIST;"]] | [
"CVE-2015-5156"
] | [
"CWE-119"
] | 92 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [
"NETIF_F_FRAGLIST",
"MAX_SKB_FRAGS"
],
"Type Execution Declaration": []
} |
92 | linux | https://github.com/torvalds/linux | drivers/net/virtio_net.c | 48900cb6af4282fa0fb6ff4d72a81aa3dadb5c39 | virtio-net: drop NETIF_F_FRAGLIST
virtio declares support for NETIF_F_FRAGLIST, but assumes
that there are at most MAX_SKB_FRAGS + 2 fragments which isn't
always true with a fraglist.
A longer fraglist in the skb will make the call to skb_to_sgvec overflow
the sg array, leading to memory corruption.
Drop NETIF_F_FRAGLIST so we only get what we can handle.
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | false | 39c937d46c5354959db4476bca14afc8 | virtnet_probe | static int virtnet_probe(struct virtio_device *vdev)
{
int i, err;
struct net_device *dev;
struct virtnet_info *vi;
u16 max_queue_pairs;
if (!vdev->config->get) {
dev_err(&vdev->dev, "%s failure: config access disabled\n",
__func__);
return -EINVAL;
}
if (!virtnet_validate_features(vdev))
return -EINVAL;
/* Find if host supports multiqueue virtio_net device */
err = virtio_cread_feature(vdev, VIRTIO_NET_F_MQ,
struct virtio_net_config,
max_virtqueue_pairs, &max_queue_pairs);
/* We need at least 2 queue's */
if (err || max_queue_pairs < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN ||
max_queue_pairs > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX ||
!virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ))
max_queue_pairs = 1;
/* Allocate ourselves a network device with room for our info */
dev = alloc_etherdev_mq(sizeof(struct virtnet_info), max_queue_pairs);
if (!dev)
return -ENOMEM;
/* Set up network device as normal. */
dev->priv_flags |= IFF_UNICAST_FLT | IFF_LIVE_ADDR_CHANGE;
dev->netdev_ops = &virtnet_netdev;
dev->features = NETIF_F_HIGHDMA;
dev->ethtool_ops = &virtnet_ethtool_ops;
SET_NETDEV_DEV(dev, &vdev->dev);
/* Do we support "hardware" checksums? */
if (virtio_has_feature(vdev, VIRTIO_NET_F_CSUM)) {
/* This opens up the world of extra features. */
dev->hw_features |= NETIF_F_HW_CSUM | NETIF_F_SG;
if (csum)
dev->features |= NETIF_F_HW_CSUM | NETIF_F_SG;
if (virtio_has_feature(vdev, VIRTIO_NET_F_GSO)) {
dev->hw_features |= NETIF_F_TSO | NETIF_F_UFO
| NETIF_F_TSO_ECN | NETIF_F_TSO6;
}
/* Individual feature bits: what can host handle? */
if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_TSO4))
dev->hw_features |= NETIF_F_TSO;
if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_TSO6))
dev->hw_features |= NETIF_F_TSO6;
if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_ECN))
dev->hw_features |= NETIF_F_TSO_ECN;
if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_UFO))
dev->hw_features |= NETIF_F_UFO;
dev->features |= NETIF_F_GSO_ROBUST;
if (gso)
dev->features |= dev->hw_features & (NETIF_F_ALL_TSO|NETIF_F_UFO);
/* (!csum && gso) case will be fixed by register_netdev() */
}
if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_CSUM))
dev->features |= NETIF_F_RXCSUM;
dev->vlan_features = dev->features;
/* Configuration may specify what MAC to use. Otherwise random. */
if (virtio_has_feature(vdev, VIRTIO_NET_F_MAC))
virtio_cread_bytes(vdev,
offsetof(struct virtio_net_config, mac),
dev->dev_addr, dev->addr_len);
else
eth_hw_addr_random(dev);
/* Set up our device-specific information */
vi = netdev_priv(dev);
vi->dev = dev;
vi->vdev = vdev;
vdev->priv = vi;
vi->stats = alloc_percpu(struct virtnet_stats);
err = -ENOMEM;
if (vi->stats == NULL)
goto free;
for_each_possible_cpu(i) {
struct virtnet_stats *virtnet_stats;
virtnet_stats = per_cpu_ptr(vi->stats, i);
u64_stats_init(&virtnet_stats->tx_syncp);
u64_stats_init(&virtnet_stats->rx_syncp);
}
INIT_WORK(&vi->config_work, virtnet_config_changed_work);
/* If we can receive ANY GSO packets, we must allocate large ones. */
if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO4) ||
virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO6) ||
virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_ECN) ||
virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_UFO))
vi->big_packets = true;
if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
vi->mergeable_rx_bufs = true;
if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF) ||
virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
vi->hdr_len = sizeof(struct virtio_net_hdr_mrg_rxbuf);
else
vi->hdr_len = sizeof(struct virtio_net_hdr);
if (virtio_has_feature(vdev, VIRTIO_F_ANY_LAYOUT) ||
virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
vi->any_header_sg = true;
if (virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ))
vi->has_cvq = true;
if (vi->any_header_sg)
dev->needed_headroom = vi->hdr_len;
/* Use single tx/rx queue pair as default */
vi->curr_queue_pairs = 1;
vi->max_queue_pairs = max_queue_pairs;
/* Allocate/initialize the rx/tx queues, and invoke find_vqs */
err = init_vqs(vi);
if (err)
goto free_stats;
#ifdef CONFIG_SYSFS
if (vi->mergeable_rx_bufs)
dev->sysfs_rx_queue_group = &virtio_net_mrg_rx_group;
#endif
netif_set_real_num_tx_queues(dev, vi->curr_queue_pairs);
netif_set_real_num_rx_queues(dev, vi->curr_queue_pairs);
err = register_netdev(dev);
if (err) {
pr_debug("virtio_net: registering device failed\n");
goto free_vqs;
}
virtio_device_ready(vdev);
/* Last of all, set up some receive buffers. */
for (i = 0; i < vi->curr_queue_pairs; i++) {
try_fill_recv(vi, &vi->rq[i], GFP_KERNEL);
/* If we didn't even get one input buffer, we're useless. */
if (vi->rq[i].vq->num_free ==
virtqueue_get_vring_size(vi->rq[i].vq)) {
free_unused_bufs(vi);
err = -ENOMEM;
goto free_recv_bufs;
}
}
vi->nb.notifier_call = &virtnet_cpu_callback;
err = register_hotcpu_notifier(&vi->nb);
if (err) {
pr_debug("virtio_net: registering cpu notifier failed\n");
goto free_recv_bufs;
}
/* Assume link up if device can't report link status,
otherwise get link status from config. */
if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_STATUS)) {
netif_carrier_off(dev);
schedule_work(&vi->config_work);
} else {
vi->status = VIRTIO_NET_S_LINK_UP;
netif_carrier_on(dev);
}
pr_debug("virtnet: registered device %s with %d RX and TX vq's\n",
dev->name, max_queue_pairs);
return 0;
free_recv_bufs:
vi->vdev->config->reset(vdev);
free_receive_bufs(vi);
unregister_netdev(dev);
free_vqs:
cancel_delayed_work_sync(&vi->refill);
free_receive_page_frags(vi);
virtnet_del_vqs(vi);
free_stats:
free_percpu(vi->stats);
free:
free_netdev(dev);
return err;
}
| [[1759, "\t\tdev->hw_features |= NETIF_F_HW_CSUM | NETIF_F_SG;\n"], [1761, "\t\t\tdev->features |= NETIF_F_HW_CSUM | NETIF_F_SG;\n"]] | [[1759, "dev->hw_features |= NETIF_F_HW_CSUM | NETIF_F_SG;"], [1761, "dev->features |= NETIF_F_HW_CSUM | NETIF_F_SG;"]] | [
"CVE-2015-5156"
] | [
"CWE-119"
] | 92 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [
"NETIF_F_FRAGLIST",
"MAX_SKB_FRAGS"
],
"Type Execution Declaration": []
} |
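An annotation on the two `virtnet_probe` rows above: the overflow described in the commit message comes from a fixed-size scatterlist, and the one-line change recorded in `changed_lines` (dropping `NETIF_F_FRAGLIST` from `dev->features`/`dev->hw_features`) is the whole fix. Below is a minimal sketch of the failure mode, assuming hypothetical names (`sq_sketch`, `xmit_sketch`); only the `MAX_SKB_FRAGS + 2` bound mirrors the driver's real per-queue sizing.

```c
#include <linux/skbuff.h>
#include <linux/scatterlist.h>

/* Hypothetical stand-in for the driver's per-queue state. */
struct sq_sketch {
	/* one entry for the virtio header, one for the linear area,
	 * and at most MAX_SKB_FRAGS entries for page fragments */
	struct scatterlist sg[MAX_SKB_FRAGS + 2];
};

static int xmit_sketch(struct sq_sketch *sq, struct sk_buff *skb)
{
	sg_init_table(sq->sg, ARRAY_SIZE(sq->sg));
	/*
	 * skb_to_sgvec() also recurses over skb_shinfo(skb)->frag_list,
	 * and each skb chained there contributes its own linear area and
	 * fragments.  With NETIF_F_FRAGLIST advertised, the entry count
	 * is not bounded by MAX_SKB_FRAGS + 2, so the fixed array above
	 * can be overrun -- hence the fix stops advertising the feature.
	 */
	return skb_to_sgvec(skb, sq->sg + 1, 0, skb->len);
}
```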
93 | linux | https://github.com/torvalds/linux | drivers/hid/hid-cherry.c | 4ab25786c87eb20857bbb715c3ae34ec8fd6a214 | HID: fix a couple of off-by-ones
There are a few very theoretical off-by-one bugs in report descriptor size
checking when performing a pre-parsing fixup. Fix those.
Cc: stable@vger.kernel.org
Reported-by: Ben Hawkes <hawkes@google.com>
Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz> | true | 7078cb37d711fc277f8f0139fea12d59 | ch_report_fixup | static __u8 *ch_report_fixup(struct hid_device *hdev, __u8 *rdesc,
unsigned int *rsize)
{
if (*rsize >= 17 && rdesc[11] == 0x3c && rdesc[12] == 0x02) {
hid_info(hdev, "fixing up Cherry Cymotion report descriptor\n");
rdesc[11] = rdesc[16] = 0xff;
rdesc[12] = rdesc[17] = 0x03;
}
return rdesc;
}
| [[31, "\tif (*rsize >= 17 && rdesc[11] == 0x3c && rdesc[12] == 0x02) {\n"]] | [[31, "if (*rsize >= 17 && rdesc[11] == 0x3c && rdesc[12] == 0x02)"]] | [
"CVE-2014-3184"
] | [
"CWE-119"
] | 94 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [
"rdesc",
"rsize"
],
"Globals": [],
"Type Execution Declaration": []
} |
94 | linux | https://github.com/torvalds/linux | drivers/hid/hid-cherry.c | 4ab25786c87eb20857bbb715c3ae34ec8fd6a214 | HID: fix a couple of off-by-ones
There are a few very theoretical off-by-one bugs in report descriptor size
checking when performing a pre-parsing fixup. Fix those.
Cc: stable@vger.kernel.org
Reported-by: Ben Hawkes <hawkes@google.com>
Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz> | false | dd46fe8afcfdd405f35ff039f320b512 | ch_report_fixup | static __u8 *ch_report_fixup(struct hid_device *hdev, __u8 *rdesc,
unsigned int *rsize)
{
if (*rsize >= 18 && rdesc[11] == 0x3c && rdesc[12] == 0x02) {
hid_info(hdev, "fixing up Cherry Cymotion report descriptor\n");
rdesc[11] = rdesc[16] = 0xff;
rdesc[12] = rdesc[17] = 0x03;
}
return rdesc;
}
| [[31, "\tif (*rsize >= 18 && rdesc[11] == 0x3c && rdesc[12] == 0x02) {\n"]] | [[31, "if (*rsize >= 18 && rdesc[11] == 0x3c && rdesc[12] == 0x02)"]] | [
"CVE-2014-3184"
] | [
"CWE-119"
] | 94 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [
"rdesc",
"rsize"
],
"Globals": [],
"Type Execution Declaration": []
} |
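An annotation on rows 93-94: the two `ch_report_fixup()` variants differ only in the guard (`*rsize >= 17` versus `*rsize >= 18`), yet both go on to write `rdesc[17]`. The rule is simply that touching `rdesc[i]` requires `i + 1` bytes. A minimal, self-contained sketch with a hypothetical helper (not a kernel API):

```c
#include <stdbool.h>
#include <stddef.h>

/* Touching rdesc[idx] is safe only when idx < rsize,
 * i.e. rsize >= idx + 1. */
static bool rdesc_index_ok(size_t rsize, size_t idx)
{
	return rsize >= idx + 1;
}

/*
 * The vulnerable guard "*rsize >= 17" accepts a 17-byte descriptor
 * (valid indices 0..16), so the write to rdesc[17] lands one byte past
 * the buffer; the fixed guard matches rdesc_index_ok(18, 17) == true,
 * while rdesc_index_ok(17, 17) == false.
 */
```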
95 | linux | https://github.com/torvalds/linux | drivers/hid/hid-kye.c | 4ab25786c87eb20857bbb715c3ae34ec8fd6a214 | HID: fix a couple of off-by-ones
There are a few very theoretical off-by-one bugs in report descriptor size
checking when performing a pre-parsing fixup. Fix those.
Cc: stable@vger.kernel.org
Reported-by: Ben Hawkes <hawkes@google.com>
Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz> | true | d3d0680a256c71e6969ed7e2635dbb00 | kye_report_fixup | static __u8 *kye_report_fixup(struct hid_device *hdev, __u8 *rdesc,
unsigned int *rsize)
{
switch (hdev->product) {
case USB_DEVICE_ID_KYE_ERGO_525V:
/* the fixups that need to be done:
* - change led usage page to button for extra buttons
* - report size 8 count 1 must be size 1 count 8 for button
* bitfield
* - change the button usage range to 4-7 for the extra
* buttons
*/
if (*rsize >= 74 &&
rdesc[61] == 0x05 && rdesc[62] == 0x08 &&
rdesc[63] == 0x19 && rdesc[64] == 0x08 &&
rdesc[65] == 0x29 && rdesc[66] == 0x0f &&
rdesc[71] == 0x75 && rdesc[72] == 0x08 &&
rdesc[73] == 0x95 && rdesc[74] == 0x01) {
hid_info(hdev,
"fixing up Kye/Genius Ergo Mouse "
"report descriptor\n");
rdesc[62] = 0x09;
rdesc[64] = 0x04;
rdesc[66] = 0x07;
rdesc[72] = 0x01;
rdesc[74] = 0x08;
}
break;
case USB_DEVICE_ID_KYE_EASYPEN_I405X:
if (*rsize == EASYPEN_I405X_RDESC_ORIG_SIZE) {
rdesc = easypen_i405x_rdesc_fixed;
*rsize = sizeof(easypen_i405x_rdesc_fixed);
}
break;
case USB_DEVICE_ID_KYE_MOUSEPEN_I608X:
if (*rsize == MOUSEPEN_I608X_RDESC_ORIG_SIZE) {
rdesc = mousepen_i608x_rdesc_fixed;
*rsize = sizeof(mousepen_i608x_rdesc_fixed);
}
break;
case USB_DEVICE_ID_KYE_EASYPEN_M610X:
if (*rsize == EASYPEN_M610X_RDESC_ORIG_SIZE) {
rdesc = easypen_m610x_rdesc_fixed;
*rsize = sizeof(easypen_m610x_rdesc_fixed);
}
break;
case USB_DEVICE_ID_GENIUS_GILA_GAMING_MOUSE:
rdesc = kye_consumer_control_fixup(hdev, rdesc, rsize, 104,
"Genius Gila Gaming Mouse");
break;
case USB_DEVICE_ID_GENIUS_GX_IMPERATOR:
rdesc = kye_consumer_control_fixup(hdev, rdesc, rsize, 83,
"Genius Gx Imperator Keyboard");
break;
case USB_DEVICE_ID_GENIUS_MANTICORE:
rdesc = kye_consumer_control_fixup(hdev, rdesc, rsize, 104,
"Genius Manticore Keyboard");
break;
}
return rdesc;
}
| [[303, "\t\tif (*rsize >= 74 &&\n"]] | [[303, "if (*rsize >= 74 &&\n\t\t\trdesc[61] == 0x05 && rdesc[62] == 0x08 &&\n\t\t\trdesc[63] == 0x19 && rdesc[64] == 0x08 &&\n\t\t\trdesc[65] == 0x29 && rdesc[66] == 0x0f &&\n\t\t\trdesc[71] == 0x75 && rdesc[72] == 0x08 &&\n\t\t\trdesc[73] == 0x95 && rdesc[74] == 0x01)"]] | [
"CVE-2014-3184"
] | [
"CWE-119"
] | 96 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [
"hdev",
"rdesc",
"rsize"
],
"Globals": [
"USB_DEVICE_ID_KYE_ERGO_525V",
"USB_DEVICE_ID_KYE_EASYPEN_I405X",
"EASYPEN_I405X_RDESC_ORIG_SIZE",
"easypen_i405x_rdesc_fixed",
"USB_DEVICE_ID_KYE_MOUSEPEN_I608X",
"MOUSEPEN_I608X_RDESC_ORIG_SIZE",
"mousepen_i608x_rdesc_fixed",
"USB_DEVICE_ID_KYE_EASYPEN_M610X",
"EASYPEN_M610X_RDESC_ORIG_SIZE",
"easypen_m610x_rdesc_fixed",
"USB_DEVICE_ID_GENIUS_GILA_GAMING_MOUSE",
"USB_DEVICE_ID_GENIUS_GX_IMPERATOR",
"USB_DEVICE_ID_GENIUS_MANTICORE"
],
"Type Execution Declaration": []
} |
96 | linux | https://github.com/torvalds/linux | drivers/hid/hid-kye.c | 4ab25786c87eb20857bbb715c3ae34ec8fd6a214 | HID: fix a couple of off-by-ones
There are a few very theoretical off-by-one bugs in report descriptor size
checking when performing a pre-parsing fixup. Fix those.
Cc: stable@vger.kernel.org
Reported-by: Ben Hawkes <hawkes@google.com>
Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz> | false | ed2f31685d1c495df1c0a4476a6ff055 | kye_report_fixup | static __u8 *kye_report_fixup(struct hid_device *hdev, __u8 *rdesc,
unsigned int *rsize)
{
switch (hdev->product) {
case USB_DEVICE_ID_KYE_ERGO_525V:
/* the fixups that need to be done:
* - change led usage page to button for extra buttons
* - report size 8 count 1 must be size 1 count 8 for button
* bitfield
* - change the button usage range to 4-7 for the extra
* buttons
*/
if (*rsize >= 75 &&
rdesc[61] == 0x05 && rdesc[62] == 0x08 &&
rdesc[63] == 0x19 && rdesc[64] == 0x08 &&
rdesc[65] == 0x29 && rdesc[66] == 0x0f &&
rdesc[71] == 0x75 && rdesc[72] == 0x08 &&
rdesc[73] == 0x95 && rdesc[74] == 0x01) {
hid_info(hdev,
"fixing up Kye/Genius Ergo Mouse "
"report descriptor\n");
rdesc[62] = 0x09;
rdesc[64] = 0x04;
rdesc[66] = 0x07;
rdesc[72] = 0x01;
rdesc[74] = 0x08;
}
break;
case USB_DEVICE_ID_KYE_EASYPEN_I405X:
if (*rsize == EASYPEN_I405X_RDESC_ORIG_SIZE) {
rdesc = easypen_i405x_rdesc_fixed;
*rsize = sizeof(easypen_i405x_rdesc_fixed);
}
break;
case USB_DEVICE_ID_KYE_MOUSEPEN_I608X:
if (*rsize == MOUSEPEN_I608X_RDESC_ORIG_SIZE) {
rdesc = mousepen_i608x_rdesc_fixed;
*rsize = sizeof(mousepen_i608x_rdesc_fixed);
}
break;
case USB_DEVICE_ID_KYE_EASYPEN_M610X:
if (*rsize == EASYPEN_M610X_RDESC_ORIG_SIZE) {
rdesc = easypen_m610x_rdesc_fixed;
*rsize = sizeof(easypen_m610x_rdesc_fixed);
}
break;
case USB_DEVICE_ID_GENIUS_GILA_GAMING_MOUSE:
rdesc = kye_consumer_control_fixup(hdev, rdesc, rsize, 104,
"Genius Gila Gaming Mouse");
break;
case USB_DEVICE_ID_GENIUS_GX_IMPERATOR:
rdesc = kye_consumer_control_fixup(hdev, rdesc, rsize, 83,
"Genius Gx Imperator Keyboard");
break;
case USB_DEVICE_ID_GENIUS_MANTICORE:
rdesc = kye_consumer_control_fixup(hdev, rdesc, rsize, 104,
"Genius Manticore Keyboard");
break;
}
return rdesc;
}
| [[303, "\t\tif (*rsize >= 75 &&\n"]] | [[303, "if (*rsize >= 75 &&\n\t\t\trdesc[61] == 0x05 && rdesc[62] == 0x08 &&\n\t\t\trdesc[63] == 0x19 && rdesc[64] == 0x08 &&\n\t\t\trdesc[65] == 0x29 && rdesc[66] == 0x0f &&\n\t\t\trdesc[71] == 0x75 && rdesc[72] == 0x08 &&\n\t\t\trdesc[73] == 0x95 && rdesc[74] == 0x01)"]] | [
"CVE-2014-3184"
] | [
"CWE-119"
] | 96 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [
"hdev",
"rdesc",
"rsize"
],
"Globals": [
"USB_DEVICE_ID_KYE_ERGO_525V",
"USB_DEVICE_ID_KYE_EASYPEN_I405X",
"EASYPEN_I405X_RDESC_ORIG_SIZE",
"easypen_i405x_rdesc_fixed",
"USB_DEVICE_ID_KYE_MOUSEPEN_I608X",
"MOUSEPEN_I608X_RDESC_ORIG_SIZE",
"mousepen_i608x_rdesc_fixed",
"USB_DEVICE_ID_KYE_EASYPEN_M610X",
"EASYPEN_M610X_RDESC_ORIG_SIZE",
"easypen_m610x_rdesc_fixed",
"USB_DEVICE_ID_GENIUS_GILA_GAMING_MOUSE",
"USB_DEVICE_ID_GENIUS_GX_IMPERATOR",
"USB_DEVICE_ID_GENIUS_MANTICORE"
],
"Type Execution Declaration": []
} |
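The same max-index-plus-one rule explains the kye pair (rows 95-96): the `USB_DEVICE_ID_KYE_ERGO_525V` branch reads up to `rdesc[74]`, so the guard must require `*rsize >= 74 + 1 = 75`, which is exactly the one-byte tightening between the two variants.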
97 | linux | https://github.com/torvalds/linux | drivers/hid/hid-lg.c | 4ab25786c87eb20857bbb715c3ae34ec8fd6a214 | HID: fix a couple of off-by-ones
There are a few very theoretical off-by-one bugs in report descriptor size
checking when performing a pre-parsing fixup. Fix those.
Cc: stable@vger.kernel.org
Reported-by: Ben Hawkes <hawkes@google.com>
Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz> | true | d51dad61fb3100872f7f341d0ef36499 | lg_report_fixup | static __u8 *lg_report_fixup(struct hid_device *hdev, __u8 *rdesc,
unsigned int *rsize)
{
struct lg_drv_data *drv_data = hid_get_drvdata(hdev);
struct usb_device_descriptor *udesc;
__u16 bcdDevice, rev_maj, rev_min;
if ((drv_data->quirks & LG_RDESC) && *rsize >= 90 && rdesc[83] == 0x26 &&
rdesc[84] == 0x8c && rdesc[85] == 0x02) {
hid_info(hdev,
"fixing up Logitech keyboard report descriptor\n");
rdesc[84] = rdesc[89] = 0x4d;
rdesc[85] = rdesc[90] = 0x10;
}
if ((drv_data->quirks & LG_RDESC_REL_ABS) && *rsize >= 50 &&
rdesc[32] == 0x81 && rdesc[33] == 0x06 &&
rdesc[49] == 0x81 && rdesc[50] == 0x06) {
hid_info(hdev,
"fixing up rel/abs in Logitech report descriptor\n");
rdesc[33] = rdesc[50] = 0x02;
}
switch (hdev->product) {
/* Several wheels report as this id when operating in emulation mode. */
case USB_DEVICE_ID_LOGITECH_WHEEL:
udesc = &(hid_to_usb_dev(hdev)->descriptor);
if (!udesc) {
hid_err(hdev, "NULL USB device descriptor\n");
break;
}
bcdDevice = le16_to_cpu(udesc->bcdDevice);
rev_maj = bcdDevice >> 8;
rev_min = bcdDevice & 0xff;
/* Update the report descriptor for only the Driving Force wheel */
if (rev_maj == 1 && rev_min == 2 &&
*rsize == DF_RDESC_ORIG_SIZE) {
hid_info(hdev,
"fixing up Logitech Driving Force report descriptor\n");
rdesc = df_rdesc_fixed;
*rsize = sizeof(df_rdesc_fixed);
}
break;
case USB_DEVICE_ID_LOGITECH_MOMO_WHEEL:
if (*rsize == MOMO_RDESC_ORIG_SIZE) {
hid_info(hdev,
"fixing up Logitech Momo Force (Red) report descriptor\n");
rdesc = momo_rdesc_fixed;
*rsize = sizeof(momo_rdesc_fixed);
}
break;
case USB_DEVICE_ID_LOGITECH_MOMO_WHEEL2:
if (*rsize == MOMO2_RDESC_ORIG_SIZE) {
hid_info(hdev,
"fixing up Logitech Momo Racing Force (Black) report descriptor\n");
rdesc = momo2_rdesc_fixed;
*rsize = sizeof(momo2_rdesc_fixed);
}
break;
case USB_DEVICE_ID_LOGITECH_VIBRATION_WHEEL:
if (*rsize == FV_RDESC_ORIG_SIZE) {
hid_info(hdev,
"fixing up Logitech Formula Vibration report descriptor\n");
rdesc = fv_rdesc_fixed;
*rsize = sizeof(fv_rdesc_fixed);
}
break;
case USB_DEVICE_ID_LOGITECH_DFP_WHEEL:
if (*rsize == DFP_RDESC_ORIG_SIZE) {
hid_info(hdev,
"fixing up Logitech Driving Force Pro report descriptor\n");
rdesc = dfp_rdesc_fixed;
*rsize = sizeof(dfp_rdesc_fixed);
}
break;
case USB_DEVICE_ID_LOGITECH_WII_WHEEL:
if (*rsize >= 101 && rdesc[41] == 0x95 && rdesc[42] == 0x0B &&
rdesc[47] == 0x05 && rdesc[48] == 0x09) {
hid_info(hdev, "fixing up Logitech Speed Force Wireless report descriptor\n");
rdesc[41] = 0x05;
rdesc[42] = 0x09;
rdesc[47] = 0x95;
rdesc[48] = 0x0B;
}
break;
}
return rdesc;
}
| [[348, "\tif ((drv_data->quirks & LG_RDESC) && *rsize >= 90 && rdesc[83] == 0x26 &&\n"], [355, "\tif ((drv_data->quirks & LG_RDESC_REL_ABS) && *rsize >= 50 &&\n"]] | [[348, "if ((drv_data->quirks & LG_RDESC) && *rsize >= 90 && rdesc[83] == 0x26 &&\n\t\t\trdesc[84] == 0x8c && rdesc[85] == 0x02)"], [355, "if ((drv_data->quirks & LG_RDESC_REL_ABS) && *rsize >= 50 &&\n\t\t\trdesc[32] == 0x81 && rdesc[33] == 0x06 &&\n\t\t\trdesc[49] == 0x81 && rdesc[50] == 0x06)"]] | [
"CVE-2014-3184"
] | [
"CWE-119"
] | 98 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"hid_get_drvdata"
],
"Function Argument": [
"hdev",
"rdesc",
"rsize"
],
"Globals": [],
"Type Execution Declaration": [
"struct lg_drv_data"
]
} |
98 | linux | https://github.com/torvalds/linux | drivers/hid/hid-lg.c | 4ab25786c87eb20857bbb715c3ae34ec8fd6a214 | HID: fix a couple of off-by-ones
There are a few very theoretical off-by-one bugs in report descriptor size
checking when performing a pre-parsing fixup. Fix those.
Cc: stable@vger.kernel.org
Reported-by: Ben Hawkes <hawkes@google.com>
Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz> | false | 9791b14637d65a92a90693945cfc6a99 | lg_report_fixup | static __u8 *lg_report_fixup(struct hid_device *hdev, __u8 *rdesc,
unsigned int *rsize)
{
struct lg_drv_data *drv_data = hid_get_drvdata(hdev);
struct usb_device_descriptor *udesc;
__u16 bcdDevice, rev_maj, rev_min;
if ((drv_data->quirks & LG_RDESC) && *rsize >= 91 && rdesc[83] == 0x26 &&
rdesc[84] == 0x8c && rdesc[85] == 0x02) {
hid_info(hdev,
"fixing up Logitech keyboard report descriptor\n");
rdesc[84] = rdesc[89] = 0x4d;
rdesc[85] = rdesc[90] = 0x10;
}
if ((drv_data->quirks & LG_RDESC_REL_ABS) && *rsize >= 51 &&
rdesc[32] == 0x81 && rdesc[33] == 0x06 &&
rdesc[49] == 0x81 && rdesc[50] == 0x06) {
hid_info(hdev,
"fixing up rel/abs in Logitech report descriptor\n");
rdesc[33] = rdesc[50] = 0x02;
}
switch (hdev->product) {
/* Several wheels report as this id when operating in emulation mode. */
case USB_DEVICE_ID_LOGITECH_WHEEL:
udesc = &(hid_to_usb_dev(hdev)->descriptor);
if (!udesc) {
hid_err(hdev, "NULL USB device descriptor\n");
break;
}
bcdDevice = le16_to_cpu(udesc->bcdDevice);
rev_maj = bcdDevice >> 8;
rev_min = bcdDevice & 0xff;
/* Update the report descriptor for only the Driving Force wheel */
if (rev_maj == 1 && rev_min == 2 &&
*rsize == DF_RDESC_ORIG_SIZE) {
hid_info(hdev,
"fixing up Logitech Driving Force report descriptor\n");
rdesc = df_rdesc_fixed;
*rsize = sizeof(df_rdesc_fixed);
}
break;
case USB_DEVICE_ID_LOGITECH_MOMO_WHEEL:
if (*rsize == MOMO_RDESC_ORIG_SIZE) {
hid_info(hdev,
"fixing up Logitech Momo Force (Red) report descriptor\n");
rdesc = momo_rdesc_fixed;
*rsize = sizeof(momo_rdesc_fixed);
}
break;
case USB_DEVICE_ID_LOGITECH_MOMO_WHEEL2:
if (*rsize == MOMO2_RDESC_ORIG_SIZE) {
hid_info(hdev,
"fixing up Logitech Momo Racing Force (Black) report descriptor\n");
rdesc = momo2_rdesc_fixed;
*rsize = sizeof(momo2_rdesc_fixed);
}
break;
case USB_DEVICE_ID_LOGITECH_VIBRATION_WHEEL:
if (*rsize == FV_RDESC_ORIG_SIZE) {
hid_info(hdev,
"fixing up Logitech Formula Vibration report descriptor\n");
rdesc = fv_rdesc_fixed;
*rsize = sizeof(fv_rdesc_fixed);
}
break;
case USB_DEVICE_ID_LOGITECH_DFP_WHEEL:
if (*rsize == DFP_RDESC_ORIG_SIZE) {
hid_info(hdev,
"fixing up Logitech Driving Force Pro report descriptor\n");
rdesc = dfp_rdesc_fixed;
*rsize = sizeof(dfp_rdesc_fixed);
}
break;
case USB_DEVICE_ID_LOGITECH_WII_WHEEL:
if (*rsize >= 101 && rdesc[41] == 0x95 && rdesc[42] == 0x0B &&
rdesc[47] == 0x05 && rdesc[48] == 0x09) {
hid_info(hdev, "fixing up Logitech Speed Force Wireless report descriptor\n");
rdesc[41] = 0x05;
rdesc[42] = 0x09;
rdesc[47] = 0x95;
rdesc[48] = 0x0B;
}
break;
}
return rdesc;
}
| [[348, "\tif ((drv_data->quirks & LG_RDESC) && *rsize >= 91 && rdesc[83] == 0x26 &&\n"], [355, "\tif ((drv_data->quirks & LG_RDESC_REL_ABS) && *rsize >= 51 &&\n"]] | [[348, "if ((drv_data->quirks & LG_RDESC) && *rsize >= 91 && rdesc[83] == 0x26 &&\n\t\t\trdesc[84] == 0x8c && rdesc[85] == 0x02)"], [355, "if ((drv_data->quirks & LG_RDESC_REL_ABS) && *rsize >= 51 &&\n\t\t\trdesc[32] == 0x81 && rdesc[33] == 0x06 &&\n\t\t\trdesc[49] == 0x81 && rdesc[50] == 0x06)"]] | [
"CVE-2014-3184"
] | [
"CWE-119"
] | 98 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"hid_get_drvdata"
],
"Function Argument": [
"hdev",
"rdesc",
"rsize"
],
"Globals": [],
"Type Execution Declaration": [
"struct lg_drv_data"
]
} |
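Likewise for the hid-lg pair (rows 97-98): the keyboard fixup writes `rdesc[90]` and the rel/abs fixup writes `rdesc[50]`, so the guards tighten from `*rsize >= 90` and `*rsize >= 50` to `*rsize >= 91` and `*rsize >= 51` respectively.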
99 | linux | https://github.com/torvalds/linux | drivers/hid/hid-monterey.c | 4ab25786c87eb20857bbb715c3ae34ec8fd6a214 | HID: fix a couple of off-by-ones
There are a few very theoretical off-by-one bugs in report descriptor size
checking when performing a pre-parsing fixup. Fix those.
Cc: stable@vger.kernel.org
Reported-by: Ben Hawkes <hawkes@google.com>
Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz> | true | 26c10232845901f7ab1d88218fe046c4 | mr_report_fixup | static __u8 *mr_report_fixup(struct hid_device *hdev, __u8 *rdesc,
unsigned int *rsize)
{
if (*rsize >= 30 && rdesc[29] == 0x05 && rdesc[30] == 0x09) {
hid_info(hdev, "fixing up button/consumer in HID report descriptor\n");
rdesc[30] = 0x0c;
}
return rdesc;
}
| [[27, "\tif (*rsize >= 30 && rdesc[29] == 0x05 && rdesc[30] == 0x09) {\n"]] | [[27, "if (*rsize >= 30 && rdesc[29] == 0x05 && rdesc[30] == 0x09)"]] | [
"CVE-2014-3184"
] | [
"CWE-119"
] | 100 | {
"Execution Environment": [
"HID device with user-controlled/bad report descriptor"
],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
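Row 99 shows only the vulnerable `mr_report_fixup()`; its fixed counterpart (`fixed_func_idx` 100) falls outside this excerpt, but by the same rule the write to `rdesc[30]` requires the guard to become `*rsize >= 31`.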