idx int64 0 25.4k | project stringclasses 707 values | project_url stringclasses 735 values | filepath stringlengths 4 100 | commit_id stringlengths 7 40 | commit_message stringlengths 0 18.3k ⌀ | is_vulnerable bool 2 classes | hash stringlengths 32 32 | func_name stringlengths 3 112 | func_body stringlengths 23 235k | changed_lines stringlengths 2 27.6k | changed_statements stringlengths 2 161k | cve_list listlengths 1 19 | cwe_list listlengths 1 6 | fixed_func_idx int64 1 25.4k ⌀ | context dict |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | linux | https://github.com/torvalds/linux | drivers/media/usb/dvb-usb-v2/dvb_usb_core.c | 005145378c9ad7575a01b6ce1ba118fb427f583a | [media] dvb-usb-v2: avoid use-after-free
I ran into a stack frame size warning because of the on-stack copy of
the USB device structure:
drivers/media/usb/dvb-usb-v2/dvb_usb_core.c: In function 'dvb_usbv2_disconnect':
drivers/media/usb/dvb-usb-v2/dvb_usb_core.c:1029:1: error: the frame size of 1104 bytes is larger than 1024 bytes [-Werror=frame-larger-than=]
Copying a device structure like this is wrong for a number of reasons
beyond the possible stack overflow. One of them is that the
dev_info() call will print the name of the device later, but AFAICT
we have only copied a pointer to the name earlier and the actual name
has been freed by the time it gets printed.
This removes the on-stack copy of the device and instead copies the
device name using kstrdup(). I'm ignoring the possible failure here
as both printk() and kfree() are able to deal with NULL pointers.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com> | true | 5eed13a1f22c5a961ab0b4dcab91a1a7 | dvb_usbv2_disconnect | void dvb_usbv2_disconnect(struct usb_interface *intf)
{
struct dvb_usb_device *d = usb_get_intfdata(intf);
const char *name = d->name;
struct device dev = d->udev->dev;
dev_dbg(&d->udev->dev, "%s: bInterfaceNumber=%d\n", __func__,
intf->cur_altsetting->desc.bInterfaceNumber);
if (d->props->exit)
d->props->exit(d);
dvb_usbv2_exit(d);
dev_info(&dev, "%s: '%s' successfully deinitialized and disconnected\n",
KBUILD_MODNAME, name);
}
| [[1015, "\tconst char *name = d->name;\n"], [1016, "\tstruct device dev = d->udev->dev;\n"], [1026, "\tdev_info(&dev, \"%s: '%s' successfully deinitialized and disconnected\\n\",\n"], [1027, "\t\t\tKBUILD_MODNAME, name);\n"]] | [[1015, "const char *name = d->name;"], [1016, "struct device dev = d->udev->dev;"], [1026, "dev_info(&dev, \"%s: '%s' successfully deinitialized and disconnected\\n\",\n\t\t\tKBUILD_MODNAME, name);"]] | [
"CVE-2017-8064"
] | [
"CWE-119"
] | 1 | {
"Execution Environment": [
"CONFIG_VMAP_STACK"
],
"Explanation": null,
"External Function": [
"usb_get_intfdata",
"dev_info"
],
"Function Argument": [
"intf"
],
"Globals": [
"KBUILD_MODNAME"
],
"Type Execution Declaration": [
"struct dvb_usb_device",
"struct device"
]
} |
1 | linux | https://github.com/torvalds/linux | drivers/media/usb/dvb-usb-v2/dvb_usb_core.c | 005145378c9ad7575a01b6ce1ba118fb427f583a | [media] dvb-usb-v2: avoid use-after-free
I ran into a stack frame size warning because of the on-stack copy of
the USB device structure:
drivers/media/usb/dvb-usb-v2/dvb_usb_core.c: In function 'dvb_usbv2_disconnect':
drivers/media/usb/dvb-usb-v2/dvb_usb_core.c:1029:1: error: the frame size of 1104 bytes is larger than 1024 bytes [-Werror=frame-larger-than=]
Copying a device structure like this is wrong for a number of reasons
beyond the possible stack overflow. One of them is that the
dev_info() call will print the name of the device later, but AFAICT
we have only copied a pointer to the name earlier and the actual name
has been freed by the time it gets printed.
This removes the on-stack copy of the device and instead copies the
device name using kstrdup(). I'm ignoring the possible failure here
as both printk() and kfree() are able to deal with NULL pointers.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com> | false | 1adb6cd316c6ed00994b824a788e5c69 | dvb_usbv2_disconnect | void dvb_usbv2_disconnect(struct usb_interface *intf)
{
struct dvb_usb_device *d = usb_get_intfdata(intf);
const char *devname = kstrdup(dev_name(&d->udev->dev), GFP_KERNEL);
const char *drvname = d->name;
dev_dbg(&d->udev->dev, "%s: bInterfaceNumber=%d\n", __func__,
intf->cur_altsetting->desc.bInterfaceNumber);
if (d->props->exit)
d->props->exit(d);
dvb_usbv2_exit(d);
pr_info("%s: '%s:%s' successfully deinitialized and disconnected\n",
KBUILD_MODNAME, drvname, devname);
kfree(devname);
}
| [[1015, "\tconst char *devname = kstrdup(dev_name(&d->udev->dev), GFP_KERNEL);\n"], [1016, "\tconst char *drvname = d->name;\n"], [1026, "\tpr_info(\"%s: '%s:%s' successfully deinitialized and disconnected\\n\",\n"], [1027, "\t\tKBUILD_MODNAME, drvname, devname);\n"], [1028, "\tkfree(devname);\n"]] | [[1015, "const char *devname = kstrdup(dev_name(&d->udev->dev), GFP_KERNEL);"], [1016, "const char *drvname = d->name;"], [1026, "pr_info(\"%s: '%s:%s' successfully deinitialized and disconnected\\n\",\n\t\tKBUILD_MODNAME, drvname, devname);"], [1028, "kfree(devname);"]] | [
"CVE-2017-8064"
] | [
"CWE-119"
] | 1 | {
"Execution Environment": [
"CONFIG_VMAP_STACK"
],
"Explanation": null,
"External Function": [
"usb_get_intfdata",
"dev_info"
],
"Function Argument": [
"intf"
],
"Globals": [
"KBUILD_MODNAME"
],
"Type Execution Declaration": [
"struct dvb_usb_device",
"struct device"
]
} |
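The pair of records above captures the whole fix: the vulnerable variant copies `struct device` onto the stack and keeps a pointer into memory that dvb_usbv2_exit() frees, while the fixed variant duplicates only the name string before teardown. Below is a minimal userspace sketch of the same pattern, with strdup() standing in for kstrdup() and all names (demo_dev, demo_exit, ...) hypothetical:

```c
/* Userspace analogue of the dvb-usb-v2 fix: duplicate any string we
 * still want to print after teardown, instead of keeping a pointer
 * into (or an on-stack copy of a struct that points into)
 * soon-to-be-freed memory. All names here are hypothetical. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct demo_dev {
	char *name;		/* owned by the device, freed by demo_exit() */
};

static void demo_exit(struct demo_dev *d)
{
	free(d->name);		/* after this, any saved d->name dangles */
	free(d);
}

static void demo_disconnect(struct demo_dev *d)
{
	/* BAD:  const char *name = d->name;   (dangles after demo_exit())
	 * GOOD: take our own copy first. */
	char *name = strdup(d->name);

	demo_exit(d);

	printf("'%s' successfully deinitialized and disconnected\n",
	       name ? name : "(unknown)");
	free(name);		/* free(NULL) is a no-op, like kfree(NULL) */
}

int main(void)
{
	struct demo_dev *d = malloc(sizeof(*d));

	if (!d)
		return 1;
	d->name = strdup("demo0");
	if (!d->name) {
		free(d);
		return 1;
	}
	demo_disconnect(d);
	return 0;
}
```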
2 | linux | https://github.com/torvalds/linux | drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c | 02e1a114fdb71e59ee6770294166c30d437bf86a | nfp: fix use-after-free in area_cache_get()
area_cache_get() is used to distribute cache->area and set cache->id,
and if cache->id is not 0 and cache->area->kref refcount is 0, it will
release the cache->area by nfp_cpp_area_release(). area_cache_get()
set cache->id before cpp->op->area_init() and nfp_cpp_area_acquire().
But if area_init() or nfp_cpp_area_acquire() fails, the cache->id is
already set but the refcount is not increased as expected. At this
time, calling the nfp_cpp_area_release() will cause use-after-free.
To avoid the use-after-free, set cache->id after area_init() and
nfp_cpp_area_acquire() complete successfully.
Note: This vulnerability is triggerable by providing an emulated device
equipped with a specified configuration.
BUG: KASAN: use-after-free in nfp6000_area_init (drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c:760)
Write of size 4 at addr ffff888005b7f4a0 by task swapper/0/1
Call Trace:
<TASK>
nfp6000_area_init (drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c:760)
area_cache_get.constprop.8 (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:884)
Allocated by task 1:
nfp_cpp_area_alloc_with_name (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:303)
nfp_cpp_area_cache_add (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:802)
nfp6000_init (drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c:1230)
nfp_cpp_from_operations (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:1215)
nfp_pci_probe (drivers/net/ethernet/netronome/nfp/nfp_main.c:744)
Freed by task 1:
kfree (mm/slub.c:4562)
area_cache_get.constprop.8 (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:873)
nfp_cpp_read (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:924 drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:973)
nfp_cpp_readl (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cpplib.c:48)
Signed-off-by: Jialiang Wang <wangjialiang0806@163.com>
Reviewed-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Acked-by: Simon Horman <simon.horman@corigine.com>
Link: https://lore.kernel.org/r/20220810073057.4032-1-wangjialiang0806@163.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org> | true | b09c47a4db6314a9ae2201fbee26ee43 | area_cache_get | static struct nfp_cpp_area_cache *
area_cache_get(struct nfp_cpp *cpp, u32 id,
u64 addr, unsigned long *offset, size_t length)
{
struct nfp_cpp_area_cache *cache;
int err;
/* Early exit when length == 0, which prevents
* the need for special case code below when
* checking against available cache size.
*/
if (length == 0 || id == 0)
return NULL;
/* Remap from cpp_island to cpp_target */
err = nfp_target_cpp(id, addr, &id, &addr, cpp->imb_cat_table);
if (err < 0)
return NULL;
mutex_lock(&cpp->area_cache_mutex);
if (list_empty(&cpp->area_cache_list)) {
mutex_unlock(&cpp->area_cache_mutex);
return NULL;
}
addr += *offset;
/* See if we have a match */
list_for_each_entry(cache, &cpp->area_cache_list, entry) {
if (id == cache->id &&
addr >= cache->addr &&
addr + length <= cache->addr + cache->size)
goto exit;
}
/* No matches - inspect the tail of the LRU */
cache = list_entry(cpp->area_cache_list.prev,
struct nfp_cpp_area_cache, entry);
/* Can we fit in the cache entry? */
if (round_down(addr + length - 1, cache->size) !=
round_down(addr, cache->size)) {
mutex_unlock(&cpp->area_cache_mutex);
return NULL;
}
/* If id != 0, we will need to release it */
if (cache->id) {
nfp_cpp_area_release(cache->area);
cache->id = 0;
cache->addr = 0;
}
/* Adjust the start address to be cache size aligned */
cache->id = id;
cache->addr = addr & ~(u64)(cache->size - 1);
/* Re-init to the new ID and address */
if (cpp->op->area_init) {
err = cpp->op->area_init(cache->area,
id, cache->addr, cache->size);
if (err < 0) {
mutex_unlock(&cpp->area_cache_mutex);
return NULL;
}
}
/* Attempt to acquire */
err = nfp_cpp_area_acquire(cache->area);
if (err < 0) {
mutex_unlock(&cpp->area_cache_mutex);
return NULL;
}
exit:
/* Adjust offset */
*offset = addr - cache->addr;
return cache;
}
| [[877, "\tcache->id = id;\n"]] | [[877, "cache->id = id;"]] | [
"CVE-2022-3545"
] | [
"CWE-119"
] | 3 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"nfp_cpp_area_release",
"cpp->op->area_init",
"nfp_cpp_area_acquire"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
3 | linux | https://github.com/torvalds/linux | drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c | 02e1a114fdb71e59ee6770294166c30d437bf86a | nfp: fix use-after-free in area_cache_get()
area_cache_get() is used to distribute cache->area and set cache->id,
and if cache->id is not 0 and cache->area->kref refcount is 0, it will
release the cache->area by nfp_cpp_area_release(). area_cache_get()
set cache->id before cpp->op->area_init() and nfp_cpp_area_acquire().
But if area_init() or nfp_cpp_area_acquire() fails, the cache->id is
already set but the refcount is not increased as expected. At this
time, calling the nfp_cpp_area_release() will cause use-after-free.
To avoid the use-after-free, set cache->id after area_init() and
nfp_cpp_area_acquire() complete successfully.
Note: This vulnerability is triggerable by providing an emulated device
equipped with a specified configuration.
BUG: KASAN: use-after-free in nfp6000_area_init (drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c:760)
Write of size 4 at addr ffff888005b7f4a0 by task swapper/0/1
Call Trace:
<TASK>
nfp6000_area_init (drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c:760)
area_cache_get.constprop.8 (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:884)
Allocated by task 1:
nfp_cpp_area_alloc_with_name (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:303)
nfp_cpp_area_cache_add (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:802)
nfp6000_init (drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c:1230)
nfp_cpp_from_operations (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:1215)
nfp_pci_probe (drivers/net/ethernet/netronome/nfp/nfp_main.c:744)
Freed by task 1:
kfree (mm/slub.c:4562)
area_cache_get.constprop.8 (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:873)
nfp_cpp_read (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:924 drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c:973)
nfp_cpp_readl (drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cpplib.c:48)
Signed-off-by: Jialiang Wang <wangjialiang0806@163.com>
Reviewed-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Acked-by: Simon Horman <simon.horman@corigine.com>
Link: https://lore.kernel.org/r/20220810073057.4032-1-wangjialiang0806@163.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org> | false | 6eb6e5761715e630d11c438f0e0f15e5 | area_cache_get | static struct nfp_cpp_area_cache *
area_cache_get(struct nfp_cpp *cpp, u32 id,
u64 addr, unsigned long *offset, size_t length)
{
struct nfp_cpp_area_cache *cache;
int err;
/* Early exit when length == 0, which prevents
* the need for special case code below when
* checking against available cache size.
*/
if (length == 0 || id == 0)
return NULL;
/* Remap from cpp_island to cpp_target */
err = nfp_target_cpp(id, addr, &id, &addr, cpp->imb_cat_table);
if (err < 0)
return NULL;
mutex_lock(&cpp->area_cache_mutex);
if (list_empty(&cpp->area_cache_list)) {
mutex_unlock(&cpp->area_cache_mutex);
return NULL;
}
addr += *offset;
/* See if we have a match */
list_for_each_entry(cache, &cpp->area_cache_list, entry) {
if (id == cache->id &&
addr >= cache->addr &&
addr + length <= cache->addr + cache->size)
goto exit;
}
/* No matches - inspect the tail of the LRU */
cache = list_entry(cpp->area_cache_list.prev,
struct nfp_cpp_area_cache, entry);
/* Can we fit in the cache entry? */
if (round_down(addr + length - 1, cache->size) !=
round_down(addr, cache->size)) {
mutex_unlock(&cpp->area_cache_mutex);
return NULL;
}
/* If id != 0, we will need to release it */
if (cache->id) {
nfp_cpp_area_release(cache->area);
cache->id = 0;
cache->addr = 0;
}
/* Adjust the start address to be cache size aligned */
cache->addr = addr & ~(u64)(cache->size - 1);
/* Re-init to the new ID and address */
if (cpp->op->area_init) {
err = cpp->op->area_init(cache->area,
id, cache->addr, cache->size);
if (err < 0) {
mutex_unlock(&cpp->area_cache_mutex);
return NULL;
}
}
/* Attempt to acquire */
err = nfp_cpp_area_acquire(cache->area);
if (err < 0) {
mutex_unlock(&cpp->area_cache_mutex);
return NULL;
}
cache->id = id;
exit:
/* Adjust offset */
*offset = addr - cache->addr;
return cache;
}
| [[896, "\tcache->id = id;\n"], [897, "\n"]] | [[896, "cache->id = id;"], [897, "\n"]] | [
"CVE-2022-3545"
] | [
"CWE-119"
] | 3 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"nfp_cpp_area_release",
"cpp->op->area_init",
"nfp_cpp_area_acquire"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
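Records 2 and 3 differ only in where cache->id is assigned. The general pattern behind the fix is to commit state that an error or teardown path keys on only after every fallible step has succeeded. A small userspace sketch of that ordering, with hypothetical names and with acquire()/release() standing in for nfp_cpp_area_acquire()/nfp_cpp_area_release():

```c
/* Userspace analogue of the area_cache_get() fix: state that a later
 * teardown path keys on (the slot id) is only set once every fallible
 * step has succeeded. Names and failure injection are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

struct slot {
	unsigned int id;	/* nonzero means "holds an acquired resource" */
	int refcount;
};

static bool acquire(struct slot *s)	/* may fail in general */
{
	s->refcount++;
	return true;
}

static void release(struct slot *s)
{
	s->refcount--;
}

static int slot_reuse(struct slot *s, unsigned int new_id, bool init_ok)
{
	if (s->id) {		/* release whatever the slot held before */
		release(s);
		s->id = 0;
	}
	/* BAD: s->id = new_id here. If init or acquire fails below, a
	 * later caller sees a nonzero id and releases a resource that
	 * was never acquired, which is the use-after-free in the CVE. */
	if (!init_ok)
		return -1;
	if (!acquire(s))
		return -1;
	s->id = new_id;		/* GOOD: commit the id only on full success */
	return 0;
}

int main(void)
{
	struct slot s = { .id = 1, .refcount = 1 };

	slot_reuse(&s, 2, false);	/* simulated area_init() failure */
	slot_reuse(&s, 3, true);	/* must not release a second time */
	printf("id=%u refcount=%d\n", s.id, s.refcount);	/* id=3 refcount=1 */
	return 0;
}
```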
4 | linux | https://github.com/torvalds/linux | net/netfilter/ipvs/ip_vs_ctl.c | 04bcef2a83f40c6db24222b27a52892cba39dffb | ipvs: Add boundary check on ioctl arguments
The ipvs code has a nifty system for determining the size of ioctl command
copies; it defines an array of lengths indexed by cmd
to find the right one.
Unfortunately, the ipvs code forgot to check whether the cmd was in the
range that the array provides, allowing an index outside of the
array, which then yields a "garbage" length that gets used for
copying into a stack buffer.
Fix this by adding sanity checks on these as well as the copy size.
[ horms@verge.net.au: adjusted limit to IP_VS_SO_GET_MAX ]
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Patrick McHardy <kaber@trash.net> | true | 4142c8c9b956090ae535f2e0dfa2b660 | do_ip_vs_get_ctl | static int
do_ip_vs_get_ctl(struct sock *sk, int cmd, void __user *user, int *len)
{
unsigned char arg[128];
int ret = 0;
if (!capable(CAP_NET_ADMIN))
return -EPERM;
if (*len < get_arglen[GET_CMDID(cmd)]) {
pr_err("get_ctl: len %u < %u\n",
*len, get_arglen[GET_CMDID(cmd)]);
return -EINVAL;
}
if (copy_from_user(arg, user, get_arglen[GET_CMDID(cmd)]) != 0)
return -EFAULT;
if (mutex_lock_interruptible(&__ip_vs_mutex))
return -ERESTARTSYS;
switch (cmd) {
case IP_VS_SO_GET_VERSION:
{
char buf[64];
sprintf(buf, "IP Virtual Server version %d.%d.%d (size=%d)",
NVERSION(IP_VS_VERSION_CODE), IP_VS_CONN_TAB_SIZE);
if (copy_to_user(user, buf, strlen(buf)+1) != 0) {
ret = -EFAULT;
goto out;
}
*len = strlen(buf)+1;
}
break;
case IP_VS_SO_GET_INFO:
{
struct ip_vs_getinfo info;
info.version = IP_VS_VERSION_CODE;
info.size = IP_VS_CONN_TAB_SIZE;
info.num_services = ip_vs_num_services;
if (copy_to_user(user, &info, sizeof(info)) != 0)
ret = -EFAULT;
}
break;
case IP_VS_SO_GET_SERVICES:
{
struct ip_vs_get_services *get;
int size;
get = (struct ip_vs_get_services *)arg;
size = sizeof(*get) +
sizeof(struct ip_vs_service_entry) * get->num_services;
if (*len != size) {
pr_err("length: %u != %u\n", *len, size);
ret = -EINVAL;
goto out;
}
ret = __ip_vs_get_service_entries(get, user);
}
break;
case IP_VS_SO_GET_SERVICE:
{
struct ip_vs_service_entry *entry;
struct ip_vs_service *svc;
union nf_inet_addr addr;
entry = (struct ip_vs_service_entry *)arg;
addr.ip = entry->addr;
if (entry->fwmark)
svc = __ip_vs_svc_fwm_get(AF_INET, entry->fwmark);
else
svc = __ip_vs_service_get(AF_INET, entry->protocol,
&addr, entry->port);
if (svc) {
ip_vs_copy_service(entry, svc);
if (copy_to_user(user, entry, sizeof(*entry)) != 0)
ret = -EFAULT;
ip_vs_service_put(svc);
} else
ret = -ESRCH;
}
break;
case IP_VS_SO_GET_DESTS:
{
struct ip_vs_get_dests *get;
int size;
get = (struct ip_vs_get_dests *)arg;
size = sizeof(*get) +
sizeof(struct ip_vs_dest_entry) * get->num_dests;
if (*len != size) {
pr_err("length: %u != %u\n", *len, size);
ret = -EINVAL;
goto out;
}
ret = __ip_vs_get_dest_entries(get, user);
}
break;
case IP_VS_SO_GET_TIMEOUT:
{
struct ip_vs_timeout_user t;
__ip_vs_get_timeouts(&t);
if (copy_to_user(user, &t, sizeof(t)) != 0)
ret = -EFAULT;
}
break;
case IP_VS_SO_GET_DAEMON:
{
struct ip_vs_daemon_user d[2];
memset(&d, 0, sizeof(d));
if (ip_vs_sync_state & IP_VS_STATE_MASTER) {
d[0].state = IP_VS_STATE_MASTER;
strlcpy(d[0].mcast_ifn, ip_vs_master_mcast_ifn, sizeof(d[0].mcast_ifn));
d[0].syncid = ip_vs_master_syncid;
}
if (ip_vs_sync_state & IP_VS_STATE_BACKUP) {
d[1].state = IP_VS_STATE_BACKUP;
strlcpy(d[1].mcast_ifn, ip_vs_backup_mcast_ifn, sizeof(d[1].mcast_ifn));
d[1].syncid = ip_vs_backup_syncid;
}
if (copy_to_user(user, &d, sizeof(d)) != 0)
ret = -EFAULT;
}
break;
default:
ret = -EINVAL;
}
out:
mutex_unlock(&__ip_vs_mutex);
return ret;
}
| [[2365, "\tif (copy_from_user(arg, user, get_arglen[GET_CMDID(cmd)]) != 0)\n"]] | [[2365, "if (copy_from_user(arg, user, get_arglen[GET_CMDID(cmd)]) != 0)"]] | [
"CVE-2013-4588"
] | [
"CWE-119"
] | 6 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"GET_CMDID"
],
"Function Argument": [
"cmd"
],
"Globals": [
"get_arglen"
],
"Type Execution Declaration": []
} |
5 | linux | https://github.com/torvalds/linux | net/netfilter/ipvs/ip_vs_ctl.c | 04bcef2a83f40c6db24222b27a52892cba39dffb | ipvs: Add boundary check on ioctl arguments
The ipvs code has a nifty system for determining the size of ioctl command
copies; it defines an array of lengths indexed by cmd
to find the right one.
Unfortunately, the ipvs code forgot to check whether the cmd was in the
range that the array provides, allowing an index outside of the
array, which then yields a "garbage" length that gets used for
copying into a stack buffer.
Fix this by adding sanity checks on these as well as the copy size.
[ horms@verge.net.au: adjusted limit to IP_VS_SO_GET_MAX ]
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Patrick McHardy <kaber@trash.net> | false | 3952069a1f9b3d417354575f7e77abec | do_ip_vs_set_ctl | static int
do_ip_vs_set_ctl(struct sock *sk, int cmd, void __user *user, unsigned int len)
{
int ret;
unsigned char arg[MAX_ARG_LEN];
struct ip_vs_service_user *usvc_compat;
struct ip_vs_service_user_kern usvc;
struct ip_vs_service *svc;
struct ip_vs_dest_user *udest_compat;
struct ip_vs_dest_user_kern udest;
if (!capable(CAP_NET_ADMIN))
return -EPERM;
if (cmd < IP_VS_BASE_CTL || cmd > IP_VS_SO_SET_MAX)
return -EINVAL;
if (len < 0 || len > MAX_ARG_LEN)
return -EINVAL;
if (len != set_arglen[SET_CMDID(cmd)]) {
pr_err("set_ctl: len %u != %u\n",
len, set_arglen[SET_CMDID(cmd)]);
return -EINVAL;
}
if (copy_from_user(arg, user, len) != 0)
return -EFAULT;
/* increase the module use count */
ip_vs_use_count_inc();
if (mutex_lock_interruptible(&__ip_vs_mutex)) {
ret = -ERESTARTSYS;
goto out_dec;
}
if (cmd == IP_VS_SO_SET_FLUSH) {
/* Flush the virtual service */
ret = ip_vs_flush();
goto out_unlock;
} else if (cmd == IP_VS_SO_SET_TIMEOUT) {
/* Set timeout values for (tcp tcpfin udp) */
ret = ip_vs_set_timeout((struct ip_vs_timeout_user *)arg);
goto out_unlock;
} else if (cmd == IP_VS_SO_SET_STARTDAEMON) {
struct ip_vs_daemon_user *dm = (struct ip_vs_daemon_user *)arg;
ret = start_sync_thread(dm->state, dm->mcast_ifn, dm->syncid);
goto out_unlock;
} else if (cmd == IP_VS_SO_SET_STOPDAEMON) {
struct ip_vs_daemon_user *dm = (struct ip_vs_daemon_user *)arg;
ret = stop_sync_thread(dm->state);
goto out_unlock;
}
usvc_compat = (struct ip_vs_service_user *)arg;
udest_compat = (struct ip_vs_dest_user *)(usvc_compat + 1);
/* We only use the new structs internally, so copy userspace compat
* structs to extended internal versions */
ip_vs_copy_usvc_compat(&usvc, usvc_compat);
ip_vs_copy_udest_compat(&udest, udest_compat);
if (cmd == IP_VS_SO_SET_ZERO) {
/* if no service address is set, zero counters in all */
if (!usvc.fwmark && !usvc.addr.ip && !usvc.port) {
ret = ip_vs_zero_all();
goto out_unlock;
}
}
/* Check for valid protocol: TCP or UDP, even for fwmark!=0 */
if (usvc.protocol != IPPROTO_TCP && usvc.protocol != IPPROTO_UDP) {
pr_err("set_ctl: invalid protocol: %d %pI4:%d %s\n",
usvc.protocol, &usvc.addr.ip,
ntohs(usvc.port), usvc.sched_name);
ret = -EFAULT;
goto out_unlock;
}
/* Lookup the exact service by <protocol, addr, port> or fwmark */
if (usvc.fwmark == 0)
svc = __ip_vs_service_get(usvc.af, usvc.protocol,
&usvc.addr, usvc.port);
else
svc = __ip_vs_svc_fwm_get(usvc.af, usvc.fwmark);
if (cmd != IP_VS_SO_SET_ADD
&& (svc == NULL || svc->protocol != usvc.protocol)) {
ret = -ESRCH;
goto out_unlock;
}
switch (cmd) {
case IP_VS_SO_SET_ADD:
if (svc != NULL)
ret = -EEXIST;
else
ret = ip_vs_add_service(&usvc, &svc);
break;
case IP_VS_SO_SET_EDIT:
ret = ip_vs_edit_service(svc, &usvc);
break;
case IP_VS_SO_SET_DEL:
ret = ip_vs_del_service(svc);
if (!ret)
goto out_unlock;
break;
case IP_VS_SO_SET_ZERO:
ret = ip_vs_zero_service(svc);
break;
case IP_VS_SO_SET_ADDDEST:
ret = ip_vs_add_dest(svc, &udest);
break;
case IP_VS_SO_SET_EDITDEST:
ret = ip_vs_edit_dest(svc, &udest);
break;
case IP_VS_SO_SET_DELDEST:
ret = ip_vs_del_dest(svc, &udest);
break;
default:
ret = -EINVAL;
}
if (svc)
ip_vs_service_put(svc);
out_unlock:
mutex_unlock(&__ip_vs_mutex);
out_dec:
/* decrease the module use count */
ip_vs_use_count_dec();
return ret;
}
| [[2080, "\tif (cmd < IP_VS_BASE_CTL || cmd > IP_VS_SO_SET_MAX)\n"], [2081, "\t\treturn -EINVAL;\n"], [2082, "\tif (len < 0 || len > MAX_ARG_LEN)\n"], [2083, "\t\treturn -EINVAL;\n"]] | [[2080, "if (cmd < IP_VS_BASE_CTL || cmd > IP_VS_SO_SET_MAX)"], [2081, "return -EINVAL;"], [2082, "if (len < 0 || len > MAX_ARG_LEN)"], [2083, "return -EINVAL;"]] | [
"CVE-2013-4588"
] | [
"CWE-119"
] | null | {
"Execution Environment": null,
"Explanation": null,
"External Function": null,
"Function Argument": null,
"Globals": null,
"Type Execution Declaration": null
} |
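One oddity worth noting in the do_ip_vs_set_ctl() body shown in record 5: len is declared as unsigned int, so the (len < 0) half of its range check can never fire. A tiny standalone demonstration (the value is hypothetical):

```c
/* The (len < 0) test on an unsigned variable is always false; gcc
 * reports exactly this with -Wtype-limits ("comparison of unsigned
 * expression < 0 is always false"). */
#include <stdio.h>

int main(void)
{
	unsigned int len = 5;	/* hypothetical ioctl length */

	if (len < 0)
		printf("never reached\n");
	else
		printf("len = %u; (len < 0) can never be true\n", len);
	return 0;
}
```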
6 | linux | https://github.com/torvalds/linux | net/netfilter/ipvs/ip_vs_ctl.c | 04bcef2a83f40c6db24222b27a52892cba39dffb | ipvs: Add boundary check on ioctl arguments
The ipvs code has a nifty system for determining the size of ioctl command
copies; it defines an array of lengths indexed by cmd
to find the right one.
Unfortunately, the ipvs code forgot to check whether the cmd was in the
range that the array provides, allowing an index outside of the
array, which then yields a "garbage" length that gets used for
copying into a stack buffer.
Fix this by adding sanity checks on these as well as the copy size.
[ horms@verge.net.au: adjusted limit to IP_VS_SO_GET_MAX ]
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Patrick McHardy <kaber@trash.net> | false | cb91c5bff7f5d1bc520dfb0a8a207efb | do_ip_vs_get_ctl | static int
do_ip_vs_get_ctl(struct sock *sk, int cmd, void __user *user, int *len)
{
unsigned char arg[128];
int ret = 0;
unsigned int copylen;
if (!capable(CAP_NET_ADMIN))
return -EPERM;
if (cmd < IP_VS_BASE_CTL || cmd > IP_VS_SO_GET_MAX)
return -EINVAL;
if (*len < get_arglen[GET_CMDID(cmd)]) {
pr_err("get_ctl: len %u < %u\n",
*len, get_arglen[GET_CMDID(cmd)]);
return -EINVAL;
}
copylen = get_arglen[GET_CMDID(cmd)];
if (copylen > 128)
return -EINVAL;
if (copy_from_user(arg, user, copylen) != 0)
return -EFAULT;
if (mutex_lock_interruptible(&__ip_vs_mutex))
return -ERESTARTSYS;
switch (cmd) {
case IP_VS_SO_GET_VERSION:
{
char buf[64];
sprintf(buf, "IP Virtual Server version %d.%d.%d (size=%d)",
NVERSION(IP_VS_VERSION_CODE), IP_VS_CONN_TAB_SIZE);
if (copy_to_user(user, buf, strlen(buf)+1) != 0) {
ret = -EFAULT;
goto out;
}
*len = strlen(buf)+1;
}
break;
case IP_VS_SO_GET_INFO:
{
struct ip_vs_getinfo info;
info.version = IP_VS_VERSION_CODE;
info.size = IP_VS_CONN_TAB_SIZE;
info.num_services = ip_vs_num_services;
if (copy_to_user(user, &info, sizeof(info)) != 0)
ret = -EFAULT;
}
break;
case IP_VS_SO_GET_SERVICES:
{
struct ip_vs_get_services *get;
int size;
get = (struct ip_vs_get_services *)arg;
size = sizeof(*get) +
sizeof(struct ip_vs_service_entry) * get->num_services;
if (*len != size) {
pr_err("length: %u != %u\n", *len, size);
ret = -EINVAL;
goto out;
}
ret = __ip_vs_get_service_entries(get, user);
}
break;
case IP_VS_SO_GET_SERVICE:
{
struct ip_vs_service_entry *entry;
struct ip_vs_service *svc;
union nf_inet_addr addr;
entry = (struct ip_vs_service_entry *)arg;
addr.ip = entry->addr;
if (entry->fwmark)
svc = __ip_vs_svc_fwm_get(AF_INET, entry->fwmark);
else
svc = __ip_vs_service_get(AF_INET, entry->protocol,
&addr, entry->port);
if (svc) {
ip_vs_copy_service(entry, svc);
if (copy_to_user(user, entry, sizeof(*entry)) != 0)
ret = -EFAULT;
ip_vs_service_put(svc);
} else
ret = -ESRCH;
}
break;
case IP_VS_SO_GET_DESTS:
{
struct ip_vs_get_dests *get;
int size;
get = (struct ip_vs_get_dests *)arg;
size = sizeof(*get) +
sizeof(struct ip_vs_dest_entry) * get->num_dests;
if (*len != size) {
pr_err("length: %u != %u\n", *len, size);
ret = -EINVAL;
goto out;
}
ret = __ip_vs_get_dest_entries(get, user);
}
break;
case IP_VS_SO_GET_TIMEOUT:
{
struct ip_vs_timeout_user t;
__ip_vs_get_timeouts(&t);
if (copy_to_user(user, &t, sizeof(t)) != 0)
ret = -EFAULT;
}
break;
case IP_VS_SO_GET_DAEMON:
{
struct ip_vs_daemon_user d[2];
memset(&d, 0, sizeof(d));
if (ip_vs_sync_state & IP_VS_STATE_MASTER) {
d[0].state = IP_VS_STATE_MASTER;
strlcpy(d[0].mcast_ifn, ip_vs_master_mcast_ifn, sizeof(d[0].mcast_ifn));
d[0].syncid = ip_vs_master_syncid;
}
if (ip_vs_sync_state & IP_VS_STATE_BACKUP) {
d[1].state = IP_VS_STATE_BACKUP;
strlcpy(d[1].mcast_ifn, ip_vs_backup_mcast_ifn, sizeof(d[1].mcast_ifn));
d[1].syncid = ip_vs_backup_syncid;
}
if (copy_to_user(user, &d, sizeof(d)) != 0)
ret = -EFAULT;
}
break;
default:
ret = -EINVAL;
}
out:
mutex_unlock(&__ip_vs_mutex);
return ret;
}
| [[2359, "\tunsigned int copylen;\n"], [2364, "\tif (cmd < IP_VS_BASE_CTL || cmd > IP_VS_SO_GET_MAX)\n"], [2365, "\t\treturn -EINVAL;\n"], [2366, "\n"], [2373, "\tcopylen = get_arglen[GET_CMDID(cmd)];\n"], [2374, "\tif (copylen > 128)\n"], [2375, "\t\treturn -EINVAL;\n"], [2376, "\n"], [2377, "\tif (copy_from_user(arg, user, copylen) != 0)\n"]] | [[2359, "unsigned int copylen;"], [2364, "if (cmd < IP_VS_BASE_CTL || cmd > IP_VS_SO_GET_MAX)"], [2365, "return -EINVAL;"], [2366, "\n"], [2373, "copylen = get_arglen[GET_CMDID(cmd)];"], [2374, "if (copylen > 128)"], [2375, "return -EINVAL;"], [2376, "\n"], [2377, "if (copy_from_user(arg, user, copylen) != 0)"]] | [
"CVE-2013-4588"
] | [
"CWE-119"
] | 6 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"GET_CMDID"
],
"Function Argument": [
"cmd"
],
"Globals": [
"get_arglen"
],
"Type Execution Declaration": []
} |
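Records 4 and 6 show the core of CVE-2013-4588: GET_CMDID(cmd) indexes get_arglen[] without any range check on cmd, and the looked-up length then feeds a copy_from_user() into a 128-byte stack buffer. A userspace sketch of the fixed pattern, with memcpy() standing in for copy_from_user() and all command numbers and lengths hypothetical:

```c
/* Userspace analogue of the ipvs fix: validate a user-supplied command
 * number against the table it indexes, then validate the looked-up
 * length against the destination buffer, before copying. */
#include <stdio.h>
#include <string.h>

#define CMD_BASE	64			/* hypothetical command range */
#define CMD_MAX		66
#define CMDID(c)	((c) - CMD_BASE)

static const size_t arglen[] = { 8, 16, 128 };	/* indexed by CMDID(cmd) */

static int handle_cmd(int cmd, const void *user, size_t userlen)
{
	unsigned char arg[128];
	size_t copylen;

	/* Without these checks, CMDID(cmd) can index far outside
	 * arglen[], and the garbage "length" then overflows arg[]. */
	if (cmd < CMD_BASE || cmd > CMD_MAX)
		return -1;
	copylen = arglen[CMDID(cmd)];
	if (copylen > sizeof(arg) || copylen > userlen)
		return -1;

	memcpy(arg, user, copylen);	/* stand-in for copy_from_user() */
	return 0;
}

int main(void)
{
	unsigned char buf[128] = { 0 };

	printf("valid cmd: %d\n", handle_cmd(65, buf, sizeof(buf)));	/* 0 */
	printf("rogue cmd: %d\n", handle_cmd(9999, buf, sizeof(buf)));	/* -1 */
	return 0;
}
```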
7 | linux | https://github.com/torvalds/linux | drivers/vfio/pci/vfio_pci.c | 05692d7005a364add85c6e25a6c4447ce08f913a | vfio/pci: Fix integer overflows, bitmask check
The VFIO_DEVICE_SET_IRQS ioctl did not sufficiently sanitize
user-supplied integers, potentially allowing memory corruption. This
patch adds appropriate integer overflow checks, checks the range bounds
for VFIO_IRQ_SET_DATA_NONE, and also verifies that only a single element
in the VFIO_IRQ_SET_DATA_TYPE_MASK bitmask is set.
VFIO_IRQ_SET_ACTION_TYPE_MASK is already correctly checked later in
vfio_pci_set_irqs_ioctl().
Furthermore, a kzalloc is changed to a kcalloc because the use of a
kzalloc with an integer multiplication allowed an integer overflow
condition to be reached without this patch. kcalloc checks for overflow
and should prevent a similar occurrence.
Signed-off-by: Vlad Tsyrklevich <vlad@tsyrklevich.net>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com> | true | a049777f01d743bcf5935c70c207b35e | vfio_pci_ioctl | static long vfio_pci_ioctl(void *device_data,
unsigned int cmd, unsigned long arg)
{
struct vfio_pci_device *vdev = device_data;
unsigned long minsz;
if (cmd == VFIO_DEVICE_GET_INFO) {
struct vfio_device_info info;
minsz = offsetofend(struct vfio_device_info, num_irqs);
if (copy_from_user(&info, (void __user *)arg, minsz))
return -EFAULT;
if (info.argsz < minsz)
return -EINVAL;
info.flags = VFIO_DEVICE_FLAGS_PCI;
if (vdev->reset_works)
info.flags |= VFIO_DEVICE_FLAGS_RESET;
info.num_regions = VFIO_PCI_NUM_REGIONS + vdev->num_regions;
info.num_irqs = VFIO_PCI_NUM_IRQS;
return copy_to_user((void __user *)arg, &info, minsz) ?
-EFAULT : 0;
} else if (cmd == VFIO_DEVICE_GET_REGION_INFO) {
struct pci_dev *pdev = vdev->pdev;
struct vfio_region_info info;
struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
int i, ret;
minsz = offsetofend(struct vfio_region_info, offset);
if (copy_from_user(&info, (void __user *)arg, minsz))
return -EFAULT;
if (info.argsz < minsz)
return -EINVAL;
switch (info.index) {
case VFIO_PCI_CONFIG_REGION_INDEX:
info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
info.size = pdev->cfg_size;
info.flags = VFIO_REGION_INFO_FLAG_READ |
VFIO_REGION_INFO_FLAG_WRITE;
break;
case VFIO_PCI_BAR0_REGION_INDEX ... VFIO_PCI_BAR5_REGION_INDEX:
info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
info.size = pci_resource_len(pdev, info.index);
if (!info.size) {
info.flags = 0;
break;
}
info.flags = VFIO_REGION_INFO_FLAG_READ |
VFIO_REGION_INFO_FLAG_WRITE;
if (vdev->bar_mmap_supported[info.index]) {
info.flags |= VFIO_REGION_INFO_FLAG_MMAP;
if (info.index == vdev->msix_bar) {
ret = msix_sparse_mmap_cap(vdev, &caps);
if (ret)
return ret;
}
}
break;
case VFIO_PCI_ROM_REGION_INDEX:
{
void __iomem *io;
size_t size;
info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
info.flags = 0;
/* Report the BAR size, not the ROM size */
info.size = pci_resource_len(pdev, info.index);
if (!info.size) {
/* Shadow ROMs appear as PCI option ROMs */
if (pdev->resource[PCI_ROM_RESOURCE].flags &
IORESOURCE_ROM_SHADOW)
info.size = 0x20000;
else
break;
}
/* Is it really there? */
io = pci_map_rom(pdev, &size);
if (!io || !size) {
info.size = 0;
break;
}
pci_unmap_rom(pdev, io);
info.flags = VFIO_REGION_INFO_FLAG_READ;
break;
}
case VFIO_PCI_VGA_REGION_INDEX:
if (!vdev->has_vga)
return -EINVAL;
info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
info.size = 0xc0000;
info.flags = VFIO_REGION_INFO_FLAG_READ |
VFIO_REGION_INFO_FLAG_WRITE;
break;
default:
if (info.index >=
VFIO_PCI_NUM_REGIONS + vdev->num_regions)
return -EINVAL;
i = info.index - VFIO_PCI_NUM_REGIONS;
info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
info.size = vdev->region[i].size;
info.flags = vdev->region[i].flags;
ret = region_type_cap(vdev, &caps,
vdev->region[i].type,
vdev->region[i].subtype);
if (ret)
return ret;
}
if (caps.size) {
info.flags |= VFIO_REGION_INFO_FLAG_CAPS;
if (info.argsz < sizeof(info) + caps.size) {
info.argsz = sizeof(info) + caps.size;
info.cap_offset = 0;
} else {
vfio_info_cap_shift(&caps, sizeof(info));
if (copy_to_user((void __user *)arg +
sizeof(info), caps.buf,
caps.size)) {
kfree(caps.buf);
return -EFAULT;
}
info.cap_offset = sizeof(info);
}
kfree(caps.buf);
}
return copy_to_user((void __user *)arg, &info, minsz) ?
-EFAULT : 0;
} else if (cmd == VFIO_DEVICE_GET_IRQ_INFO) {
struct vfio_irq_info info;
minsz = offsetofend(struct vfio_irq_info, count);
if (copy_from_user(&info, (void __user *)arg, minsz))
return -EFAULT;
if (info.argsz < minsz || info.index >= VFIO_PCI_NUM_IRQS)
return -EINVAL;
switch (info.index) {
case VFIO_PCI_INTX_IRQ_INDEX ... VFIO_PCI_MSIX_IRQ_INDEX:
case VFIO_PCI_REQ_IRQ_INDEX:
break;
case VFIO_PCI_ERR_IRQ_INDEX:
if (pci_is_pcie(vdev->pdev))
break;
/* pass thru to return error */
default:
return -EINVAL;
}
info.flags = VFIO_IRQ_INFO_EVENTFD;
info.count = vfio_pci_get_irq_count(vdev, info.index);
if (info.index == VFIO_PCI_INTX_IRQ_INDEX)
info.flags |= (VFIO_IRQ_INFO_MASKABLE |
VFIO_IRQ_INFO_AUTOMASKED);
else
info.flags |= VFIO_IRQ_INFO_NORESIZE;
return copy_to_user((void __user *)arg, &info, minsz) ?
-EFAULT : 0;
} else if (cmd == VFIO_DEVICE_SET_IRQS) {
struct vfio_irq_set hdr;
u8 *data = NULL;
int ret = 0;
minsz = offsetofend(struct vfio_irq_set, count);
if (copy_from_user(&hdr, (void __user *)arg, minsz))
return -EFAULT;
if (hdr.argsz < minsz || hdr.index >= VFIO_PCI_NUM_IRQS ||
hdr.flags & ~(VFIO_IRQ_SET_DATA_TYPE_MASK |
VFIO_IRQ_SET_ACTION_TYPE_MASK))
return -EINVAL;
if (!(hdr.flags & VFIO_IRQ_SET_DATA_NONE)) {
size_t size;
int max = vfio_pci_get_irq_count(vdev, hdr.index);
if (hdr.flags & VFIO_IRQ_SET_DATA_BOOL)
size = sizeof(uint8_t);
else if (hdr.flags & VFIO_IRQ_SET_DATA_EVENTFD)
size = sizeof(int32_t);
else
return -EINVAL;
if (hdr.argsz - minsz < hdr.count * size ||
hdr.start >= max || hdr.start + hdr.count > max)
return -EINVAL;
data = memdup_user((void __user *)(arg + minsz),
hdr.count * size);
if (IS_ERR(data))
return PTR_ERR(data);
}
mutex_lock(&vdev->igate);
ret = vfio_pci_set_irqs_ioctl(vdev, hdr.flags, hdr.index,
hdr.start, hdr.count, data);
mutex_unlock(&vdev->igate);
kfree(data);
return ret;
} else if (cmd == VFIO_DEVICE_RESET) {
return vdev->reset_works ?
pci_try_reset_function(vdev->pdev) : -EINVAL;
} else if (cmd == VFIO_DEVICE_GET_PCI_HOT_RESET_INFO) {
struct vfio_pci_hot_reset_info hdr;
struct vfio_pci_fill_info fill = { 0 };
struct vfio_pci_dependent_device *devices = NULL;
bool slot = false;
int ret = 0;
minsz = offsetofend(struct vfio_pci_hot_reset_info, count);
if (copy_from_user(&hdr, (void __user *)arg, minsz))
return -EFAULT;
if (hdr.argsz < minsz)
return -EINVAL;
hdr.flags = 0;
/* Can we do a slot or bus reset or neither? */
if (!pci_probe_reset_slot(vdev->pdev->slot))
slot = true;
else if (pci_probe_reset_bus(vdev->pdev->bus))
return -ENODEV;
/* How many devices are affected? */
ret = vfio_pci_for_each_slot_or_bus(vdev->pdev,
vfio_pci_count_devs,
&fill.max, slot);
if (ret)
return ret;
WARN_ON(!fill.max); /* Should always be at least one */
/*
* If there's enough space, fill it now, otherwise return
* -ENOSPC and the number of devices affected.
*/
if (hdr.argsz < sizeof(hdr) + (fill.max * sizeof(*devices))) {
ret = -ENOSPC;
hdr.count = fill.max;
goto reset_info_exit;
}
devices = kcalloc(fill.max, sizeof(*devices), GFP_KERNEL);
if (!devices)
return -ENOMEM;
fill.devices = devices;
ret = vfio_pci_for_each_slot_or_bus(vdev->pdev,
vfio_pci_fill_devs,
&fill, slot);
/*
* If a device was removed between counting and filling,
* we may come up short of fill.max. If a device was
* added, we'll have a return of -EAGAIN above.
*/
if (!ret)
hdr.count = fill.cur;
reset_info_exit:
if (copy_to_user((void __user *)arg, &hdr, minsz))
ret = -EFAULT;
if (!ret) {
if (copy_to_user((void __user *)(arg + minsz), devices,
hdr.count * sizeof(*devices)))
ret = -EFAULT;
}
kfree(devices);
return ret;
} else if (cmd == VFIO_DEVICE_PCI_HOT_RESET) {
struct vfio_pci_hot_reset hdr;
int32_t *group_fds;
struct vfio_pci_group_entry *groups;
struct vfio_pci_group_info info;
bool slot = false;
int i, count = 0, ret = 0;
minsz = offsetofend(struct vfio_pci_hot_reset, count);
if (copy_from_user(&hdr, (void __user *)arg, minsz))
return -EFAULT;
if (hdr.argsz < minsz || hdr.flags)
return -EINVAL;
/* Can we do a slot or bus reset or neither? */
if (!pci_probe_reset_slot(vdev->pdev->slot))
slot = true;
else if (pci_probe_reset_bus(vdev->pdev->bus))
return -ENODEV;
/*
* We can't let userspace give us an arbitrarily large
* buffer to copy, so verify how many we think there
* could be. Note groups can have multiple devices so
* one group per device is the max.
*/
ret = vfio_pci_for_each_slot_or_bus(vdev->pdev,
vfio_pci_count_devs,
&count, slot);
if (ret)
return ret;
/* Somewhere between 1 and count is OK */
if (!hdr.count || hdr.count > count)
return -EINVAL;
group_fds = kcalloc(hdr.count, sizeof(*group_fds), GFP_KERNEL);
groups = kcalloc(hdr.count, sizeof(*groups), GFP_KERNEL);
if (!group_fds || !groups) {
kfree(group_fds);
kfree(groups);
return -ENOMEM;
}
if (copy_from_user(group_fds, (void __user *)(arg + minsz),
hdr.count * sizeof(*group_fds))) {
kfree(group_fds);
kfree(groups);
return -EFAULT;
}
/*
* For each group_fd, get the group through the vfio external
* user interface and store the group and iommu ID. This
* ensures the group is held across the reset.
*/
for (i = 0; i < hdr.count; i++) {
struct vfio_group *group;
struct fd f = fdget(group_fds[i]);
if (!f.file) {
ret = -EBADF;
break;
}
group = vfio_group_get_external_user(f.file);
fdput(f);
if (IS_ERR(group)) {
ret = PTR_ERR(group);
break;
}
groups[i].group = group;
groups[i].id = vfio_external_user_iommu_id(group);
}
kfree(group_fds);
/* release reference to groups on error */
if (ret)
goto hot_reset_release;
info.count = hdr.count;
info.groups = groups;
/*
* Test whether all the affected devices are contained
* by the set of groups provided by the user.
*/
ret = vfio_pci_for_each_slot_or_bus(vdev->pdev,
vfio_pci_validate_devs,
&info, slot);
if (!ret)
/* User has access, do the reset */
ret = slot ? pci_try_reset_slot(vdev->pdev->slot) :
pci_try_reset_bus(vdev->pdev->bus);
hot_reset_release:
for (i--; i >= 0; i--)
vfio_group_put_external_user(groups[i].group);
kfree(groups);
return ret;
}
return -ENOTTY;
}
| [[833, "\t\tint ret = 0;\n"], [845, "\t\tif (!(hdr.flags & VFIO_IRQ_SET_DATA_NONE)) {\n"], [846, "\t\t\tsize_t size;\n"], [847, "\t\t\tint max = vfio_pci_get_irq_count(vdev, hdr.index);\n"], [849, "\t\t\tif (hdr.flags & VFIO_IRQ_SET_DATA_BOOL)\n"], [850, "\t\t\t\tsize = sizeof(uint8_t);\n"], [851, "\t\t\telse if (hdr.flags & VFIO_IRQ_SET_DATA_EVENTFD)\n"], [852, "\t\t\t\tsize = sizeof(int32_t);\n"], [853, "\t\t\telse\n"], [854, "\t\t\t\treturn -EINVAL;\n"], [856, "\t\t\tif (hdr.argsz - minsz < hdr.count * size ||\n"], [857, "\t\t\t hdr.start >= max || hdr.start + hdr.count > max)\n"]] | [[833, "int ret = 0;"], [845, "if (!(hdr.flags & VFIO_IRQ_SET_DATA_NONE))"], [846, "size_t size;"], [847, "int max = vfio_pci_get_irq_count(vdev, hdr.index);"], [849, "if (hdr.flags & VFIO_IRQ_SET_DATA_BOOL)"], [850, "size = sizeof(uint8_t);"], [851, "else if (hdr.flags & VFIO_IRQ_SET_DATA_EVENTFD)"], [852, "size = sizeof(int32_t);"], [853, "else"], [854, "return -EINVAL;"], [856, "if (hdr.argsz - minsz < hdr.count * size ||\n\t\t\t hdr.start >= max || hdr.start + hdr.count > max)"]] | [
"CVE-2016-9083",
"CVE-2016-9084"
] | [
"CWE-190",
"CWE-119"
] | 8 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"vfio_pci_get_irq_count"
],
"Function Argument": [
"device_data",
"cmd",
"arg"
],
"Globals": [
"VFIO_IRQ_SET_DATA_NONE",
"VFIO_IRQ_SET_DATA_BOOL",
"VFIO_IRQ_SET_DATA_EVENTFD",
"VFIO_IRQ_SET_DATA_TYPE_MASK"
],
"Type Execution Declaration": []
} |
8 | linux | https://github.com/torvalds/linux | drivers/vfio/pci/vfio_pci.c | 05692d7005a364add85c6e25a6c4447ce08f913a | vfio/pci: Fix integer overflows, bitmask check
The VFIO_DEVICE_SET_IRQS ioctl did not sufficiently sanitize
user-supplied integers, potentially allowing memory corruption. This
patch adds appropriate integer overflow checks, checks the range bounds
for VFIO_IRQ_SET_DATA_NONE, and also verifies that only a single element
in the VFIO_IRQ_SET_DATA_TYPE_MASK bitmask is set.
VFIO_IRQ_SET_ACTION_TYPE_MASK is already correctly checked later in
vfio_pci_set_irqs_ioctl().
Furthermore, a kzalloc is changed to a kcalloc because the use of a
kzalloc with an integer multiplication allowed an integer overflow
condition to be reached without this patch. kcalloc checks for overflow
and should prevent a similar occurrence.
Signed-off-by: Vlad Tsyrklevich <vlad@tsyrklevich.net>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com> | false | a868d284be988ce6b14e3074a510fdcf | vfio_pci_ioctl | static long vfio_pci_ioctl(void *device_data,
unsigned int cmd, unsigned long arg)
{
struct vfio_pci_device *vdev = device_data;
unsigned long minsz;
if (cmd == VFIO_DEVICE_GET_INFO) {
struct vfio_device_info info;
minsz = offsetofend(struct vfio_device_info, num_irqs);
if (copy_from_user(&info, (void __user *)arg, minsz))
return -EFAULT;
if (info.argsz < minsz)
return -EINVAL;
info.flags = VFIO_DEVICE_FLAGS_PCI;
if (vdev->reset_works)
info.flags |= VFIO_DEVICE_FLAGS_RESET;
info.num_regions = VFIO_PCI_NUM_REGIONS + vdev->num_regions;
info.num_irqs = VFIO_PCI_NUM_IRQS;
return copy_to_user((void __user *)arg, &info, minsz) ?
-EFAULT : 0;
} else if (cmd == VFIO_DEVICE_GET_REGION_INFO) {
struct pci_dev *pdev = vdev->pdev;
struct vfio_region_info info;
struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
int i, ret;
minsz = offsetofend(struct vfio_region_info, offset);
if (copy_from_user(&info, (void __user *)arg, minsz))
return -EFAULT;
if (info.argsz < minsz)
return -EINVAL;
switch (info.index) {
case VFIO_PCI_CONFIG_REGION_INDEX:
info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
info.size = pdev->cfg_size;
info.flags = VFIO_REGION_INFO_FLAG_READ |
VFIO_REGION_INFO_FLAG_WRITE;
break;
case VFIO_PCI_BAR0_REGION_INDEX ... VFIO_PCI_BAR5_REGION_INDEX:
info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
info.size = pci_resource_len(pdev, info.index);
if (!info.size) {
info.flags = 0;
break;
}
info.flags = VFIO_REGION_INFO_FLAG_READ |
VFIO_REGION_INFO_FLAG_WRITE;
if (vdev->bar_mmap_supported[info.index]) {
info.flags |= VFIO_REGION_INFO_FLAG_MMAP;
if (info.index == vdev->msix_bar) {
ret = msix_sparse_mmap_cap(vdev, &caps);
if (ret)
return ret;
}
}
break;
case VFIO_PCI_ROM_REGION_INDEX:
{
void __iomem *io;
size_t size;
info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
info.flags = 0;
/* Report the BAR size, not the ROM size */
info.size = pci_resource_len(pdev, info.index);
if (!info.size) {
/* Shadow ROMs appear as PCI option ROMs */
if (pdev->resource[PCI_ROM_RESOURCE].flags &
IORESOURCE_ROM_SHADOW)
info.size = 0x20000;
else
break;
}
/* Is it really there? */
io = pci_map_rom(pdev, &size);
if (!io || !size) {
info.size = 0;
break;
}
pci_unmap_rom(pdev, io);
info.flags = VFIO_REGION_INFO_FLAG_READ;
break;
}
case VFIO_PCI_VGA_REGION_INDEX:
if (!vdev->has_vga)
return -EINVAL;
info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
info.size = 0xc0000;
info.flags = VFIO_REGION_INFO_FLAG_READ |
VFIO_REGION_INFO_FLAG_WRITE;
break;
default:
if (info.index >=
VFIO_PCI_NUM_REGIONS + vdev->num_regions)
return -EINVAL;
i = info.index - VFIO_PCI_NUM_REGIONS;
info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
info.size = vdev->region[i].size;
info.flags = vdev->region[i].flags;
ret = region_type_cap(vdev, &caps,
vdev->region[i].type,
vdev->region[i].subtype);
if (ret)
return ret;
}
if (caps.size) {
info.flags |= VFIO_REGION_INFO_FLAG_CAPS;
if (info.argsz < sizeof(info) + caps.size) {
info.argsz = sizeof(info) + caps.size;
info.cap_offset = 0;
} else {
vfio_info_cap_shift(&caps, sizeof(info));
if (copy_to_user((void __user *)arg +
sizeof(info), caps.buf,
caps.size)) {
kfree(caps.buf);
return -EFAULT;
}
info.cap_offset = sizeof(info);
}
kfree(caps.buf);
}
return copy_to_user((void __user *)arg, &info, minsz) ?
-EFAULT : 0;
} else if (cmd == VFIO_DEVICE_GET_IRQ_INFO) {
struct vfio_irq_info info;
minsz = offsetofend(struct vfio_irq_info, count);
if (copy_from_user(&info, (void __user *)arg, minsz))
return -EFAULT;
if (info.argsz < minsz || info.index >= VFIO_PCI_NUM_IRQS)
return -EINVAL;
switch (info.index) {
case VFIO_PCI_INTX_IRQ_INDEX ... VFIO_PCI_MSIX_IRQ_INDEX:
case VFIO_PCI_REQ_IRQ_INDEX:
break;
case VFIO_PCI_ERR_IRQ_INDEX:
if (pci_is_pcie(vdev->pdev))
break;
/* pass thru to return error */
default:
return -EINVAL;
}
info.flags = VFIO_IRQ_INFO_EVENTFD;
info.count = vfio_pci_get_irq_count(vdev, info.index);
if (info.index == VFIO_PCI_INTX_IRQ_INDEX)
info.flags |= (VFIO_IRQ_INFO_MASKABLE |
VFIO_IRQ_INFO_AUTOMASKED);
else
info.flags |= VFIO_IRQ_INFO_NORESIZE;
return copy_to_user((void __user *)arg, &info, minsz) ?
-EFAULT : 0;
} else if (cmd == VFIO_DEVICE_SET_IRQS) {
struct vfio_irq_set hdr;
size_t size;
u8 *data = NULL;
int max, ret = 0;
minsz = offsetofend(struct vfio_irq_set, count);
if (copy_from_user(&hdr, (void __user *)arg, minsz))
return -EFAULT;
if (hdr.argsz < minsz || hdr.index >= VFIO_PCI_NUM_IRQS ||
hdr.count >= (U32_MAX - hdr.start) ||
hdr.flags & ~(VFIO_IRQ_SET_DATA_TYPE_MASK |
VFIO_IRQ_SET_ACTION_TYPE_MASK))
return -EINVAL;
max = vfio_pci_get_irq_count(vdev, hdr.index);
if (hdr.start >= max || hdr.start + hdr.count > max)
return -EINVAL;
switch (hdr.flags & VFIO_IRQ_SET_DATA_TYPE_MASK) {
case VFIO_IRQ_SET_DATA_NONE:
size = 0;
break;
case VFIO_IRQ_SET_DATA_BOOL:
size = sizeof(uint8_t);
break;
case VFIO_IRQ_SET_DATA_EVENTFD:
size = sizeof(int32_t);
break;
default:
return -EINVAL;
}
if (size) {
if (hdr.argsz - minsz < hdr.count * size)
return -EINVAL;
data = memdup_user((void __user *)(arg + minsz),
hdr.count * size);
if (IS_ERR(data))
return PTR_ERR(data);
}
mutex_lock(&vdev->igate);
ret = vfio_pci_set_irqs_ioctl(vdev, hdr.flags, hdr.index,
hdr.start, hdr.count, data);
mutex_unlock(&vdev->igate);
kfree(data);
return ret;
} else if (cmd == VFIO_DEVICE_RESET) {
return vdev->reset_works ?
pci_try_reset_function(vdev->pdev) : -EINVAL;
} else if (cmd == VFIO_DEVICE_GET_PCI_HOT_RESET_INFO) {
struct vfio_pci_hot_reset_info hdr;
struct vfio_pci_fill_info fill = { 0 };
struct vfio_pci_dependent_device *devices = NULL;
bool slot = false;
int ret = 0;
minsz = offsetofend(struct vfio_pci_hot_reset_info, count);
if (copy_from_user(&hdr, (void __user *)arg, minsz))
return -EFAULT;
if (hdr.argsz < minsz)
return -EINVAL;
hdr.flags = 0;
/* Can we do a slot or bus reset or neither? */
if (!pci_probe_reset_slot(vdev->pdev->slot))
slot = true;
else if (pci_probe_reset_bus(vdev->pdev->bus))
return -ENODEV;
/* How many devices are affected? */
ret = vfio_pci_for_each_slot_or_bus(vdev->pdev,
vfio_pci_count_devs,
&fill.max, slot);
if (ret)
return ret;
WARN_ON(!fill.max); /* Should always be at least one */
/*
* If there's enough space, fill it now, otherwise return
* -ENOSPC and the number of devices affected.
*/
if (hdr.argsz < sizeof(hdr) + (fill.max * sizeof(*devices))) {
ret = -ENOSPC;
hdr.count = fill.max;
goto reset_info_exit;
}
devices = kcalloc(fill.max, sizeof(*devices), GFP_KERNEL);
if (!devices)
return -ENOMEM;
fill.devices = devices;
ret = vfio_pci_for_each_slot_or_bus(vdev->pdev,
vfio_pci_fill_devs,
&fill, slot);
/*
* If a device was removed between counting and filling,
* we may come up short of fill.max. If a device was
* added, we'll have a return of -EAGAIN above.
*/
if (!ret)
hdr.count = fill.cur;
reset_info_exit:
if (copy_to_user((void __user *)arg, &hdr, minsz))
ret = -EFAULT;
if (!ret) {
if (copy_to_user((void __user *)(arg + minsz), devices,
hdr.count * sizeof(*devices)))
ret = -EFAULT;
}
kfree(devices);
return ret;
} else if (cmd == VFIO_DEVICE_PCI_HOT_RESET) {
struct vfio_pci_hot_reset hdr;
int32_t *group_fds;
struct vfio_pci_group_entry *groups;
struct vfio_pci_group_info info;
bool slot = false;
int i, count = 0, ret = 0;
minsz = offsetofend(struct vfio_pci_hot_reset, count);
if (copy_from_user(&hdr, (void __user *)arg, minsz))
return -EFAULT;
if (hdr.argsz < minsz || hdr.flags)
return -EINVAL;
/* Can we do a slot or bus reset or neither? */
if (!pci_probe_reset_slot(vdev->pdev->slot))
slot = true;
else if (pci_probe_reset_bus(vdev->pdev->bus))
return -ENODEV;
/*
* We can't let userspace give us an arbitrarily large
* buffer to copy, so verify how many we think there
* could be. Note groups can have multiple devices so
* one group per device is the max.
*/
ret = vfio_pci_for_each_slot_or_bus(vdev->pdev,
vfio_pci_count_devs,
&count, slot);
if (ret)
return ret;
/* Somewhere between 1 and count is OK */
if (!hdr.count || hdr.count > count)
return -EINVAL;
group_fds = kcalloc(hdr.count, sizeof(*group_fds), GFP_KERNEL);
groups = kcalloc(hdr.count, sizeof(*groups), GFP_KERNEL);
if (!group_fds || !groups) {
kfree(group_fds);
kfree(groups);
return -ENOMEM;
}
if (copy_from_user(group_fds, (void __user *)(arg + minsz),
hdr.count * sizeof(*group_fds))) {
kfree(group_fds);
kfree(groups);
return -EFAULT;
}
/*
* For each group_fd, get the group through the vfio external
* user interface and store the group and iommu ID. This
* ensures the group is held across the reset.
*/
for (i = 0; i < hdr.count; i++) {
struct vfio_group *group;
struct fd f = fdget(group_fds[i]);
if (!f.file) {
ret = -EBADF;
break;
}
group = vfio_group_get_external_user(f.file);
fdput(f);
if (IS_ERR(group)) {
ret = PTR_ERR(group);
break;
}
groups[i].group = group;
groups[i].id = vfio_external_user_iommu_id(group);
}
kfree(group_fds);
/* release reference to groups on error */
if (ret)
goto hot_reset_release;
info.count = hdr.count;
info.groups = groups;
/*
* Test whether all the affected devices are contained
* by the set of groups provided by the user.
*/
ret = vfio_pci_for_each_slot_or_bus(vdev->pdev,
vfio_pci_validate_devs,
&info, slot);
if (!ret)
/* User has access, do the reset */
ret = slot ? pci_try_reset_slot(vdev->pdev->slot) :
pci_try_reset_bus(vdev->pdev->bus);
hot_reset_release:
for (i--; i >= 0; i--)
vfio_group_put_external_user(groups[i].group);
kfree(groups);
return ret;
}
return -ENOTTY;
}
| [[832, "\t\tsize_t size;\n"], [834, "\t\tint max, ret = 0;\n"], [842, "\t\t hdr.count >= (U32_MAX - hdr.start) ||\n"], [847, "\t\tmax = vfio_pci_get_irq_count(vdev, hdr.index);\n"], [848, "\t\tif (hdr.start >= max || hdr.start + hdr.count > max)\n"], [849, "\t\t\treturn -EINVAL;\n"], [851, "\t\tswitch (hdr.flags & VFIO_IRQ_SET_DATA_TYPE_MASK) {\n"], [852, "\t\tcase VFIO_IRQ_SET_DATA_NONE:\n"], [853, "\t\t\tsize = 0;\n"], [854, "\t\t\tbreak;\n"], [855, "\t\tcase VFIO_IRQ_SET_DATA_BOOL:\n"], [856, "\t\t\tsize = sizeof(uint8_t);\n"], [857, "\t\t\tbreak;\n"], [858, "\t\tcase VFIO_IRQ_SET_DATA_EVENTFD:\n"], [859, "\t\t\tsize = sizeof(int32_t);\n"], [860, "\t\t\tbreak;\n"], [861, "\t\tdefault:\n"], [862, "\t\t\treturn -EINVAL;\n"], [863, "\t\t}\n"], [865, "\t\tif (size) {\n"], [866, "\t\t\tif (hdr.argsz - minsz < hdr.count * size)\n"]] | [[832, "size_t size;"], [834, "int max, ret = 0;"], [841, "if (hdr.argsz < minsz || hdr.index >= VFIO_PCI_NUM_IRQS ||\n\t\t hdr.count >= (U32_MAX - hdr.start) ||\n\t\t hdr.flags & ~(VFIO_IRQ_SET_DATA_TYPE_MASK |\n\t\t\t\t VFIO_IRQ_SET_ACTION_TYPE_MASK))"], [847, "max = vfio_pci_get_irq_count(vdev, hdr.index);"], [848, "if (hdr.start >= max || hdr.start + hdr.count > max)"], [849, "return -EINVAL;"], [851, "switch (hdr.flags & VFIO_IRQ_SET_DATA_TYPE_MASK) {"], [852, "case VFIO_IRQ_SET_DATA_NONE:"], [853, "size = 0;"], [854, "break;"], [855, "case VFIO_IRQ_SET_DATA_BOOL:"], [856, "size = sizeof(uint8_t);"], [857, "break;"], [858, "case VFIO_IRQ_SET_DATA_EVENTFD:"], [859, "size = sizeof(int32_t);"], [860, "break;"], [861, "default:"], [862, "return -EINVAL;"], [863, "\t\t}\n"], [865, "if (size)"], [866, "if (hdr.argsz - minsz < hdr.count * size)"]] | [
"CVE-2016-9083",
"CVE-2016-9084"
] | [
"CWE-190",
"CWE-119"
] | 8 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"vfio_pci_get_irq_count"
],
"Function Argument": [
"device_data",
"cmd",
"arg"
],
"Globals": [
"VFIO_IRQ_SET_DATA_NONE",
"VFIO_IRQ_SET_DATA_BOOL",
"VFIO_IRQ_SET_DATA_EVENTFD",
"VFIO_IRQ_SET_DATA_TYPE_MASK"
],
"Type Execution Declaration": []
} |
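The key addition in record 8 is the pair of checks hdr.count >= (U32_MAX - hdr.start) and hdr.start + hdr.count > max: without the first, the second can be defeated by wrapping the 32-bit sum. A minimal userspace sketch of the same guard, with hypothetical values:

```c
/* Userspace analogue of the vfio range check: reject start/count pairs
 * whose sum wraps before comparing the sum against the limit. */
#include <stdint.h>
#include <stdio.h>

static int range_ok(uint32_t start, uint32_t count, uint32_t max)
{
	/* Without this wrap check, start = 4, count = 0xFFFFFFFC makes
	 * start + count wrap to 0 and sail past the "> max" test below. */
	if (count >= UINT32_MAX - start)
		return 0;
	if (start >= max || start + count > max)
		return 0;
	return 1;
}

int main(void)
{
	printf("in range: %d\n", range_ok(2, 4, 8));		/* 1 */
	printf("too big:  %d\n", range_ok(2, 7, 8));		/* 0 */
	printf("wrapping: %d\n", range_ok(4, 0xFFFFFFFCu, 8));	/* 0 */
	return 0;
}
```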
9 | linux | https://github.com/torvalds/linux | drivers/vfio/pci/vfio_pci_intrs.c | 05692d7005a364add85c6e25a6c4447ce08f913a | vfio/pci: Fix integer overflows, bitmask check
The VFIO_DEVICE_SET_IRQS ioctl did not sufficiently sanitize
user-supplied integers, potentially allowing memory corruption. This
patch adds appropriate integer overflow checks, checks the range bounds
for VFIO_IRQ_SET_DATA_NONE, and also verifies that only a single element
in the VFIO_IRQ_SET_DATA_TYPE_MASK bitmask is set.
VFIO_IRQ_SET_ACTION_TYPE_MASK is already correctly checked later in
vfio_pci_set_irqs_ioctl().
Furthermore, a kzalloc is changed to a kcalloc because the use of a
kzalloc with an integer multiplication allowed an integer overflow
condition to be reached without this patch. kcalloc checks for overflow
and should prevent a similar occurrence.
Signed-off-by: Vlad Tsyrklevich <vlad@tsyrklevich.net>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com> | true | 4e082208eaeda7cf04e95dca1377d293 | vfio_msi_enable | static int vfio_msi_enable(struct vfio_pci_device *vdev, int nvec, bool msix)
{
struct pci_dev *pdev = vdev->pdev;
unsigned int flag = msix ? PCI_IRQ_MSIX : PCI_IRQ_MSI;
int ret;
if (!is_irq_none(vdev))
return -EINVAL;
vdev->ctx = kzalloc(nvec * sizeof(struct vfio_pci_irq_ctx), GFP_KERNEL);
if (!vdev->ctx)
return -ENOMEM;
/* return the number of supported vectors if we can't get all: */
ret = pci_alloc_irq_vectors(pdev, 1, nvec, flag);
if (ret < nvec) {
if (ret > 0)
pci_free_irq_vectors(pdev);
kfree(vdev->ctx);
return ret;
}
vdev->num_ctx = nvec;
vdev->irq_type = msix ? VFIO_PCI_MSIX_IRQ_INDEX :
VFIO_PCI_MSI_IRQ_INDEX;
if (!msix) {
/*
* Compute the virtual hardware field for max msi vectors -
* it is the log base 2 of the number of vectors.
*/
vdev->msi_qmax = fls(nvec * 2 - 1) - 1;
}
return 0;
}
| [[259, "\tvdev->ctx = kzalloc(nvec * sizeof(struct vfio_pci_irq_ctx), GFP_KERNEL);\n"]] | [[259, "vdev->ctx = kzalloc(nvec * sizeof(struct vfio_pci_irq_ctx), GFP_KERNEL);"]] | [
"CVE-2016-9083",
"CVE-2016-9084"
] | [
"CWE-190",
"CWE-119"
] | 10 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [
"nvec"
],
"Globals": [],
"Type Execution Declaration": []
} |
10 | linux | https://github.com/torvalds/linux | drivers/vfio/pci/vfio_pci_intrs.c | 05692d7005a364add85c6e25a6c4447ce08f913a | vfio/pci: Fix integer overflows, bitmask check
The VFIO_DEVICE_SET_IRQS ioctl did not sufficiently sanitize
user-supplied integers, potentially allowing memory corruption. This
patch adds appropriate integer overflow checks, checks the range bounds
for VFIO_IRQ_SET_DATA_NONE, and also verifies that only a single element
in the VFIO_IRQ_SET_DATA_TYPE_MASK bitmask is set.
VFIO_IRQ_SET_ACTION_TYPE_MASK is already correctly checked later in
vfio_pci_set_irqs_ioctl().
Furthermore, a kzalloc is changed to a kcalloc because the use of a
kzalloc with an integer multiplication allowed an integer overflow
condition to be reached without this patch. kcalloc checks for overflow
and should prevent a similar occurrence.
Signed-off-by: Vlad Tsyrklevich <vlad@tsyrklevich.net>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com> | false | 7e49e89feb2e97fdd27fd1dc9883d202 | vfio_msi_enable | static int vfio_msi_enable(struct vfio_pci_device *vdev, int nvec, bool msix)
{
struct pci_dev *pdev = vdev->pdev;
unsigned int flag = msix ? PCI_IRQ_MSIX : PCI_IRQ_MSI;
int ret;
if (!is_irq_none(vdev))
return -EINVAL;
vdev->ctx = kcalloc(nvec, sizeof(struct vfio_pci_irq_ctx), GFP_KERNEL);
if (!vdev->ctx)
return -ENOMEM;
/* return the number of supported vectors if we can't get all: */
ret = pci_alloc_irq_vectors(pdev, 1, nvec, flag);
if (ret < nvec) {
if (ret > 0)
pci_free_irq_vectors(pdev);
kfree(vdev->ctx);
return ret;
}
vdev->num_ctx = nvec;
vdev->irq_type = msix ? VFIO_PCI_MSIX_IRQ_INDEX :
VFIO_PCI_MSI_IRQ_INDEX;
if (!msix) {
/*
* Compute the virtual hardware field for max msi vectors -
* it is the log base 2 of the number of vectors.
*/
vdev->msi_qmax = fls(nvec * 2 - 1) - 1;
}
return 0;
}
| [[259, "\tvdev->ctx = kcalloc(nvec, sizeof(struct vfio_pci_irq_ctx), GFP_KERNEL);\n"]] | [[259, "vdev->ctx = kcalloc(nvec, sizeof(struct vfio_pci_irq_ctx), GFP_KERNEL);"]] | [
"CVE-2016-9083",
"CVE-2016-9084"
] | [
"CWE-190",
"CWE-119"
] | 10 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [
"nvec"
],
"Globals": [],
"Type Execution Declaration": []
} |
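The vfio_msi_enable() pair above hinges on one call: kzalloc(nvec * sizeof(...)) can wrap when nvec is large, yielding an undersized buffer, while kcalloc(nvec, sizeof(...)) performs the multiplication with an overflow check. A minimal standalone sketch of that check (userspace analogue with calloc(); names are hypothetical, not kernel code):

/* Sketch: why kcalloc(n, size, ...) is safer than kzalloc(n * size, ...).
 * Userspace analogue; SIZE_MAX comes from <stdint.h>. */
#include <stdint.h>
#include <stdlib.h>

void *checked_alloc(size_t n, size_t size)
{
	/* calloc-style overflow check: reject n * size wrap-around */
	if (size && n > SIZE_MAX / size)
		return NULL;           /* would overflow: fail, don't wrap */
	return calloc(n, size);        /* zeroed, like kcalloc */
}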
11 | linux | https://github.com/torvalds/linux | drivers/infiniband/hw/mlx5/qp.c | 0625b4ba1a5d4703c7fb01c497bd6c156908af00 | IB/mlx5: Fix leaking stack memory to userspace
mlx5_ib_create_qp_resp was never initialized and only the first 4 bytes
were written.
Fixes: 41d902cb7c32 ("RDMA/mlx5: Fix definition of mlx5_ib_create_qp_resp")
Cc: <stable@vger.kernel.org>
Acked-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> | true | 4f14403d1b00e2825b924f7bae6f799d | create_qp_common | static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
struct ib_qp_init_attr *init_attr,
struct ib_udata *udata, struct mlx5_ib_qp *qp)
{
struct mlx5_ib_resources *devr = &dev->devr;
int inlen = MLX5_ST_SZ_BYTES(create_qp_in);
struct mlx5_core_dev *mdev = dev->mdev;
struct mlx5_ib_create_qp_resp resp;
struct mlx5_ib_cq *send_cq;
struct mlx5_ib_cq *recv_cq;
unsigned long flags;
u32 uidx = MLX5_IB_DEFAULT_UIDX;
struct mlx5_ib_create_qp ucmd;
struct mlx5_ib_qp_base *base;
int mlx5_st;
void *qpc;
u32 *in;
int err;
mutex_init(&qp->mutex);
spin_lock_init(&qp->sq.lock);
spin_lock_init(&qp->rq.lock);
mlx5_st = to_mlx5_st(init_attr->qp_type);
if (mlx5_st < 0)
return -EINVAL;
if (init_attr->rwq_ind_tbl) {
if (!udata)
return -ENOSYS;
err = create_rss_raw_qp_tir(dev, qp, pd, init_attr, udata);
return err;
}
if (init_attr->create_flags & IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK) {
if (!MLX5_CAP_GEN(mdev, block_lb_mc)) {
mlx5_ib_dbg(dev, "block multicast loopback isn't supported\n");
return -EINVAL;
} else {
qp->flags |= MLX5_IB_QP_BLOCK_MULTICAST_LOOPBACK;
}
}
if (init_attr->create_flags &
(IB_QP_CREATE_CROSS_CHANNEL |
IB_QP_CREATE_MANAGED_SEND |
IB_QP_CREATE_MANAGED_RECV)) {
if (!MLX5_CAP_GEN(mdev, cd)) {
mlx5_ib_dbg(dev, "cross-channel isn't supported\n");
return -EINVAL;
}
if (init_attr->create_flags & IB_QP_CREATE_CROSS_CHANNEL)
qp->flags |= MLX5_IB_QP_CROSS_CHANNEL;
if (init_attr->create_flags & IB_QP_CREATE_MANAGED_SEND)
qp->flags |= MLX5_IB_QP_MANAGED_SEND;
if (init_attr->create_flags & IB_QP_CREATE_MANAGED_RECV)
qp->flags |= MLX5_IB_QP_MANAGED_RECV;
}
if (init_attr->qp_type == IB_QPT_UD &&
(init_attr->create_flags & IB_QP_CREATE_IPOIB_UD_LSO))
if (!MLX5_CAP_GEN(mdev, ipoib_basic_offloads)) {
mlx5_ib_dbg(dev, "ipoib UD lso qp isn't supported\n");
return -EOPNOTSUPP;
}
if (init_attr->create_flags & IB_QP_CREATE_SCATTER_FCS) {
if (init_attr->qp_type != IB_QPT_RAW_PACKET) {
mlx5_ib_dbg(dev, "Scatter FCS is supported only for Raw Packet QPs");
return -EOPNOTSUPP;
}
if (!MLX5_CAP_GEN(dev->mdev, eth_net_offloads) ||
!MLX5_CAP_ETH(dev->mdev, scatter_fcs)) {
mlx5_ib_dbg(dev, "Scatter FCS isn't supported\n");
return -EOPNOTSUPP;
}
qp->flags |= MLX5_IB_QP_CAP_SCATTER_FCS;
}
if (init_attr->sq_sig_type == IB_SIGNAL_ALL_WR)
qp->sq_signal_bits = MLX5_WQE_CTRL_CQ_UPDATE;
if (init_attr->create_flags & IB_QP_CREATE_CVLAN_STRIPPING) {
if (!(MLX5_CAP_GEN(dev->mdev, eth_net_offloads) &&
MLX5_CAP_ETH(dev->mdev, vlan_cap)) ||
(init_attr->qp_type != IB_QPT_RAW_PACKET))
return -EOPNOTSUPP;
qp->flags |= MLX5_IB_QP_CVLAN_STRIPPING;
}
if (pd && pd->uobject) {
if (ib_copy_from_udata(&ucmd, udata, sizeof(ucmd))) {
mlx5_ib_dbg(dev, "copy failed\n");
return -EFAULT;
}
err = get_qp_user_index(to_mucontext(pd->uobject->context),
&ucmd, udata->inlen, &uidx);
if (err)
return err;
qp->wq_sig = !!(ucmd.flags & MLX5_QP_FLAG_SIGNATURE);
qp->scat_cqe = !!(ucmd.flags & MLX5_QP_FLAG_SCATTER_CQE);
if (ucmd.flags & MLX5_QP_FLAG_TUNNEL_OFFLOADS) {
if (init_attr->qp_type != IB_QPT_RAW_PACKET ||
!tunnel_offload_supported(mdev)) {
mlx5_ib_dbg(dev, "Tunnel offload isn't supported\n");
return -EOPNOTSUPP;
}
qp->tunnel_offload_en = true;
}
if (init_attr->create_flags & IB_QP_CREATE_SOURCE_QPN) {
if (init_attr->qp_type != IB_QPT_UD ||
(MLX5_CAP_GEN(dev->mdev, port_type) !=
MLX5_CAP_PORT_TYPE_IB) ||
!mlx5_get_flow_namespace(dev->mdev, MLX5_FLOW_NAMESPACE_BYPASS)) {
mlx5_ib_dbg(dev, "Source QP option isn't supported\n");
return -EOPNOTSUPP;
}
qp->flags |= MLX5_IB_QP_UNDERLAY;
qp->underlay_qpn = init_attr->source_qpn;
}
} else {
qp->wq_sig = !!wq_signature;
}
base = (init_attr->qp_type == IB_QPT_RAW_PACKET ||
qp->flags & MLX5_IB_QP_UNDERLAY) ?
&qp->raw_packet_qp.rq.base :
&qp->trans_qp.base;
qp->has_rq = qp_has_rq(init_attr);
err = set_rq_size(dev, &init_attr->cap, qp->has_rq,
qp, (pd && pd->uobject) ? &ucmd : NULL);
if (err) {
mlx5_ib_dbg(dev, "err %d\n", err);
return err;
}
if (pd) {
if (pd->uobject) {
__u32 max_wqes =
1 << MLX5_CAP_GEN(mdev, log_max_qp_sz);
mlx5_ib_dbg(dev, "requested sq_wqe_count (%d)\n", ucmd.sq_wqe_count);
if (ucmd.rq_wqe_shift != qp->rq.wqe_shift ||
ucmd.rq_wqe_count != qp->rq.wqe_cnt) {
mlx5_ib_dbg(dev, "invalid rq params\n");
return -EINVAL;
}
if (ucmd.sq_wqe_count > max_wqes) {
mlx5_ib_dbg(dev, "requested sq_wqe_count (%d) > max allowed (%d)\n",
ucmd.sq_wqe_count, max_wqes);
return -EINVAL;
}
if (init_attr->create_flags &
mlx5_ib_create_qp_sqpn_qp1()) {
mlx5_ib_dbg(dev, "user-space is not allowed to create UD QPs spoofing as QP1\n");
return -EINVAL;
}
err = create_user_qp(dev, pd, qp, udata, init_attr, &in,
&resp, &inlen, base);
if (err)
mlx5_ib_dbg(dev, "err %d\n", err);
} else {
err = create_kernel_qp(dev, init_attr, qp, &in, &inlen,
base);
if (err)
mlx5_ib_dbg(dev, "err %d\n", err);
}
if (err)
return err;
} else {
in = kvzalloc(inlen, GFP_KERNEL);
if (!in)
return -ENOMEM;
qp->create_type = MLX5_QP_EMPTY;
}
if (is_sqp(init_attr->qp_type))
qp->port = init_attr->port_num;
qpc = MLX5_ADDR_OF(create_qp_in, in, qpc);
MLX5_SET(qpc, qpc, st, mlx5_st);
MLX5_SET(qpc, qpc, pm_state, MLX5_QP_PM_MIGRATED);
if (init_attr->qp_type != MLX5_IB_QPT_REG_UMR)
MLX5_SET(qpc, qpc, pd, to_mpd(pd ? pd : devr->p0)->pdn);
else
MLX5_SET(qpc, qpc, latency_sensitive, 1);
if (qp->wq_sig)
MLX5_SET(qpc, qpc, wq_signature, 1);
if (qp->flags & MLX5_IB_QP_BLOCK_MULTICAST_LOOPBACK)
MLX5_SET(qpc, qpc, block_lb_mc, 1);
if (qp->flags & MLX5_IB_QP_CROSS_CHANNEL)
MLX5_SET(qpc, qpc, cd_master, 1);
if (qp->flags & MLX5_IB_QP_MANAGED_SEND)
MLX5_SET(qpc, qpc, cd_slave_send, 1);
if (qp->flags & MLX5_IB_QP_MANAGED_RECV)
MLX5_SET(qpc, qpc, cd_slave_receive, 1);
if (qp->scat_cqe && is_connected(init_attr->qp_type)) {
int rcqe_sz;
int scqe_sz;
rcqe_sz = mlx5_ib_get_cqe_size(dev, init_attr->recv_cq);
scqe_sz = mlx5_ib_get_cqe_size(dev, init_attr->send_cq);
if (rcqe_sz == 128)
MLX5_SET(qpc, qpc, cs_res, MLX5_RES_SCAT_DATA64_CQE);
else
MLX5_SET(qpc, qpc, cs_res, MLX5_RES_SCAT_DATA32_CQE);
if (init_attr->sq_sig_type == IB_SIGNAL_ALL_WR) {
if (scqe_sz == 128)
MLX5_SET(qpc, qpc, cs_req, MLX5_REQ_SCAT_DATA64_CQE);
else
MLX5_SET(qpc, qpc, cs_req, MLX5_REQ_SCAT_DATA32_CQE);
}
}
if (qp->rq.wqe_cnt) {
MLX5_SET(qpc, qpc, log_rq_stride, qp->rq.wqe_shift - 4);
MLX5_SET(qpc, qpc, log_rq_size, ilog2(qp->rq.wqe_cnt));
}
MLX5_SET(qpc, qpc, rq_type, get_rx_type(qp, init_attr));
if (qp->sq.wqe_cnt) {
MLX5_SET(qpc, qpc, log_sq_size, ilog2(qp->sq.wqe_cnt));
} else {
MLX5_SET(qpc, qpc, no_sq, 1);
if (init_attr->srq &&
init_attr->srq->srq_type == IB_SRQT_TM)
MLX5_SET(qpc, qpc, offload_type,
MLX5_QPC_OFFLOAD_TYPE_RNDV);
}
/* Set default resources */
switch (init_attr->qp_type) {
case IB_QPT_XRC_TGT:
MLX5_SET(qpc, qpc, cqn_rcv, to_mcq(devr->c0)->mcq.cqn);
MLX5_SET(qpc, qpc, cqn_snd, to_mcq(devr->c0)->mcq.cqn);
MLX5_SET(qpc, qpc, srqn_rmpn_xrqn, to_msrq(devr->s0)->msrq.srqn);
MLX5_SET(qpc, qpc, xrcd, to_mxrcd(init_attr->xrcd)->xrcdn);
break;
case IB_QPT_XRC_INI:
MLX5_SET(qpc, qpc, cqn_rcv, to_mcq(devr->c0)->mcq.cqn);
MLX5_SET(qpc, qpc, xrcd, to_mxrcd(devr->x1)->xrcdn);
MLX5_SET(qpc, qpc, srqn_rmpn_xrqn, to_msrq(devr->s0)->msrq.srqn);
break;
default:
if (init_attr->srq) {
MLX5_SET(qpc, qpc, xrcd, to_mxrcd(devr->x0)->xrcdn);
MLX5_SET(qpc, qpc, srqn_rmpn_xrqn, to_msrq(init_attr->srq)->msrq.srqn);
} else {
MLX5_SET(qpc, qpc, xrcd, to_mxrcd(devr->x1)->xrcdn);
MLX5_SET(qpc, qpc, srqn_rmpn_xrqn, to_msrq(devr->s1)->msrq.srqn);
}
}
if (init_attr->send_cq)
MLX5_SET(qpc, qpc, cqn_snd, to_mcq(init_attr->send_cq)->mcq.cqn);
if (init_attr->recv_cq)
MLX5_SET(qpc, qpc, cqn_rcv, to_mcq(init_attr->recv_cq)->mcq.cqn);
MLX5_SET64(qpc, qpc, dbr_addr, qp->db.dma);
/* 0xffffff means we ask to work with cqe version 0 */
if (MLX5_CAP_GEN(mdev, cqe_version) == MLX5_CQE_VERSION_V1)
MLX5_SET(qpc, qpc, user_index, uidx);
/* we use IB_QP_CREATE_IPOIB_UD_LSO to indicates ipoib qp */
if (init_attr->qp_type == IB_QPT_UD &&
(init_attr->create_flags & IB_QP_CREATE_IPOIB_UD_LSO)) {
MLX5_SET(qpc, qpc, ulp_stateless_offload_mode, 1);
qp->flags |= MLX5_IB_QP_LSO;
}
if (init_attr->create_flags & IB_QP_CREATE_PCI_WRITE_END_PADDING) {
if (!MLX5_CAP_GEN(dev->mdev, end_pad)) {
mlx5_ib_dbg(dev, "scatter end padding is not supported\n");
err = -EOPNOTSUPP;
goto err;
} else if (init_attr->qp_type != IB_QPT_RAW_PACKET) {
MLX5_SET(qpc, qpc, end_padding_mode,
MLX5_WQ_END_PAD_MODE_ALIGN);
} else {
qp->flags |= MLX5_IB_QP_PCI_WRITE_END_PADDING;
}
}
if (inlen < 0) {
err = -EINVAL;
goto err;
}
if (init_attr->qp_type == IB_QPT_RAW_PACKET ||
qp->flags & MLX5_IB_QP_UNDERLAY) {
qp->raw_packet_qp.sq.ubuffer.buf_addr = ucmd.sq_buf_addr;
raw_packet_qp_copy_info(qp, &qp->raw_packet_qp);
err = create_raw_packet_qp(dev, qp, in, inlen, pd);
} else {
err = mlx5_core_create_qp(dev->mdev, &base->mqp, in, inlen);
}
if (err) {
mlx5_ib_dbg(dev, "create qp failed\n");
goto err_create;
}
kvfree(in);
base->container_mibqp = qp;
base->mqp.event = mlx5_ib_qp_event;
get_cqs(init_attr->qp_type, init_attr->send_cq, init_attr->recv_cq,
&send_cq, &recv_cq);
spin_lock_irqsave(&dev->reset_flow_resource_lock, flags);
mlx5_ib_lock_cqs(send_cq, recv_cq);
/* Maintain device to QPs access, needed for further handling via reset
* flow
*/
list_add_tail(&qp->qps_list, &dev->qp_list);
/* Maintain CQ to QPs access, needed for further handling via reset flow
*/
if (send_cq)
list_add_tail(&qp->cq_send_list, &send_cq->list_send_qp);
if (recv_cq)
list_add_tail(&qp->cq_recv_list, &recv_cq->list_recv_qp);
mlx5_ib_unlock_cqs(send_cq, recv_cq);
spin_unlock_irqrestore(&dev->reset_flow_resource_lock, flags);
return 0;
err_create:
if (qp->create_type == MLX5_QP_USER)
destroy_qp_user(dev, pd, qp, base);
else if (qp->create_type == MLX5_QP_KERNEL)
destroy_qp_kernel(dev, qp);
err:
kvfree(in);
return err;
}
| [[1610, "\tstruct mlx5_ib_create_qp_resp resp;\n"]] | [[1610, "struct mlx5_ib_create_qp_resp resp;"]] | [
"CVE-2018-20855"
] | [
"CWE-119"
] | 12 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": [
"struct mlx5_ib_create_qp_resp"
]
} |
12 | linux | https://github.com/torvalds/linux | drivers/infiniband/hw/mlx5/qp.c | 0625b4ba1a5d4703c7fb01c497bd6c156908af00 | IB/mlx5: Fix leaking stack memory to userspace
mlx5_ib_create_qp_resp was never initialized and only the first 4 bytes
were written.
Fixes: 41d902cb7c32 ("RDMA/mlx5: Fix definition of mlx5_ib_create_qp_resp")
Cc: <stable@vger.kernel.org>
Acked-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> | false | 0e311cfeb93cc5603699cbf680d9734f | create_qp_common | static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
struct ib_qp_init_attr *init_attr,
struct ib_udata *udata, struct mlx5_ib_qp *qp)
{
struct mlx5_ib_resources *devr = &dev->devr;
int inlen = MLX5_ST_SZ_BYTES(create_qp_in);
struct mlx5_core_dev *mdev = dev->mdev;
struct mlx5_ib_create_qp_resp resp = {};
struct mlx5_ib_cq *send_cq;
struct mlx5_ib_cq *recv_cq;
unsigned long flags;
u32 uidx = MLX5_IB_DEFAULT_UIDX;
struct mlx5_ib_create_qp ucmd;
struct mlx5_ib_qp_base *base;
int mlx5_st;
void *qpc;
u32 *in;
int err;
mutex_init(&qp->mutex);
spin_lock_init(&qp->sq.lock);
spin_lock_init(&qp->rq.lock);
mlx5_st = to_mlx5_st(init_attr->qp_type);
if (mlx5_st < 0)
return -EINVAL;
if (init_attr->rwq_ind_tbl) {
if (!udata)
return -ENOSYS;
err = create_rss_raw_qp_tir(dev, qp, pd, init_attr, udata);
return err;
}
if (init_attr->create_flags & IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK) {
if (!MLX5_CAP_GEN(mdev, block_lb_mc)) {
mlx5_ib_dbg(dev, "block multicast loopback isn't supported\n");
return -EINVAL;
} else {
qp->flags |= MLX5_IB_QP_BLOCK_MULTICAST_LOOPBACK;
}
}
if (init_attr->create_flags &
(IB_QP_CREATE_CROSS_CHANNEL |
IB_QP_CREATE_MANAGED_SEND |
IB_QP_CREATE_MANAGED_RECV)) {
if (!MLX5_CAP_GEN(mdev, cd)) {
mlx5_ib_dbg(dev, "cross-channel isn't supported\n");
return -EINVAL;
}
if (init_attr->create_flags & IB_QP_CREATE_CROSS_CHANNEL)
qp->flags |= MLX5_IB_QP_CROSS_CHANNEL;
if (init_attr->create_flags & IB_QP_CREATE_MANAGED_SEND)
qp->flags |= MLX5_IB_QP_MANAGED_SEND;
if (init_attr->create_flags & IB_QP_CREATE_MANAGED_RECV)
qp->flags |= MLX5_IB_QP_MANAGED_RECV;
}
if (init_attr->qp_type == IB_QPT_UD &&
(init_attr->create_flags & IB_QP_CREATE_IPOIB_UD_LSO))
if (!MLX5_CAP_GEN(mdev, ipoib_basic_offloads)) {
mlx5_ib_dbg(dev, "ipoib UD lso qp isn't supported\n");
return -EOPNOTSUPP;
}
if (init_attr->create_flags & IB_QP_CREATE_SCATTER_FCS) {
if (init_attr->qp_type != IB_QPT_RAW_PACKET) {
mlx5_ib_dbg(dev, "Scatter FCS is supported only for Raw Packet QPs");
return -EOPNOTSUPP;
}
if (!MLX5_CAP_GEN(dev->mdev, eth_net_offloads) ||
!MLX5_CAP_ETH(dev->mdev, scatter_fcs)) {
mlx5_ib_dbg(dev, "Scatter FCS isn't supported\n");
return -EOPNOTSUPP;
}
qp->flags |= MLX5_IB_QP_CAP_SCATTER_FCS;
}
if (init_attr->sq_sig_type == IB_SIGNAL_ALL_WR)
qp->sq_signal_bits = MLX5_WQE_CTRL_CQ_UPDATE;
if (init_attr->create_flags & IB_QP_CREATE_CVLAN_STRIPPING) {
if (!(MLX5_CAP_GEN(dev->mdev, eth_net_offloads) &&
MLX5_CAP_ETH(dev->mdev, vlan_cap)) ||
(init_attr->qp_type != IB_QPT_RAW_PACKET))
return -EOPNOTSUPP;
qp->flags |= MLX5_IB_QP_CVLAN_STRIPPING;
}
if (pd && pd->uobject) {
if (ib_copy_from_udata(&ucmd, udata, sizeof(ucmd))) {
mlx5_ib_dbg(dev, "copy failed\n");
return -EFAULT;
}
err = get_qp_user_index(to_mucontext(pd->uobject->context),
&ucmd, udata->inlen, &uidx);
if (err)
return err;
qp->wq_sig = !!(ucmd.flags & MLX5_QP_FLAG_SIGNATURE);
qp->scat_cqe = !!(ucmd.flags & MLX5_QP_FLAG_SCATTER_CQE);
if (ucmd.flags & MLX5_QP_FLAG_TUNNEL_OFFLOADS) {
if (init_attr->qp_type != IB_QPT_RAW_PACKET ||
!tunnel_offload_supported(mdev)) {
mlx5_ib_dbg(dev, "Tunnel offload isn't supported\n");
return -EOPNOTSUPP;
}
qp->tunnel_offload_en = true;
}
if (init_attr->create_flags & IB_QP_CREATE_SOURCE_QPN) {
if (init_attr->qp_type != IB_QPT_UD ||
(MLX5_CAP_GEN(dev->mdev, port_type) !=
MLX5_CAP_PORT_TYPE_IB) ||
!mlx5_get_flow_namespace(dev->mdev, MLX5_FLOW_NAMESPACE_BYPASS)) {
mlx5_ib_dbg(dev, "Source QP option isn't supported\n");
return -EOPNOTSUPP;
}
qp->flags |= MLX5_IB_QP_UNDERLAY;
qp->underlay_qpn = init_attr->source_qpn;
}
} else {
qp->wq_sig = !!wq_signature;
}
base = (init_attr->qp_type == IB_QPT_RAW_PACKET ||
qp->flags & MLX5_IB_QP_UNDERLAY) ?
&qp->raw_packet_qp.rq.base :
&qp->trans_qp.base;
qp->has_rq = qp_has_rq(init_attr);
err = set_rq_size(dev, &init_attr->cap, qp->has_rq,
qp, (pd && pd->uobject) ? &ucmd : NULL);
if (err) {
mlx5_ib_dbg(dev, "err %d\n", err);
return err;
}
if (pd) {
if (pd->uobject) {
__u32 max_wqes =
1 << MLX5_CAP_GEN(mdev, log_max_qp_sz);
mlx5_ib_dbg(dev, "requested sq_wqe_count (%d)\n", ucmd.sq_wqe_count);
if (ucmd.rq_wqe_shift != qp->rq.wqe_shift ||
ucmd.rq_wqe_count != qp->rq.wqe_cnt) {
mlx5_ib_dbg(dev, "invalid rq params\n");
return -EINVAL;
}
if (ucmd.sq_wqe_count > max_wqes) {
mlx5_ib_dbg(dev, "requested sq_wqe_count (%d) > max allowed (%d)\n",
ucmd.sq_wqe_count, max_wqes);
return -EINVAL;
}
if (init_attr->create_flags &
mlx5_ib_create_qp_sqpn_qp1()) {
mlx5_ib_dbg(dev, "user-space is not allowed to create UD QPs spoofing as QP1\n");
return -EINVAL;
}
err = create_user_qp(dev, pd, qp, udata, init_attr, &in,
&resp, &inlen, base);
if (err)
mlx5_ib_dbg(dev, "err %d\n", err);
} else {
err = create_kernel_qp(dev, init_attr, qp, &in, &inlen,
base);
if (err)
mlx5_ib_dbg(dev, "err %d\n", err);
}
if (err)
return err;
} else {
in = kvzalloc(inlen, GFP_KERNEL);
if (!in)
return -ENOMEM;
qp->create_type = MLX5_QP_EMPTY;
}
if (is_sqp(init_attr->qp_type))
qp->port = init_attr->port_num;
qpc = MLX5_ADDR_OF(create_qp_in, in, qpc);
MLX5_SET(qpc, qpc, st, mlx5_st);
MLX5_SET(qpc, qpc, pm_state, MLX5_QP_PM_MIGRATED);
if (init_attr->qp_type != MLX5_IB_QPT_REG_UMR)
MLX5_SET(qpc, qpc, pd, to_mpd(pd ? pd : devr->p0)->pdn);
else
MLX5_SET(qpc, qpc, latency_sensitive, 1);
if (qp->wq_sig)
MLX5_SET(qpc, qpc, wq_signature, 1);
if (qp->flags & MLX5_IB_QP_BLOCK_MULTICAST_LOOPBACK)
MLX5_SET(qpc, qpc, block_lb_mc, 1);
if (qp->flags & MLX5_IB_QP_CROSS_CHANNEL)
MLX5_SET(qpc, qpc, cd_master, 1);
if (qp->flags & MLX5_IB_QP_MANAGED_SEND)
MLX5_SET(qpc, qpc, cd_slave_send, 1);
if (qp->flags & MLX5_IB_QP_MANAGED_RECV)
MLX5_SET(qpc, qpc, cd_slave_receive, 1);
if (qp->scat_cqe && is_connected(init_attr->qp_type)) {
int rcqe_sz;
int scqe_sz;
rcqe_sz = mlx5_ib_get_cqe_size(dev, init_attr->recv_cq);
scqe_sz = mlx5_ib_get_cqe_size(dev, init_attr->send_cq);
if (rcqe_sz == 128)
MLX5_SET(qpc, qpc, cs_res, MLX5_RES_SCAT_DATA64_CQE);
else
MLX5_SET(qpc, qpc, cs_res, MLX5_RES_SCAT_DATA32_CQE);
if (init_attr->sq_sig_type == IB_SIGNAL_ALL_WR) {
if (scqe_sz == 128)
MLX5_SET(qpc, qpc, cs_req, MLX5_REQ_SCAT_DATA64_CQE);
else
MLX5_SET(qpc, qpc, cs_req, MLX5_REQ_SCAT_DATA32_CQE);
}
}
if (qp->rq.wqe_cnt) {
MLX5_SET(qpc, qpc, log_rq_stride, qp->rq.wqe_shift - 4);
MLX5_SET(qpc, qpc, log_rq_size, ilog2(qp->rq.wqe_cnt));
}
MLX5_SET(qpc, qpc, rq_type, get_rx_type(qp, init_attr));
if (qp->sq.wqe_cnt) {
MLX5_SET(qpc, qpc, log_sq_size, ilog2(qp->sq.wqe_cnt));
} else {
MLX5_SET(qpc, qpc, no_sq, 1);
if (init_attr->srq &&
init_attr->srq->srq_type == IB_SRQT_TM)
MLX5_SET(qpc, qpc, offload_type,
MLX5_QPC_OFFLOAD_TYPE_RNDV);
}
/* Set default resources */
switch (init_attr->qp_type) {
case IB_QPT_XRC_TGT:
MLX5_SET(qpc, qpc, cqn_rcv, to_mcq(devr->c0)->mcq.cqn);
MLX5_SET(qpc, qpc, cqn_snd, to_mcq(devr->c0)->mcq.cqn);
MLX5_SET(qpc, qpc, srqn_rmpn_xrqn, to_msrq(devr->s0)->msrq.srqn);
MLX5_SET(qpc, qpc, xrcd, to_mxrcd(init_attr->xrcd)->xrcdn);
break;
case IB_QPT_XRC_INI:
MLX5_SET(qpc, qpc, cqn_rcv, to_mcq(devr->c0)->mcq.cqn);
MLX5_SET(qpc, qpc, xrcd, to_mxrcd(devr->x1)->xrcdn);
MLX5_SET(qpc, qpc, srqn_rmpn_xrqn, to_msrq(devr->s0)->msrq.srqn);
break;
default:
if (init_attr->srq) {
MLX5_SET(qpc, qpc, xrcd, to_mxrcd(devr->x0)->xrcdn);
MLX5_SET(qpc, qpc, srqn_rmpn_xrqn, to_msrq(init_attr->srq)->msrq.srqn);
} else {
MLX5_SET(qpc, qpc, xrcd, to_mxrcd(devr->x1)->xrcdn);
MLX5_SET(qpc, qpc, srqn_rmpn_xrqn, to_msrq(devr->s1)->msrq.srqn);
}
}
if (init_attr->send_cq)
MLX5_SET(qpc, qpc, cqn_snd, to_mcq(init_attr->send_cq)->mcq.cqn);
if (init_attr->recv_cq)
MLX5_SET(qpc, qpc, cqn_rcv, to_mcq(init_attr->recv_cq)->mcq.cqn);
MLX5_SET64(qpc, qpc, dbr_addr, qp->db.dma);
/* 0xffffff means we ask to work with cqe version 0 */
if (MLX5_CAP_GEN(mdev, cqe_version) == MLX5_CQE_VERSION_V1)
MLX5_SET(qpc, qpc, user_index, uidx);
/* we use IB_QP_CREATE_IPOIB_UD_LSO to indicates ipoib qp */
if (init_attr->qp_type == IB_QPT_UD &&
(init_attr->create_flags & IB_QP_CREATE_IPOIB_UD_LSO)) {
MLX5_SET(qpc, qpc, ulp_stateless_offload_mode, 1);
qp->flags |= MLX5_IB_QP_LSO;
}
if (init_attr->create_flags & IB_QP_CREATE_PCI_WRITE_END_PADDING) {
if (!MLX5_CAP_GEN(dev->mdev, end_pad)) {
mlx5_ib_dbg(dev, "scatter end padding is not supported\n");
err = -EOPNOTSUPP;
goto err;
} else if (init_attr->qp_type != IB_QPT_RAW_PACKET) {
MLX5_SET(qpc, qpc, end_padding_mode,
MLX5_WQ_END_PAD_MODE_ALIGN);
} else {
qp->flags |= MLX5_IB_QP_PCI_WRITE_END_PADDING;
}
}
if (inlen < 0) {
err = -EINVAL;
goto err;
}
if (init_attr->qp_type == IB_QPT_RAW_PACKET ||
qp->flags & MLX5_IB_QP_UNDERLAY) {
qp->raw_packet_qp.sq.ubuffer.buf_addr = ucmd.sq_buf_addr;
raw_packet_qp_copy_info(qp, &qp->raw_packet_qp);
err = create_raw_packet_qp(dev, qp, in, inlen, pd);
} else {
err = mlx5_core_create_qp(dev->mdev, &base->mqp, in, inlen);
}
if (err) {
mlx5_ib_dbg(dev, "create qp failed\n");
goto err_create;
}
kvfree(in);
base->container_mibqp = qp;
base->mqp.event = mlx5_ib_qp_event;
get_cqs(init_attr->qp_type, init_attr->send_cq, init_attr->recv_cq,
&send_cq, &recv_cq);
spin_lock_irqsave(&dev->reset_flow_resource_lock, flags);
mlx5_ib_lock_cqs(send_cq, recv_cq);
/* Maintain device to QPs access, needed for further handling via reset
* flow
*/
list_add_tail(&qp->qps_list, &dev->qp_list);
/* Maintain CQ to QPs access, needed for further handling via reset flow
*/
if (send_cq)
list_add_tail(&qp->cq_send_list, &send_cq->list_send_qp);
if (recv_cq)
list_add_tail(&qp->cq_recv_list, &recv_cq->list_recv_qp);
mlx5_ib_unlock_cqs(send_cq, recv_cq);
spin_unlock_irqrestore(&dev->reset_flow_resource_lock, flags);
return 0;
err_create:
if (qp->create_type == MLX5_QP_USER)
destroy_qp_user(dev, pd, qp, base);
else if (qp->create_type == MLX5_QP_KERNEL)
destroy_qp_kernel(dev, qp);
err:
kvfree(in);
return err;
}
| [[1610, "\tstruct mlx5_ib_create_qp_resp resp = {};\n"]] | [[1610, "struct mlx5_ib_create_qp_resp resp = {};"]] | [
"CVE-2018-20855"
] | [
"CWE-119"
] | 12 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": [
"struct mlx5_ib_create_qp_resp"
]
} |
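The mlx5 create_qp_common() pair above differs by two characters: `resp = {}` zero-fills the response struct before create_user_qp() partially populates it and it is copied back to userspace, so no stale stack bytes escape. A hedged sketch of the idiom with a hypothetical struct (not the mlx5 layout; `= {}` is the kernel/GNU zero-initializer):

/* Sketch: zero-init an on-stack struct that will reach userspace.
 * Hypothetical struct; mirrors the `resp = {}` fix above. */
struct demo_resp {
	unsigned int status;
	unsigned int reserved[7];   /* would hold stale stack bytes if uninitialized */
};

void build_resp(struct demo_resp *out)
{
	struct demo_resp resp = {}; /* every field zeroed up front */
	resp.status = 1;            /* only fields we mean to set are non-zero */
	*out = resp;
}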
13 | linux | https://github.com/torvalds/linux | fs/cifs/smbencrypt.c | 06deeec77a5a689cc94b21a8a91a76e42176685d | cifs: Fix smbencrypt() to stop pointing a scatterlist at the stack
smbencrypt() points a scatterlist to the stack, which breaks if
CONFIG_VMAP_STACK=y.
Fix it by switching to crypto_cipher_encrypt_one(). The new code
should be considerably faster as an added benefit.
This code is nearly identical to some code that Eric Biggers
suggested.
Cc: stable@vger.kernel.org # 4.9 only
Reported-by: Eric Biggers <ebiggers3@gmail.com>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Acked-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <smfrench@gmail.com> | true | a89228b9e70e6c1f4dc50bcf2932a2f1 | smbhash | static int
smbhash(unsigned char *out, const unsigned char *in, unsigned char *key)
{
int rc;
unsigned char key2[8];
struct crypto_skcipher *tfm_des;
struct scatterlist sgin, sgout;
struct skcipher_request *req;
str_to_key(key, key2);
tfm_des = crypto_alloc_skcipher("ecb(des)", 0, CRYPTO_ALG_ASYNC);
if (IS_ERR(tfm_des)) {
rc = PTR_ERR(tfm_des);
cifs_dbg(VFS, "could not allocate des crypto API\n");
goto smbhash_err;
}
req = skcipher_request_alloc(tfm_des, GFP_KERNEL);
if (!req) {
rc = -ENOMEM;
cifs_dbg(VFS, "could not allocate des crypto API\n");
goto smbhash_free_skcipher;
}
crypto_skcipher_setkey(tfm_des, key2, 8);
sg_init_one(&sgin, in, 8);
sg_init_one(&sgout, out, 8);
skcipher_request_set_callback(req, 0, NULL, NULL);
skcipher_request_set_crypt(req, &sgin, &sgout, 8, NULL);
rc = crypto_skcipher_encrypt(req);
if (rc)
cifs_dbg(VFS, "could not encrypt crypt key rc: %d\n", rc);
skcipher_request_free(req);
smbhash_free_skcipher:
crypto_free_skcipher(tfm_des);
smbhash_err:
return rc;
}
| [[72, "\tint rc;\n"], [74, "\tstruct crypto_skcipher *tfm_des;\n"], [75, "\tstruct scatterlist sgin, sgout;\n"], [76, "\tstruct skcipher_request *req;\n"], [80, "\ttfm_des = crypto_alloc_skcipher(\"ecb(des)\", 0, CRYPTO_ALG_ASYNC);\n"], [82, "\t\trc = PTR_ERR(tfm_des);\n"], [83, "\t\tcifs_dbg(VFS, \"could not allocate des crypto API\\n\");\n"], [84, "\t\tgoto smbhash_err;\n"], [85, "\t}\n"], [86, "\n"], [87, "\treq = skcipher_request_alloc(tfm_des, GFP_KERNEL);\n"], [88, "\tif (!req) {\n"], [89, "\t\trc = -ENOMEM;\n"], [91, "\t\tgoto smbhash_free_skcipher;\n"], [94, "\tcrypto_skcipher_setkey(tfm_des, key2, 8);\n"], [95, "\n"], [96, "\tsg_init_one(&sgin, in, 8);\n"], [97, "\tsg_init_one(&sgout, out, 8);\n"], [99, "\tskcipher_request_set_callback(req, 0, NULL, NULL);\n"], [100, "\tskcipher_request_set_crypt(req, &sgin, &sgout, 8, NULL);\n"], [101, "\n"], [102, "\trc = crypto_skcipher_encrypt(req);\n"], [103, "\tif (rc)\n"], [104, "\t\tcifs_dbg(VFS, \"could not encrypt crypt key rc: %d\\n\", rc);\n"], [105, "\n"], [106, "\tskcipher_request_free(req);\n"], [107, "\n"], [108, "smbhash_free_skcipher:\n"], [109, "\tcrypto_free_skcipher(tfm_des);\n"], [110, "smbhash_err:\n"], [111, "\treturn rc;\n"]] | [[72, "int rc;"], [74, "struct crypto_skcipher *tfm_des;"], [75, "struct scatterlist sgin, sgout;"], [76, "struct skcipher_request *req;"], [80, "tfm_des = crypto_alloc_skcipher(\"ecb(des)\", 0, CRYPTO_ALG_ASYNC);"], [82, "rc = PTR_ERR(tfm_des);"], [83, "cifs_dbg(VFS, \"could not allocate des crypto API\\n\");"], [84, "goto smbhash_err;"], [85, "\t}\n"], [86, "\n"], [87, "req = skcipher_request_alloc(tfm_des, GFP_KERNEL);"], [88, "if (!req)"], [89, "rc = -ENOMEM;"], [91, "goto smbhash_free_skcipher;"], [94, "crypto_skcipher_setkey(tfm_des, key2, 8);"], [95, "\n"], [96, "sg_init_one(&sgin, in, 8);"], [97, "sg_init_one(&sgout, out, 8);"], [99, "skcipher_request_set_callback(req, 0, NULL, NULL);"], [100, "skcipher_request_set_crypt(req, &sgin, &sgout, 8, NULL);"], [101, "\n"], [102, "rc = crypto_skcipher_encrypt(req);"], [103, "if (rc)"], [104, "cifs_dbg(VFS, \"could not encrypt crypt key rc: %d\\n\", rc);"], [105, "\n"], [106, "skcipher_request_free(req);"], [107, "\n"], [108, "smbhash_free_skcipher:"], [109, "crypto_free_skcipher(tfm_des);"], [110, "smbhash_err:"], [111, "return rc;"]] | [
"CVE-2016-10154"
] | [
"CWE-119"
] | 14 | {
"Execution Environment": [
"CONFIG_VMAP_STACK"
],
"Explanation": null,
"External Function": [
"sg_init_one"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
14 | linux | https://github.com/torvalds/linux | fs/cifs/smbencrypt.c | 06deeec77a5a689cc94b21a8a91a76e42176685d | cifs: Fix smbencrypt() to stop pointing a scatterlist at the stack
smbencrypt() points a scatterlist to the stack, which breaks if
CONFIG_VMAP_STACK=y.
Fix it by switching to crypto_cipher_encrypt_one(). The new code
should be considerably faster as an added benefit.
This code is nearly identical to some code that Eric Biggers
suggested.
Cc: stable@vger.kernel.org # 4.9 only
Reported-by: Eric Biggers <ebiggers3@gmail.com>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Acked-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <smfrench@gmail.com> | false | e8cc2fd6bec0fed306d2adce2c782f6f | smbhash | static int
smbhash(unsigned char *out, const unsigned char *in, unsigned char *key)
{
unsigned char key2[8];
struct crypto_cipher *tfm_des;
str_to_key(key, key2);
tfm_des = crypto_alloc_cipher("des", 0, 0);
if (IS_ERR(tfm_des)) {
cifs_dbg(VFS, "could not allocate des crypto API\n");
return PTR_ERR(tfm_des);
}
crypto_cipher_setkey(tfm_des, key2, 8);
crypto_cipher_encrypt_one(tfm_des, out, in);
crypto_free_cipher(tfm_des);
return 0;
}
| [[73, "\tstruct crypto_cipher *tfm_des;\n"], [77, "\ttfm_des = crypto_alloc_cipher(\"des\", 0, 0);\n"], [80, "\t\treturn PTR_ERR(tfm_des);\n"], [83, "\tcrypto_cipher_setkey(tfm_des, key2, 8);\n"], [84, "\tcrypto_cipher_encrypt_one(tfm_des, out, in);\n"], [85, "\tcrypto_free_cipher(tfm_des);\n"], [87, "\treturn 0;\n"]] | [[73, "struct crypto_cipher *tfm_des;"], [77, "tfm_des = crypto_alloc_cipher(\"des\", 0, 0);"], [80, "return PTR_ERR(tfm_des);"], [83, "crypto_cipher_setkey(tfm_des, key2, 8);"], [84, "crypto_cipher_encrypt_one(tfm_des, out, in);"], [85, "crypto_free_cipher(tfm_des);"], [87, "return 0;"]] | [
"CVE-2016-10154"
] | [
"CWE-119"
] | 14 | {
"Execution Environment": [
"CONFIG_VMAP_STACK"
],
"Explanation": null,
"External Function": [
"sg_init_one"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
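In the smbhash() pair above, the vulnerable version wraps two 8-byte stack buffers in scatterlists, which requires directly mapped addresses and so breaks on a vmalloc'ed stack; the fix uses the single-block cipher interface, which takes plain pointers. A trimmed fragment of that interface (error handling elided; it mirrors the fixed record, so treat it as illustrative rather than standalone):

/* Sketch: one-block encryption without scatterlists. crypto_cipher_*
 * is the in-kernel single-block API used by the fix above. */
struct crypto_cipher *tfm = crypto_alloc_cipher("des", 0, 0);
if (!IS_ERR(tfm)) {
	crypto_cipher_setkey(tfm, key, 8);       /* 8-byte DES key */
	crypto_cipher_encrypt_one(tfm, out, in); /* plain pointers: stack-safe */
	crypto_free_cipher(tfm);
}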
15 | linux | https://github.com/torvalds/linux | drivers/net/mlx4/port.c | 0926f91083f34d047abc74f1ca4fa6a9c161f7db | mlx4_en: Fix out of bounds array access
When searching for a free entry in either mlx4_register_vlan() or
mlx4_register_mac(), and there is no free entry, the loop terminates without
updating the local variable free, thus causing an out-of-bounds array access. Fix
this by adding a proper check outside the loop.
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net> | true | 0bb0fd17cb0d592e6c0241114adba3b5 | mlx4_register_mac | int mlx4_register_mac(struct mlx4_dev *dev, u8 port, u64 mac, int *index)
{
struct mlx4_mac_table *table = &mlx4_priv(dev)->port[port].mac_table;
int i, err = 0;
int free = -1;
mlx4_dbg(dev, "Registering MAC: 0x%llx\n", (unsigned long long) mac);
mutex_lock(&table->mutex);
for (i = 0; i < MLX4_MAX_MAC_NUM - 1; i++) {
if (free < 0 && !table->refs[i]) {
free = i;
continue;
}
if (mac == (MLX4_MAC_MASK & be64_to_cpu(table->entries[i]))) {
/* MAC already registered, increase refernce count */
*index = i;
++table->refs[i];
goto out;
}
}
mlx4_dbg(dev, "Free MAC index is %d\n", free);
if (table->total == table->max) {
/* No free mac entries */
err = -ENOSPC;
goto out;
}
/* Register new MAC */
table->refs[free] = 1;
table->entries[free] = cpu_to_be64(mac | MLX4_MAC_VALID);
err = mlx4_set_port_mac_table(dev, port, table->entries);
if (unlikely(err)) {
mlx4_err(dev, "Failed adding MAC: 0x%llx\n", (unsigned long long) mac);
table->refs[free] = 0;
table->entries[free] = 0;
goto out;
}
*index = free;
++table->total;
out:
mutex_unlock(&table->mutex);
return err;
}
| [] | [] | [
"CVE-2010-5332"
] | [
"CWE-119"
] | 17 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"mlx4_priv"
],
"Function Argument": [
"dev",
"port"
],
"Globals": [
"MLX4_MAX_MAC_NUM"
],
"Type Execution Declaration": []
} |
16 | linux | https://github.com/torvalds/linux | drivers/net/mlx4/port.c | 0926f91083f34d047abc74f1ca4fa6a9c161f7db | mlx4_en: Fix out of bounds array access
When searching for a free entry in either mlx4_register_vlan() or
mlx4_register_mac(), and there is no free entry, the loop terminates without
updating the local variable free, thus causing an out-of-bounds array access. Fix
this by adding a proper check outside the loop.
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net> | true | b34d08283adbc51bc2a8ac9fef1d47d8 | mlx4_register_vlan | int mlx4_register_vlan(struct mlx4_dev *dev, u8 port, u16 vlan, int *index)
{
struct mlx4_vlan_table *table = &mlx4_priv(dev)->port[port].vlan_table;
int i, err = 0;
int free = -1;
mutex_lock(&table->mutex);
for (i = MLX4_VLAN_REGULAR; i < MLX4_MAX_VLAN_NUM; i++) {
if (free < 0 && (table->refs[i] == 0)) {
free = i;
continue;
}
if (table->refs[i] &&
(vlan == (MLX4_VLAN_MASK &
be32_to_cpu(table->entries[i])))) {
/* Vlan already registered, increase refernce count */
*index = i;
++table->refs[i];
goto out;
}
}
if (table->total == table->max) {
/* No free vlan entries */
err = -ENOSPC;
goto out;
}
/* Register new MAC */
table->refs[free] = 1;
table->entries[free] = cpu_to_be32(vlan | MLX4_VLAN_VALID);
err = mlx4_set_port_vlan_table(dev, port, table->entries);
if (unlikely(err)) {
mlx4_warn(dev, "Failed adding vlan: %u\n", vlan);
table->refs[free] = 0;
table->entries[free] = 0;
goto out;
}
*index = free;
++table->total;
out:
mutex_unlock(&table->mutex);
return err;
}
| [] | [] | [
"CVE-2010-5332"
] | [
"CWE-119"
] | 18 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"mlx4_priv",
"be32_to_cpu",
"cpu_to_be32",
"mlx4_set_port_vlan_table",
"mlx4_warn"
],
"Function Argument": [
"dev",
"port",
"vlan",
"index"
],
"Globals": [
"MLX4_VLAN_REGULAR",
"MLX4_MAX_VLAN_NUM",
"MLX4_VLAN_MASK",
"MLX4_VLAN_VALID",
"ENOSPC",
"ENOMEM"
],
"Type Execution Declaration": []
} |
17 | linux | https://github.com/torvalds/linux | drivers/net/mlx4/port.c | 0926f91083f34d047abc74f1ca4fa6a9c161f7db | mlx4_en: Fix out of bounds array access
When searching for a free entry in either mlx4_register_vlan() or
mlx4_register_mac(), and there is no free entry, the loop terminates without
updating the local variable free, thus causing an out-of-bounds array access. Fix
this by adding a proper check outside the loop.
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net> | false | 5e384db2b5bcf28504943915f4e0feff | mlx4_register_mac | int mlx4_register_mac(struct mlx4_dev *dev, u8 port, u64 mac, int *index)
{
struct mlx4_mac_table *table = &mlx4_priv(dev)->port[port].mac_table;
int i, err = 0;
int free = -1;
mlx4_dbg(dev, "Registering MAC: 0x%llx\n", (unsigned long long) mac);
mutex_lock(&table->mutex);
for (i = 0; i < MLX4_MAX_MAC_NUM - 1; i++) {
if (free < 0 && !table->refs[i]) {
free = i;
continue;
}
if (mac == (MLX4_MAC_MASK & be64_to_cpu(table->entries[i]))) {
/* MAC already registered, increase refernce count */
*index = i;
++table->refs[i];
goto out;
}
}
if (free < 0) {
err = -ENOMEM;
goto out;
}
mlx4_dbg(dev, "Free MAC index is %d\n", free);
if (table->total == table->max) {
/* No free mac entries */
err = -ENOSPC;
goto out;
}
/* Register new MAC */
table->refs[free] = 1;
table->entries[free] = cpu_to_be64(mac | MLX4_MAC_VALID);
err = mlx4_set_port_mac_table(dev, port, table->entries);
if (unlikely(err)) {
mlx4_err(dev, "Failed adding MAC: 0x%llx\n", (unsigned long long) mac);
table->refs[free] = 0;
table->entries[free] = 0;
goto out;
}
*index = free;
++table->total;
out:
mutex_unlock(&table->mutex);
return err;
}
| [[114, "\n"], [115, "\tif (free < 0) {\n"], [116, "\t\terr = -ENOMEM;\n"], [117, "\t\tgoto out;\n"], [118, "\t}\n"], [119, "\n"]] | [[114, "\n"], [115, "if (free < 0)"], [116, "err = -ENOMEM;"], [117, "goto out;"], [118, "\t}\n"], [119, "\n"]] | [
"CVE-2010-5332"
] | [
"CWE-119"
] | 17 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"mlx4_priv"
],
"Function Argument": [
"dev",
"port"
],
"Globals": [
"MLX4_MAX_MAC_NUM"
],
"Type Execution Declaration": []
} |
18 | linux | https://github.com/torvalds/linux | drivers/net/mlx4/port.c | 0926f91083f34d047abc74f1ca4fa6a9c161f7db | mlx4_en: Fix out of bounds array access
When searching for a free entry in either mlx4_register_vlan() or
mlx4_register_mac(), and there is no free entry, the loop terminates without
updating the local variable free, thus causing an out-of-bounds array access. Fix
this by adding a proper check outside the loop.
Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net> | false | d2d908a7f0431e5675baba44caaafa0c | mlx4_register_vlan | int mlx4_register_vlan(struct mlx4_dev *dev, u8 port, u16 vlan, int *index)
{
struct mlx4_vlan_table *table = &mlx4_priv(dev)->port[port].vlan_table;
int i, err = 0;
int free = -1;
mutex_lock(&table->mutex);
for (i = MLX4_VLAN_REGULAR; i < MLX4_MAX_VLAN_NUM; i++) {
if (free < 0 && (table->refs[i] == 0)) {
free = i;
continue;
}
if (table->refs[i] &&
(vlan == (MLX4_VLAN_MASK &
be32_to_cpu(table->entries[i])))) {
/* Vlan already registered, increase refernce count */
*index = i;
++table->refs[i];
goto out;
}
}
if (free < 0) {
err = -ENOMEM;
goto out;
}
if (table->total == table->max) {
/* No free vlan entries */
err = -ENOSPC;
goto out;
}
/* Register new MAC */
table->refs[free] = 1;
table->entries[free] = cpu_to_be32(vlan | MLX4_VLAN_VALID);
err = mlx4_set_port_vlan_table(dev, port, table->entries);
if (unlikely(err)) {
mlx4_warn(dev, "Failed adding vlan: %u\n", vlan);
table->refs[free] = 0;
table->entries[free] = 0;
goto out;
}
*index = free;
++table->total;
out:
mutex_unlock(&table->mutex);
return err;
}
| [[214, "\tif (free < 0) {\n"], [215, "\t\terr = -ENOMEM;\n"], [216, "\t\tgoto out;\n"], [217, "\t}\n"], [218, "\n"]] | [[214, "if (free < 0)"], [215, "err = -ENOMEM;"], [216, "goto out;"], [217, "\t}\n"], [218, "\n"]] | [
"CVE-2010-5332"
] | [
"CWE-119"
] | 18 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"mlx4_priv",
"be32_to_cpu",
"cpu_to_be32",
"mlx4_set_port_vlan_table",
"mlx4_warn"
],
"Function Argument": [
"dev",
"port",
"vlan",
"index"
],
"Globals": [
"MLX4_VLAN_REGULAR",
"MLX4_MAX_VLAN_NUM",
"MLX4_VLAN_MASK",
"MLX4_VLAN_VALID",
"ENOSPC",
"ENOMEM"
],
"Type Execution Declaration": []
} |
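All four mlx4 records above share one scan pattern: free starts at -1 and is only updated when an empty slot is seen, so a full table leaves it at -1 and table->refs[free] then writes one element before the array. The fix is a `free < 0` guard after the loop. A standalone sketch of the pattern with a hypothetical table:

/* Sketch: guard a "find free slot" scan against the no-slot case.
 * Hypothetical fixed-size table; mirrors the `if (free < 0)` fix above. */
#define NSLOTS 8

int register_entry(int refs[NSLOTS], int entries[NSLOTS], int value)
{
	int i, free = -1;

	for (i = 0; i < NSLOTS; i++) {
		if (free < 0 && refs[i] == 0)
			free = i;          /* remember first free slot */
		if (refs[i] && entries[i] == value)
			return i;          /* already registered */
	}
	if (free < 0)
		return -1;                 /* table full: refs[-1] would corrupt memory */
	refs[free] = 1;
	entries[free] = value;
	return free;
}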
19 | linux | https://github.com/torvalds/linux | drivers/nvme/target/fc.c | 0c319d3a144d4b8f1ea2047fd614d2149b68f889 | nvmet-fc: ensure target queue id within range.
When searching for queue ids, ensure they are within the expected range.
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk> | true | c05ae774ee57b0ad442f8c36f38916df | nvmet_fc_find_target_queue | static struct nvmet_fc_tgt_queue *
nvmet_fc_find_target_queue(struct nvmet_fc_tgtport *tgtport,
u64 connection_id)
{
struct nvmet_fc_tgt_assoc *assoc;
struct nvmet_fc_tgt_queue *queue;
u64 association_id = nvmet_fc_getassociationid(connection_id);
u16 qid = nvmet_fc_getqueueid(connection_id);
unsigned long flags;
spin_lock_irqsave(&tgtport->lock, flags);
list_for_each_entry(assoc, &tgtport->assoc_list, a_list) {
if (association_id == assoc->association_id) {
queue = assoc->queues[qid];
if (queue &&
(!atomic_read(&queue->connected) ||
!nvmet_fc_tgt_q_get(queue)))
queue = NULL;
spin_unlock_irqrestore(&tgtport->lock, flags);
return queue;
}
}
spin_unlock_irqrestore(&tgtport->lock, flags);
return NULL;
}
| [] | [] | [
"CVE-2017-18379"
] | [
"CWE-119"
] | 20 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"nvmet_fc_getqueueid"
],
"Function Argument": [],
"Globals": [
"NVMET_NR_QUEUES"
],
"Type Execution Declaration": [
"struct nvmet_fc_tgt_assoc",
"struct nvmet_fc_tgt_queue"
]
} |
20 | linux | https://github.com/torvalds/linux | drivers/nvme/target/fc.c | 0c319d3a144d4b8f1ea2047fd614d2149b68f889 | nvmet-fc: ensure target queue id within range.
When searching for queue ids, ensure they are within the expected range.
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk> | false | ae5fb548fe28f4fb4cdb0151e82cf8e2 | nvmet_fc_find_target_queue | static struct nvmet_fc_tgt_queue *
nvmet_fc_find_target_queue(struct nvmet_fc_tgtport *tgtport,
u64 connection_id)
{
struct nvmet_fc_tgt_assoc *assoc;
struct nvmet_fc_tgt_queue *queue;
u64 association_id = nvmet_fc_getassociationid(connection_id);
u16 qid = nvmet_fc_getqueueid(connection_id);
unsigned long flags;
if (qid > NVMET_NR_QUEUES)
return NULL;
spin_lock_irqsave(&tgtport->lock, flags);
list_for_each_entry(assoc, &tgtport->assoc_list, a_list) {
if (association_id == assoc->association_id) {
queue = assoc->queues[qid];
if (queue &&
(!atomic_read(&queue->connected) ||
!nvmet_fc_tgt_q_get(queue)))
queue = NULL;
spin_unlock_irqrestore(&tgtport->lock, flags);
return queue;
}
}
spin_unlock_irqrestore(&tgtport->lock, flags);
return NULL;
}
| [[786, "\tif (qid > NVMET_NR_QUEUES)\n"], [787, "\t\treturn NULL;\n"], [788, "\n"]] | [[786, "if (qid > NVMET_NR_QUEUES)"], [787, "return NULL;"], [788, "\n"]] | [
"CVE-2017-18379"
] | [
"CWE-119"
] | 20 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"nvmet_fc_getqueueid"
],
"Function Argument": [],
"Globals": [
"NVMET_NR_QUEUES"
],
"Type Execution Declaration": [
"struct nvmet_fc_tgt_assoc",
"struct nvmet_fc_tgt_queue"
]
} |
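The nvmet_fc pair above adds a single range check: qid is decoded from a caller-supplied 64-bit connection id and indexes assoc->queues[], which holds NVMET_NR_QUEUES + 1 entries, so any qid above NVMET_NR_QUEUES must be rejected before the lookup. A standalone sketch with hypothetical sizing:

/* Sketch: bounds-check an index decoded from an untrusted 64-bit id.
 * Hypothetical layout: low 16 bits of the id select a queue slot. */
#include <stdint.h>
#include <stddef.h>

#define NR_QUEUES 128

void *lookup_queue(void *queues[NR_QUEUES + 1], uint64_t connection_id)
{
	uint16_t qid = (uint16_t)(connection_id & 0xffff);

	if (qid > NR_QUEUES)    /* valid slots are 0..NR_QUEUES inclusive */
		return NULL;    /* out of range: refuse before indexing */
	return queues[qid];
}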
21 | linux | https://github.com/torvalds/linux | fs/ioctl.c | 10eec60ce79187686e052092e5383c99b4420a20 | vfs: ioctl: prevent double-fetch in dedupe ioctl
This prevents a double-fetch from user space that can lead to an
undersized allocation and heap overflow.
Fixes: 54dbc1517237 ("vfs: hoist the btrfs deduplication ioctl to the vfs")
Signed-off-by: Scott Bauer <sbauer@plzdonthack.me>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> | true | 88ed0b07f2f93f508fcb10ea0454b747 | ioctl_file_dedupe_range | static long ioctl_file_dedupe_range(struct file *file, void __user *arg)
{
struct file_dedupe_range __user *argp = arg;
struct file_dedupe_range *same = NULL;
int ret;
unsigned long size;
u16 count;
if (get_user(count, &argp->dest_count)) {
ret = -EFAULT;
goto out;
}
size = offsetof(struct file_dedupe_range __user, info[count]);
same = memdup_user(argp, size);
if (IS_ERR(same)) {
ret = PTR_ERR(same);
same = NULL;
goto out;
}
ret = vfs_dedupe_file_range(file, same);
if (ret)
goto out;
ret = copy_to_user(argp, same, size);
if (ret)
ret = -EFAULT;
out:
kfree(same);
return ret;
}
| [] | [] | [
"CVE-2016-6516"
] | [
"CWE-362",
"CWE-119"
] | 22 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"get_user",
"memdup_user",
"copy_to_user"
],
"Function Argument": [
"arg"
],
"Globals": [],
"Type Execution Declaration": [
"struct file_dedupe_range __user",
"struct file_dedupe_range"
]
} |
22 | linux | https://github.com/torvalds/linux | fs/ioctl.c | 10eec60ce79187686e052092e5383c99b4420a20 | vfs: ioctl: prevent double-fetch in dedupe ioctl
This prevents a double-fetch from user space that can lead to an
undersized allocation and heap overflow.
Fixes: 54dbc1517237 ("vfs: hoist the btrfs deduplication ioctl to the vfs")
Signed-off-by: Scott Bauer <sbauer@plzdonthack.me>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> | false | ad737bed6069921506d9ce434712bad7 | ioctl_file_dedupe_range | static long ioctl_file_dedupe_range(struct file *file, void __user *arg)
{
struct file_dedupe_range __user *argp = arg;
struct file_dedupe_range *same = NULL;
int ret;
unsigned long size;
u16 count;
if (get_user(count, &argp->dest_count)) {
ret = -EFAULT;
goto out;
}
size = offsetof(struct file_dedupe_range __user, info[count]);
same = memdup_user(argp, size);
if (IS_ERR(same)) {
ret = PTR_ERR(same);
same = NULL;
goto out;
}
same->dest_count = count;
ret = vfs_dedupe_file_range(file, same);
if (ret)
goto out;
ret = copy_to_user(argp, same, size);
if (ret)
ret = -EFAULT;
out:
kfree(same);
return ret;
}
| [[593, "\tsame->dest_count = count;\n"]] | [[593, "same->dest_count = count;"]] | [
"CVE-2016-6516"
] | [
"CWE-362",
"CWE-119"
] | 22 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"get_user",
"memdup_user",
"copy_to_user"
],
"Function Argument": [
"arg"
],
"Globals": [],
"Type Execution Declaration": [
"struct file_dedupe_range __user",
"struct file_dedupe_range"
]
} |
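The dedupe pair above is a textbook double-fetch: dest_count is read once with get_user() to size the allocation, then the whole struct is copied again with memdup_user(), and a racing thread can raise dest_count between the two reads so later code walks past the undersized buffer. The fix pins the copied struct to the first-read count. A standalone sketch of the idiom (hypothetical struct; memcpy() stands in for memdup_user()):

/* Sketch: flexible-array sizing from a count fetched once; the second
 * copy of the header is overwritten with the validated count. */
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

struct demo_range {
	unsigned short dest_count;
	int info[];                        /* dest_count trailing elements */
};

struct demo_range *fetch_range(const struct demo_range *user, unsigned short count)
{
	size_t size = offsetof(struct demo_range, info) + count * sizeof(int);
	struct demo_range *same = malloc(size);

	if (!same)
		return NULL;
	memcpy(same, user, size);      /* second fetch may see a larger count... */
	same->dest_count = count;      /* ...so trust only the first-read value */
	return same;
}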
23 | linux | https://github.com/torvalds/linux | drivers/net/ethernet/sunplus/spl2sw_driver.c | 12aece8b01507a2d357a1861f470e83621fbb6f2 | eth: sp7021: fix use after free bug in spl2sw_nvmem_get_mac_address
This frees "mac" and tries to display its address as part of the error
message on the next line. Swap the order.
Fixes: fd3040b9394c ("net: ethernet: Add driver for Sunplus SP7021")
Signed-off-by: Zheng Wang <zyytlz.wz@163.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | true | edf98bb67e48bc2765e89283212bb2a6 | spl2sw_nvmem_get_mac_address | static int spl2sw_nvmem_get_mac_address(struct device *dev, struct device_node *np,
void *addrbuf)
{
struct nvmem_cell *cell;
ssize_t len;
u8 *mac;
/* Get nvmem cell of mac-address from dts. */
cell = of_nvmem_cell_get(np, "mac-address");
if (IS_ERR(cell))
return PTR_ERR(cell);
/* Read mac address from nvmem cell. */
mac = nvmem_cell_read(cell, &len);
nvmem_cell_put(cell);
if (IS_ERR(mac))
return PTR_ERR(mac);
if (len != ETH_ALEN) {
kfree(mac);
dev_info(dev, "Invalid length of mac address in nvmem!\n");
return -EINVAL;
}
/* Byte order of some samples are reversed.
* Convert byte order here.
*/
spl2sw_check_mac_vendor_id_and_convert(mac);
/* Check if mac address is valid */
if (!is_valid_ether_addr(mac)) {
kfree(mac);
dev_info(dev, "Invalid mac address in nvmem (%pM)!\n", mac);
return -EINVAL;
}
ether_addr_copy(addrbuf, mac);
kfree(mac);
return 0;
}
| [[251, "\t\tkfree(mac);\n"]] | [[251, "kfree(mac);"]] | [
"CVE-2022-3541"
] | [
"CWE-119"
] | 24 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"of_nvmem_cell_get",
"nvmem_cell_read",
"nvmem_cell_put",
"is_valid_ether_addr",
"dev_info"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
24 | linux | https://github.com/torvalds/linux | drivers/net/ethernet/sunplus/spl2sw_driver.c | 12aece8b01507a2d357a1861f470e83621fbb6f2 | eth: sp7021: fix use after free bug in spl2sw_nvmem_get_mac_address
This frees "mac" and tries to display its address as part of the error
message on the next line. Swap the order.
Fixes: fd3040b9394c ("net: ethernet: Add driver for Sunplus SP7021")
Signed-off-by: Zheng Wang <zyytlz.wz@163.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | false | b9422cbb5d31e8e69bf752bd7a6c6ed5 | spl2sw_nvmem_get_mac_address | static int spl2sw_nvmem_get_mac_address(struct device *dev, struct device_node *np,
void *addrbuf)
{
struct nvmem_cell *cell;
ssize_t len;
u8 *mac;
/* Get nvmem cell of mac-address from dts. */
cell = of_nvmem_cell_get(np, "mac-address");
if (IS_ERR(cell))
return PTR_ERR(cell);
/* Read mac address from nvmem cell. */
mac = nvmem_cell_read(cell, &len);
nvmem_cell_put(cell);
if (IS_ERR(mac))
return PTR_ERR(mac);
if (len != ETH_ALEN) {
kfree(mac);
dev_info(dev, "Invalid length of mac address in nvmem!\n");
return -EINVAL;
}
/* Byte order of some samples are reversed.
* Convert byte order here.
*/
spl2sw_check_mac_vendor_id_and_convert(mac);
/* Check if mac address is valid */
if (!is_valid_ether_addr(mac)) {
dev_info(dev, "Invalid mac address in nvmem (%pM)!\n", mac);
kfree(mac);
return -EINVAL;
}
ether_addr_copy(addrbuf, mac);
kfree(mac);
return 0;
}
| [[252, "\t\tkfree(mac);\n"]] | [[252, "kfree(mac);"]] | [
"CVE-2022-3541"
] | [
"CWE-119"
] | 24 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"of_nvmem_cell_get",
"nvmem_cell_read",
"nvmem_cell_put",
"is_valid_ether_addr",
"dev_info"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
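The spl2sw pair above swaps two lines: the vulnerable version kfree()s mac and then hands the freed pointer to dev_info(), whose %pM specifier dereferences all six bytes. Log first, free second. A standalone sketch with printf() in place of dev_info():

/* Sketch: order logging before freeing so the message never reads
 * freed memory. */
#include <stdio.h>
#include <stdlib.h>

int report_and_discard(unsigned char *mac)
{
	/* log while the buffer is still live... */
	printf("invalid mac %02x:%02x:%02x:%02x:%02x:%02x\n",
	       mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
	/* ...then release it */
	free(mac);
	return -1;
}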
25 | linux | https://github.com/torvalds/linux | drivers/target/loopback/tcm_loop.c | 12f09ccb4612734a53e47ed5302e0479c10a50f8 | loopback: off by one in tcm_loop_make_naa_tpg()
This is an off-by-one 'tpgt' check in tcm_loop_make_naa_tpg() that could result
in memory corruption.
Signed-off-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org> | true | bb10218e4db9527815edd9343f459cff | tcm_loop_make_naa_tpg | struct se_portal_group *tcm_loop_make_naa_tpg(
struct se_wwn *wwn,
struct config_group *group,
const char *name)
{
struct tcm_loop_hba *tl_hba = container_of(wwn,
struct tcm_loop_hba, tl_hba_wwn);
struct tcm_loop_tpg *tl_tpg;
char *tpgt_str, *end_ptr;
int ret;
unsigned short int tpgt;
tpgt_str = strstr(name, "tpgt_");
if (!tpgt_str) {
printk(KERN_ERR "Unable to locate \"tpgt_#\" directory"
" group\n");
return ERR_PTR(-EINVAL);
}
tpgt_str += 5; /* Skip ahead of "tpgt_" */
tpgt = (unsigned short int) simple_strtoul(tpgt_str, &end_ptr, 0);
if (tpgt > TL_TPGS_PER_HBA) {
printk(KERN_ERR "Passed tpgt: %hu exceeds TL_TPGS_PER_HBA:"
" %u\n", tpgt, TL_TPGS_PER_HBA);
return ERR_PTR(-EINVAL);
}
tl_tpg = &tl_hba->tl_hba_tpgs[tpgt];
tl_tpg->tl_hba = tl_hba;
tl_tpg->tl_tpgt = tpgt;
/*
* Register the tl_tpg as a emulated SAS TCM Target Endpoint
*/
ret = core_tpg_register(&tcm_loop_fabric_configfs->tf_ops,
wwn, &tl_tpg->tl_se_tpg, tl_tpg,
TRANSPORT_TPG_TYPE_NORMAL);
if (ret < 0)
return ERR_PTR(-ENOMEM);
printk(KERN_INFO "TCM_Loop_ConfigFS: Allocated Emulated %s"
" Target Port %s,t,0x%04x\n", tcm_loop_dump_proto_id(tl_hba),
config_item_name(&wwn->wwn_group.cg_item), tpgt);
return &tl_tpg->tl_se_tpg;
}
| [[1208, "\tif (tpgt > TL_TPGS_PER_HBA) {\n"]] | [[1208, "if (tpgt > TL_TPGS_PER_HBA)"]] | [
"CVE-2011-5327"
] | [
"CWE-119"
] | 26 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [
"TL_TPGS_PER_HBA"
],
"Type Execution Declaration": [
"struct tcm_loop_hba",
"struct tcm_loop_tpg"
]
} |
26 | linux | https://github.com/torvalds/linux | drivers/target/loopback/tcm_loop.c | 12f09ccb4612734a53e47ed5302e0479c10a50f8 | loopback: off by one in tcm_loop_make_naa_tpg()
This is an off-by-one 'tpgt' check in tcm_loop_make_naa_tpg() that could result
in memory corruption.
Signed-off-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org> | false | ba9488c551927f9c924adb9bcb8fce29 | tcm_loop_make_naa_tpg | struct se_portal_group *tcm_loop_make_naa_tpg(
struct se_wwn *wwn,
struct config_group *group,
const char *name)
{
struct tcm_loop_hba *tl_hba = container_of(wwn,
struct tcm_loop_hba, tl_hba_wwn);
struct tcm_loop_tpg *tl_tpg;
char *tpgt_str, *end_ptr;
int ret;
unsigned short int tpgt;
tpgt_str = strstr(name, "tpgt_");
if (!tpgt_str) {
printk(KERN_ERR "Unable to locate \"tpgt_#\" directory"
" group\n");
return ERR_PTR(-EINVAL);
}
tpgt_str += 5; /* Skip ahead of "tpgt_" */
tpgt = (unsigned short int) simple_strtoul(tpgt_str, &end_ptr, 0);
if (tpgt >= TL_TPGS_PER_HBA) {
printk(KERN_ERR "Passed tpgt: %hu exceeds TL_TPGS_PER_HBA:"
" %u\n", tpgt, TL_TPGS_PER_HBA);
return ERR_PTR(-EINVAL);
}
tl_tpg = &tl_hba->tl_hba_tpgs[tpgt];
tl_tpg->tl_hba = tl_hba;
tl_tpg->tl_tpgt = tpgt;
/*
* Register the tl_tpg as a emulated SAS TCM Target Endpoint
*/
ret = core_tpg_register(&tcm_loop_fabric_configfs->tf_ops,
wwn, &tl_tpg->tl_se_tpg, tl_tpg,
TRANSPORT_TPG_TYPE_NORMAL);
if (ret < 0)
return ERR_PTR(-ENOMEM);
printk(KERN_INFO "TCM_Loop_ConfigFS: Allocated Emulated %s"
" Target Port %s,t,0x%04x\n", tcm_loop_dump_proto_id(tl_hba),
config_item_name(&wwn->wwn_group.cg_item), tpgt);
return &tl_tpg->tl_se_tpg;
}
| [[1208, "\tif (tpgt >= TL_TPGS_PER_HBA) {\n"]] | [[1208, "if (tpgt >= TL_TPGS_PER_HBA)"]] | [
"CVE-2011-5327"
] | [
"CWE-119"
] | 26 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [
"TL_TPGS_PER_HBA"
],
"Type Execution Declaration": [
"struct tcm_loop_hba",
"struct tcm_loop_tpg"
]
} |
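The tcm_loop pair above is a one-character fix: tl_hba_tpgs[] has TL_TPGS_PER_HBA elements, so the last valid index is TL_TPGS_PER_HBA - 1 and the comparison must be >=, not >. The rule in a standalone sketch:

/* Sketch: index validation for an N-element array. With N slots the
 * last valid index is N - 1, so `> N` lets idx == N slip through. */
#define N_TPGS 32

int lookup(int table[N_TPGS], unsigned int idx, int *out)
{
	if (idx >= N_TPGS)      /* not `> N_TPGS`: idx == N_TPGS is one past the end */
		return -1;
	*out = table[idx];
	return 0;
}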
27 | linux | https://github.com/torvalds/linux | fs/nfsd/nfsxdr.c | 13bf9fbff0e5e099e2b6f003a0ab8ae145436309 | nfsd: stricter decoding of write-like NFSv2/v3 ops
The NFSv2/v3 code does not systematically check whether we decode past
the end of the buffer. This generally appears to be harmless, but there
are a few places where we do arithmetic on the pointers involved and
don't account for the possibility that a length could be negative. Add
checks to catch these.
Reported-by: Tuomas Haanpää <thaan@synopsys.com>
Reported-by: Ari Kauppi <ari@synopsys.com>
Reviewed-by: NeilBrown <neilb@suse.com>
Cc: stable@vger.kernel.org
Signed-off-by: J. Bruce Fields <bfields@redhat.com> | true | 312254bef4649bf3181c0060b93bc490 | nfssvc_decode_writeargs | int
nfssvc_decode_writeargs(struct svc_rqst *rqstp, __be32 *p,
struct nfsd_writeargs *args)
{
unsigned int len, hdr, dlen;
struct kvec *head = rqstp->rq_arg.head;
int v;
p = decode_fh(p, &args->fh);
if (!p)
return 0;
p++; /* beginoffset */
args->offset = ntohl(*p++); /* offset */
p++; /* totalcount */
len = args->len = ntohl(*p++);
/*
* The protocol specifies a maximum of 8192 bytes.
*/
if (len > NFSSVC_MAXBLKSIZE_V2)
return 0;
/*
* Check to make sure that we got the right number of
* bytes.
*/
hdr = (void*)p - head->iov_base;
dlen = head->iov_len + rqstp->rq_arg.page_len - hdr;
/*
* Round the length of the data which was specified up to
* the next multiple of XDR units and then compare that
* against the length which was actually received.
* Note that when RPCSEC/GSS (for example) is used, the
* data buffer can be padded so dlen might be larger
* than required. It must never be smaller.
*/
if (dlen < XDR_QUADLEN(len)*4)
return 0;
rqstp->rq_vec[0].iov_base = (void*)p;
rqstp->rq_vec[0].iov_len = head->iov_len - hdr;
v = 0;
while (len > rqstp->rq_vec[v].iov_len) {
len -= rqstp->rq_vec[v].iov_len;
v++;
rqstp->rq_vec[v].iov_base = page_address(rqstp->rq_pages[v]);
rqstp->rq_vec[v].iov_len = PAGE_SIZE;
}
rqstp->rq_vec[v].iov_len = len;
args->vlen = v + 1;
return 1;
}
| [] | [] | [
"CVE-2017-7895"
] | [
"CWE-119"
] | 28 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"decode_fh",
"page_address"
],
"Function Argument": [
"rqstp"
],
"Globals": [
"NFSSVC_MAXBLKSIZE_V2",
"PAGE_SIZE",
"XDR_QUADLEN"
],
"Type Execution Declaration": []
} |
28 | linux | https://github.com/torvalds/linux | fs/nfsd/nfsxdr.c | 13bf9fbff0e5e099e2b6f003a0ab8ae145436309 | nfsd: stricter decoding of write-like NFSv2/v3 ops
The NFSv2/v3 code does not systematically check whether we decode past
the end of the buffer. This generally appears to be harmless, but there
are a few places where we do arithmetic on the pointers involved and
don't account for the possibility that a length could be negative. Add
checks to catch these.
Reported-by: Tuomas Haanpää <thaan@synopsys.com>
Reported-by: Ari Kauppi <ari@synopsys.com>
Reviewed-by: NeilBrown <neilb@suse.com>
Cc: stable@vger.kernel.org
Signed-off-by: J. Bruce Fields <bfields@redhat.com> | false | e0616d42a55dbbc6ea9361716131ed66 | nfssvc_decode_writeargs | int
nfssvc_decode_writeargs(struct svc_rqst *rqstp, __be32 *p,
struct nfsd_writeargs *args)
{
unsigned int len, hdr, dlen;
struct kvec *head = rqstp->rq_arg.head;
int v;
p = decode_fh(p, &args->fh);
if (!p)
return 0;
p++; /* beginoffset */
args->offset = ntohl(*p++); /* offset */
p++; /* totalcount */
len = args->len = ntohl(*p++);
/*
* The protocol specifies a maximum of 8192 bytes.
*/
if (len > NFSSVC_MAXBLKSIZE_V2)
return 0;
/*
* Check to make sure that we got the right number of
* bytes.
*/
hdr = (void*)p - head->iov_base;
if (hdr > head->iov_len)
return 0;
dlen = head->iov_len + rqstp->rq_arg.page_len - hdr;
/*
* Round the length of the data which was specified up to
* the next multiple of XDR units and then compare that
* against the length which was actually received.
* Note that when RPCSEC/GSS (for example) is used, the
* data buffer can be padded so dlen might be larger
* than required. It must never be smaller.
*/
if (dlen < XDR_QUADLEN(len)*4)
return 0;
rqstp->rq_vec[0].iov_base = (void*)p;
rqstp->rq_vec[0].iov_len = head->iov_len - hdr;
v = 0;
while (len > rqstp->rq_vec[v].iov_len) {
len -= rqstp->rq_vec[v].iov_len;
v++;
rqstp->rq_vec[v].iov_base = page_address(rqstp->rq_pages[v]);
rqstp->rq_vec[v].iov_len = PAGE_SIZE;
}
rqstp->rq_vec[v].iov_len = len;
args->vlen = v + 1;
return 1;
}
| [[305, "\tif (hdr > head->iov_len)\n"], [306, "\t\treturn 0;\n"]] | [[305, "if (hdr > head->iov_len)"], [306, "return 0;"]] | [
"CVE-2017-7895"
] | [
"CWE-119"
] | 28 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"decode_fh",
"page_address"
],
"Function Argument": [
"rqstp"
],
"Globals": [
"NFSSVC_MAXBLKSIZE_V2",
"PAGE_SIZE",
"XDR_QUADLEN"
],
"Type Execution Declaration": []
} |
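In the nfssvc_decode_writeargs() pair above, the added `if (hdr > head->iov_len)` matters because dlen is computed as head->iov_len + page_len - hdr in unsigned arithmetic: once the decode pointer has run past the head buffer, the subtraction wraps to a huge value and the XDR_QUADLEN(len) comparison passes trivially. A standalone sketch of the wrap:

/* Sketch: unsigned subtraction wraps instead of going negative, so the
 * consumed length must be validated before it is subtracted. */
#include <stdio.h>

unsigned int remaining(unsigned int buf_len, unsigned int consumed)
{
	if (consumed > buf_len)        /* the fix: catch overrun explicitly */
		return 0;
	return buf_len - consumed;     /* without the check: wraps to ~4 billion */
}

int main(void)
{
	printf("%u\n", 100u - 132u);   /* prints 4294967264, not -32 */
	return 0;
}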
29 | linux | https://github.com/torvalds/linux | fs/jbd2/transaction.c | 15291164b22a357cb211b618adfef4fa82fc0de3 | jbd2: clear BH_Delay & BH_Unwritten in journal_unmap_buffer
journal_unmap_buffer()'s zap_buffer: code clears a lot of buffer head
state, à la discard_buffer(), but does not touch _Delay or _Unwritten as
discard_buffer() does.
This can be problematic in some areas of the ext4 code which assume
that if they have found a buffer marked unwritten or delay, then it's
a live one. Perhaps those spots should check whether it is mapped
as well, but if jbd2 is going to tear down a buffer, let's really
tear it down completely.
Without this I get some fsx failures on sub-page-block filesystems
up until v3.2, at which point 4e96b2dbbf1d7e81f22047a50f862555a6cb87cb
and 189e868fa8fdca702eb9db9d8afc46b5cb9144c9 make the failures go
away, because buried within that large change is some more flag
clearing. I still think it's worth doing in jbd2, since
->invalidatepage leads here directly, and it's the right place
to clear away these flags.
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: stable@vger.kernel.org | true | 1708a3552db849780aa5aa7df12f8b7a | journal_unmap_buffer | static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh)
{
transaction_t *transaction;
struct journal_head *jh;
int may_free = 1;
int ret;
BUFFER_TRACE(bh, "entry");
/*
* It is safe to proceed here without the j_list_lock because the
* buffers cannot be stolen by try_to_free_buffers as long as we are
* holding the page lock. --sct
*/
if (!buffer_jbd(bh))
goto zap_buffer_unlocked;
/* OK, we have data buffer in journaled mode */
write_lock(&journal->j_state_lock);
jbd_lock_bh_state(bh);
spin_lock(&journal->j_list_lock);
jh = jbd2_journal_grab_journal_head(bh);
if (!jh)
goto zap_buffer_no_jh;
/*
* We cannot remove the buffer from checkpoint lists until the
* transaction adding inode to orphan list (let's call it T)
* is committed. Otherwise if the transaction changing the
* buffer would be cleaned from the journal before T is
* committed, a crash will cause that the correct contents of
* the buffer will be lost. On the other hand we have to
* clear the buffer dirty bit at latest at the moment when the
* transaction marking the buffer as freed in the filesystem
* structures is committed because from that moment on the
* buffer can be reallocated and used by a different page.
* Since the block hasn't been freed yet but the inode has
* already been added to orphan list, it is safe for us to add
* the buffer to BJ_Forget list of the newest transaction.
*/
transaction = jh->b_transaction;
if (transaction == NULL) {
/* First case: not on any transaction. If it
* has no checkpoint link, then we can zap it:
* it's a writeback-mode buffer so we don't care
* if it hits disk safely. */
if (!jh->b_cp_transaction) {
JBUFFER_TRACE(jh, "not on any transaction: zap");
goto zap_buffer;
}
if (!buffer_dirty(bh)) {
/* bdflush has written it. We can drop it now */
goto zap_buffer;
}
/* OK, it must be in the journal but still not
* written fully to disk: it's metadata or
* journaled data... */
if (journal->j_running_transaction) {
/* ... and once the current transaction has
* committed, the buffer won't be needed any
* longer. */
JBUFFER_TRACE(jh, "checkpointed: add to BJ_Forget");
ret = __dispose_buffer(jh,
journal->j_running_transaction);
jbd2_journal_put_journal_head(jh);
spin_unlock(&journal->j_list_lock);
jbd_unlock_bh_state(bh);
write_unlock(&journal->j_state_lock);
return ret;
} else {
/* There is no currently-running transaction. So the
* orphan record which we wrote for this file must have
* passed into commit. We must attach this buffer to
* the committing transaction, if it exists. */
if (journal->j_committing_transaction) {
JBUFFER_TRACE(jh, "give to committing trans");
ret = __dispose_buffer(jh,
journal->j_committing_transaction);
jbd2_journal_put_journal_head(jh);
spin_unlock(&journal->j_list_lock);
jbd_unlock_bh_state(bh);
write_unlock(&journal->j_state_lock);
return ret;
} else {
/* The orphan record's transaction has
* committed. We can cleanse this buffer */
clear_buffer_jbddirty(bh);
goto zap_buffer;
}
}
} else if (transaction == journal->j_committing_transaction) {
JBUFFER_TRACE(jh, "on committing transaction");
/*
* The buffer is committing, we simply cannot touch
* it. So we just set j_next_transaction to the
* running transaction (if there is one) and mark
* buffer as freed so that commit code knows it should
* clear dirty bits when it is done with the buffer.
*/
set_buffer_freed(bh);
if (journal->j_running_transaction && buffer_jbddirty(bh))
jh->b_next_transaction = journal->j_running_transaction;
jbd2_journal_put_journal_head(jh);
spin_unlock(&journal->j_list_lock);
jbd_unlock_bh_state(bh);
write_unlock(&journal->j_state_lock);
return 0;
} else {
/* Good, the buffer belongs to the running transaction.
* We are writing our own transaction's data, not any
* previous one's, so it is safe to throw it away
* (remember that we expect the filesystem to have set
* i_size already for this truncate so recovery will not
* expose the disk blocks we are discarding here.) */
J_ASSERT_JH(jh, transaction == journal->j_running_transaction);
JBUFFER_TRACE(jh, "on running transaction");
may_free = __dispose_buffer(jh, transaction);
}
zap_buffer:
jbd2_journal_put_journal_head(jh);
zap_buffer_no_jh:
spin_unlock(&journal->j_list_lock);
jbd_unlock_bh_state(bh);
write_unlock(&journal->j_state_lock);
zap_buffer_unlocked:
clear_buffer_dirty(bh);
J_ASSERT_BH(bh, !buffer_jbddirty(bh));
clear_buffer_mapped(bh);
clear_buffer_req(bh);
clear_buffer_new(bh);
bh->b_bdev = NULL;
return may_free;
}
| [] | [] | [
"CVE-2011-4086"
] | [
"CWE-119"
] | 30 | {
"Execution Environment": [
"the ext4 filesystem must be mounted with a journal"
],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
30 | linux | https://github.com/torvalds/linux | fs/jbd2/transaction.c | 15291164b22a357cb211b618adfef4fa82fc0de3 | jbd2: clear BH_Delay & BH_Unwritten in journal_unmap_buffer
journal_unmap_buffer()'s zap_buffer: code clears a lot of buffer head
state ala discard_buffer(), but does not touch _Delay or _Unwritten as
discard_buffer() does.
This can be problematic in some areas of the ext4 code which assume
that if they have found a buffer marked unwritten or delay, then it's
a live one. Perhaps those spots should check whether it is mapped
as well, but if jbd2 is going to tear down a buffer, let's really
tear it down completely.
Without this I get some fsx failures on sub-page-block filesystems
up until v3.2, at which point 4e96b2dbbf1d7e81f22047a50f862555a6cb87cb
and 189e868fa8fdca702eb9db9d8afc46b5cb9144c9 make the failures go
away, because buried within that large change is some more flag
clearing. I still think it's worth doing in jbd2, since
->invalidatepage leads here directly, and it's the right place
to clear away these flags.
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: stable@vger.kernel.org | false | e85e99d5d65ba40f87a83bf83e2677ee | journal_unmap_buffer | static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh)
{
transaction_t *transaction;
struct journal_head *jh;
int may_free = 1;
int ret;
BUFFER_TRACE(bh, "entry");
/*
* It is safe to proceed here without the j_list_lock because the
* buffers cannot be stolen by try_to_free_buffers as long as we are
* holding the page lock. --sct
*/
if (!buffer_jbd(bh))
goto zap_buffer_unlocked;
/* OK, we have data buffer in journaled mode */
write_lock(&journal->j_state_lock);
jbd_lock_bh_state(bh);
spin_lock(&journal->j_list_lock);
jh = jbd2_journal_grab_journal_head(bh);
if (!jh)
goto zap_buffer_no_jh;
/*
* We cannot remove the buffer from checkpoint lists until the
* transaction adding inode to orphan list (let's call it T)
* is committed. Otherwise if the transaction changing the
* buffer would be cleaned from the journal before T is
* committed, a crash will cause that the correct contents of
* the buffer will be lost. On the other hand we have to
* clear the buffer dirty bit at latest at the moment when the
* transaction marking the buffer as freed in the filesystem
* structures is committed because from that moment on the
* buffer can be reallocated and used by a different page.
* Since the block hasn't been freed yet but the inode has
* already been added to orphan list, it is safe for us to add
* the buffer to BJ_Forget list of the newest transaction.
*/
transaction = jh->b_transaction;
if (transaction == NULL) {
/* First case: not on any transaction. If it
* has no checkpoint link, then we can zap it:
* it's a writeback-mode buffer so we don't care
* if it hits disk safely. */
if (!jh->b_cp_transaction) {
JBUFFER_TRACE(jh, "not on any transaction: zap");
goto zap_buffer;
}
if (!buffer_dirty(bh)) {
/* bdflush has written it. We can drop it now */
goto zap_buffer;
}
/* OK, it must be in the journal but still not
* written fully to disk: it's metadata or
* journaled data... */
if (journal->j_running_transaction) {
/* ... and once the current transaction has
* committed, the buffer won't be needed any
* longer. */
JBUFFER_TRACE(jh, "checkpointed: add to BJ_Forget");
ret = __dispose_buffer(jh,
journal->j_running_transaction);
jbd2_journal_put_journal_head(jh);
spin_unlock(&journal->j_list_lock);
jbd_unlock_bh_state(bh);
write_unlock(&journal->j_state_lock);
return ret;
} else {
/* There is no currently-running transaction. So the
* orphan record which we wrote for this file must have
* passed into commit. We must attach this buffer to
* the committing transaction, if it exists. */
if (journal->j_committing_transaction) {
JBUFFER_TRACE(jh, "give to committing trans");
ret = __dispose_buffer(jh,
journal->j_committing_transaction);
jbd2_journal_put_journal_head(jh);
spin_unlock(&journal->j_list_lock);
jbd_unlock_bh_state(bh);
write_unlock(&journal->j_state_lock);
return ret;
} else {
/* The orphan record's transaction has
* committed. We can cleanse this buffer */
clear_buffer_jbddirty(bh);
goto zap_buffer;
}
}
} else if (transaction == journal->j_committing_transaction) {
JBUFFER_TRACE(jh, "on committing transaction");
/*
* The buffer is committing, we simply cannot touch
* it. So we just set j_next_transaction to the
* running transaction (if there is one) and mark
* buffer as freed so that commit code knows it should
* clear dirty bits when it is done with the buffer.
*/
set_buffer_freed(bh);
if (journal->j_running_transaction && buffer_jbddirty(bh))
jh->b_next_transaction = journal->j_running_transaction;
jbd2_journal_put_journal_head(jh);
spin_unlock(&journal->j_list_lock);
jbd_unlock_bh_state(bh);
write_unlock(&journal->j_state_lock);
return 0;
} else {
/* Good, the buffer belongs to the running transaction.
* We are writing our own transaction's data, not any
* previous one's, so it is safe to throw it away
* (remember that we expect the filesystem to have set
* i_size already for this truncate so recovery will not
* expose the disk blocks we are discarding here.) */
J_ASSERT_JH(jh, transaction == journal->j_running_transaction);
JBUFFER_TRACE(jh, "on running transaction");
may_free = __dispose_buffer(jh, transaction);
}
zap_buffer:
jbd2_journal_put_journal_head(jh);
zap_buffer_no_jh:
spin_unlock(&journal->j_list_lock);
jbd_unlock_bh_state(bh);
write_unlock(&journal->j_state_lock);
zap_buffer_unlocked:
clear_buffer_dirty(bh);
J_ASSERT_BH(bh, !buffer_jbddirty(bh));
clear_buffer_mapped(bh);
clear_buffer_req(bh);
clear_buffer_new(bh);
clear_buffer_delay(bh);
clear_buffer_unwritten(bh);
bh->b_bdev = NULL;
return may_free;
}
| [[1952, "\tclear_buffer_delay(bh);\n"], [1953, "\tclear_buffer_unwritten(bh);\n"]] | [[1952, "clear_buffer_delay(bh);"], [1953, "clear_buffer_unwritten(bh);"]] | [
"CVE-2011-4086"
] | [
"CWE-119"
] | 30 | {
"Execution Environment": [
"the ext4 filesystem must be mounted with a journal"
],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
31 | linux | https://github.com/torvalds/linux | mm/memory.c | 16ce101db85db694a91380aa4c89b25530871d33 | mm/memory.c: fix race when faulting a device private page
Patch series "Fix several device private page reference counting issues",
v2
This series aims to fix a number of page reference counting issues in
drivers dealing with device private ZONE_DEVICE pages. These result in
use-after-free type bugs, either from accessing a struct page which no
longer exists because it has been removed or accessing fields within the
struct page which are no longer valid because the page has been freed.
During normal usage it is unlikely these will cause any problems. However
without these fixes it is possible to crash the kernel from userspace.
These crashes can be triggered either by unloading the kernel module or
unbinding the device from the driver prior to a userspace task exiting.
In modules such as Nouveau it is also possible to trigger some of these
issues by explicitly closing the device file-descriptor prior to the task
exiting and then accessing device private memory.
This involves some minor changes to both PowerPC and AMD GPU code.
Unfortunately I lack hardware to test either of those so any help there
would be appreciated. The changes mimic what is done in for both Nouveau
and hmm-tests though so I doubt they will cause problems.
This patch (of 8):
When the CPU tries to access a device private page the migrate_to_ram()
callback associated with the pgmap for the page is called. However no
reference is taken on the faulting page. Therefore a concurrent migration
of the device private page can free the page and possibly the underlying
pgmap. This results in a race which can crash the kernel due to the
migrate_to_ram() function pointer becoming invalid. It also means drivers
can't reliably read the zone_device_data field because the page may have
been freed with memunmap_pages().
Close the race by getting a reference on the page while holding the ptl to
ensure it has not been freed. Unfortunately the elevated reference count
will cause the migration required to handle the fault to fail. To avoid
this failure pass the faulting page into the migrate_vma functions so that
if an elevated reference count is found it can be checked to see if it's
expected or not.
[mpe@ellerman.id.au: fix build]
Link: https://lkml.kernel.org/r/87fsgbf3gh.fsf@mpe.ellerman.id.au
Link: https://lkml.kernel.org/r/cover.60659b549d8509ddecafad4f498ee7f03bb23c69.1664366292.git-series.apopple@nvidia.com
Link: https://lkml.kernel.org/r/d3e813178a59e565e8d78d9b9a4e2562f6494f90.1664366292.git-series.apopple@nvidia.com
Signed-off-by: Alistair Popple <apopple@nvidia.com>
Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alex Sierra <alex.sierra@amd.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org> | true | 6c3324ccd98e2b9019df7554fdfc4e67 | do_swap_page | vm_fault_t do_swap_page(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct folio *swapcache, *folio = NULL;
struct page *page;
struct swap_info_struct *si = NULL;
rmap_t rmap_flags = RMAP_NONE;
bool exclusive = false;
swp_entry_t entry;
pte_t pte;
int locked;
vm_fault_t ret = 0;
void *shadow = NULL;
if (!pte_unmap_same(vmf))
goto out;
entry = pte_to_swp_entry(vmf->orig_pte);
if (unlikely(non_swap_entry(entry))) {
if (is_migration_entry(entry)) {
migration_entry_wait(vma->vm_mm, vmf->pmd,
vmf->address);
} else if (is_device_exclusive_entry(entry)) {
vmf->page = pfn_swap_entry_to_page(entry);
ret = remove_device_exclusive_entry(vmf);
} else if (is_device_private_entry(entry)) {
vmf->page = pfn_swap_entry_to_page(entry);
ret = vmf->page->pgmap->ops->migrate_to_ram(vmf);
} else if (is_hwpoison_entry(entry)) {
ret = VM_FAULT_HWPOISON;
} else if (is_swapin_error_entry(entry)) {
ret = VM_FAULT_SIGBUS;
} else if (is_pte_marker_entry(entry)) {
ret = handle_pte_marker(vmf);
} else {
print_bad_pte(vma, vmf->address, vmf->orig_pte, NULL);
ret = VM_FAULT_SIGBUS;
}
goto out;
}
/* Prevent swapoff from happening to us. */
si = get_swap_device(entry);
if (unlikely(!si))
goto out;
folio = swap_cache_get_folio(entry, vma, vmf->address);
if (folio)
page = folio_file_page(folio, swp_offset(entry));
swapcache = folio;
if (!folio) {
if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
__swap_count(entry) == 1) {
/* skip swapcache */
folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
vma, vmf->address, false);
page = &folio->page;
if (folio) {
__folio_set_locked(folio);
__folio_set_swapbacked(folio);
if (mem_cgroup_swapin_charge_folio(folio,
vma->vm_mm, GFP_KERNEL,
entry)) {
ret = VM_FAULT_OOM;
goto out_page;
}
mem_cgroup_swapin_uncharge_swap(entry);
shadow = get_shadow_from_swap_cache(entry);
if (shadow)
workingset_refault(folio, shadow);
folio_add_lru(folio);
/* To provide entry to swap_readpage() */
folio_set_swap_entry(folio, entry);
swap_readpage(page, true, NULL);
folio->private = NULL;
}
} else {
page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
vmf);
if (page)
folio = page_folio(page);
swapcache = folio;
}
if (!folio) {
/*
* Back out if somebody else faulted in this pte
* while we released the pte lock.
*/
vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
vmf->address, &vmf->ptl);
if (likely(pte_same(*vmf->pte, vmf->orig_pte)))
ret = VM_FAULT_OOM;
goto unlock;
}
/* Had to read the page from swap area: Major fault */
ret = VM_FAULT_MAJOR;
count_vm_event(PGMAJFAULT);
count_memcg_event_mm(vma->vm_mm, PGMAJFAULT);
} else if (PageHWPoison(page)) {
/*
* hwpoisoned dirty swapcache pages are kept for killing
* owner processes (which may be unknown at hwpoison time)
*/
ret = VM_FAULT_HWPOISON;
goto out_release;
}
locked = folio_lock_or_retry(folio, vma->vm_mm, vmf->flags);
if (!locked) {
ret |= VM_FAULT_RETRY;
goto out_release;
}
if (swapcache) {
/*
* Make sure folio_free_swap() or swapoff did not release the
* swapcache from under us. The page pin, and pte_same test
* below, are not enough to exclude that. Even if it is still
* swapcache, we need to check that the page's swap has not
* changed.
*/
if (unlikely(!folio_test_swapcache(folio) ||
page_private(page) != entry.val))
goto out_page;
/*
* KSM sometimes has to copy on read faults, for example, if
* page->index of !PageKSM() pages would be nonlinear inside the
* anon VMA -- PageKSM() is lost on actual swapout.
*/
page = ksm_might_need_to_copy(page, vma, vmf->address);
if (unlikely(!page)) {
ret = VM_FAULT_OOM;
goto out_page;
}
folio = page_folio(page);
/*
* If we want to map a page that's in the swapcache writable, we
* have to detect via the refcount if we're really the exclusive
* owner. Try removing the extra reference from the local LRU
* pagevecs if required.
*/
if ((vmf->flags & FAULT_FLAG_WRITE) && folio == swapcache &&
!folio_test_ksm(folio) && !folio_test_lru(folio))
lru_add_drain();
}
cgroup_throttle_swaprate(page, GFP_KERNEL);
/*
* Back out if somebody else already faulted in this pte.
*/
vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
&vmf->ptl);
if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte)))
goto out_nomap;
if (unlikely(!folio_test_uptodate(folio))) {
ret = VM_FAULT_SIGBUS;
goto out_nomap;
}
/*
* PG_anon_exclusive reuses PG_mappedtodisk for anon pages. A swap pte
* must never point at an anonymous page in the swapcache that is
* PG_anon_exclusive. Sanity check that this holds and especially, that
* no filesystem set PG_mappedtodisk on a page in the swapcache. Sanity
* check after taking the PT lock and making sure that nobody
* concurrently faulted in this page and set PG_anon_exclusive.
*/
BUG_ON(!folio_test_anon(folio) && folio_test_mappedtodisk(folio));
BUG_ON(folio_test_anon(folio) && PageAnonExclusive(page));
/*
* Check under PT lock (to protect against concurrent fork() sharing
* the swap entry concurrently) for certainly exclusive pages.
*/
if (!folio_test_ksm(folio)) {
/*
* Note that pte_swp_exclusive() == false for architectures
* without __HAVE_ARCH_PTE_SWP_EXCLUSIVE.
*/
exclusive = pte_swp_exclusive(vmf->orig_pte);
if (folio != swapcache) {
/*
* We have a fresh page that is not exposed to the
* swapcache -> certainly exclusive.
*/
exclusive = true;
} else if (exclusive && folio_test_writeback(folio) &&
data_race(si->flags & SWP_STABLE_WRITES)) {
/*
* This is tricky: not all swap backends support
* concurrent page modifications while under writeback.
*
* So if we stumble over such a page in the swapcache
* we must not set the page exclusive, otherwise we can
* map it writable without further checks and modify it
* while still under writeback.
*
* For these problematic swap backends, simply drop the
* exclusive marker: this is perfectly fine as we start
* writeback only if we fully unmapped the page and
* there are no unexpected references on the page after
* unmapping succeeded. After fully unmapped, no
* further GUP references (FOLL_GET and FOLL_PIN) can
* appear, so dropping the exclusive marker and mapping
* it only R/O is fine.
*/
exclusive = false;
}
}
/*
* Remove the swap entry and conditionally try to free up the swapcache.
* We're already holding a reference on the page but haven't mapped it
* yet.
*/
swap_free(entry);
if (should_try_to_free_swap(folio, vma, vmf->flags))
folio_free_swap(folio);
inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
pte = mk_pte(page, vma->vm_page_prot);
/*
* Same logic as in do_wp_page(); however, optimize for pages that are
* certainly not shared either because we just allocated them without
* exposing them to the swapcache or because the swap entry indicates
* exclusivity.
*/
if (!folio_test_ksm(folio) &&
(exclusive || folio_ref_count(folio) == 1)) {
if (vmf->flags & FAULT_FLAG_WRITE) {
pte = maybe_mkwrite(pte_mkdirty(pte), vma);
vmf->flags &= ~FAULT_FLAG_WRITE;
ret |= VM_FAULT_WRITE;
}
rmap_flags |= RMAP_EXCLUSIVE;
}
flush_icache_page(vma, page);
if (pte_swp_soft_dirty(vmf->orig_pte))
pte = pte_mksoft_dirty(pte);
if (pte_swp_uffd_wp(vmf->orig_pte)) {
pte = pte_mkuffd_wp(pte);
pte = pte_wrprotect(pte);
}
vmf->orig_pte = pte;
/* ksm created a completely new copy */
if (unlikely(folio != swapcache && swapcache)) {
page_add_new_anon_rmap(page, vma, vmf->address);
folio_add_lru_vma(folio, vma);
} else {
page_add_anon_rmap(page, vma, vmf->address, rmap_flags);
}
VM_BUG_ON(!folio_test_anon(folio) ||
(pte_write(pte) && !PageAnonExclusive(page)));
set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
folio_unlock(folio);
if (folio != swapcache && swapcache) {
/*
* Hold the lock to avoid the swap entry to be reused
* until we take the PT lock for the pte_same() check
* (to avoid false positives from pte_same). For
* further safety release the lock after the swap_free
* so that the swap count won't change under a
* parallel locked swapcache.
*/
folio_unlock(swapcache);
folio_put(swapcache);
}
if (vmf->flags & FAULT_FLAG_WRITE) {
ret |= do_wp_page(vmf);
if (ret & VM_FAULT_ERROR)
ret &= VM_FAULT_ERROR;
goto out;
}
/* No need to invalidate - it was non-present before */
update_mmu_cache(vma, vmf->address, vmf->pte);
unlock:
pte_unmap_unlock(vmf->pte, vmf->ptl);
out:
if (si)
put_swap_device(si);
return ret;
out_nomap:
pte_unmap_unlock(vmf->pte, vmf->ptl);
out_page:
folio_unlock(folio);
out_release:
folio_put(folio);
if (folio != swapcache && swapcache) {
folio_unlock(swapcache);
folio_put(swapcache);
}
if (si)
put_swap_device(si);
return ret;
}
| [[3753, "\t\t\tret = vmf->page->pgmap->ops->migrate_to_ram(vmf);\n"]] | [[3753, "ret = vmf->page->pgmap->ops->migrate_to_ram(vmf);"]] | [
"CVE-2022-3523"
] | [
"CWE-416",
"CWE-119"
] | 32 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"vmf->page->pgmap->ops->migrate_to_ram"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": [
"struct vm_fault",
"struct page",
"struct dev_pagemap_ops"
]
} |
32 | linux | https://github.com/torvalds/linux | mm/memory.c | 16ce101db85db694a91380aa4c89b25530871d33 | mm/memory.c: fix race when faulting a device private page
Patch series "Fix several device private page reference counting issues",
v2
This series aims to fix a number of page reference counting issues in
drivers dealing with device private ZONE_DEVICE pages. These result in
use-after-free type bugs, either from accessing a struct page which no
longer exists because it has been removed or accessing fields within the
struct page which are no longer valid because the page has been freed.
During normal usage it is unlikely these will cause any problems. However
without these fixes it is possible to crash the kernel from userspace.
These crashes can be triggered either by unloading the kernel module or
unbinding the device from the driver prior to a userspace task exiting.
In modules such as Nouveau it is also possible to trigger some of these
issues by explicitly closing the device file-descriptor prior to the task
exiting and then accessing device private memory.
This involves some minor changes to both PowerPC and AMD GPU code.
Unfortunately I lack hardware to test either of those so any help there
would be appreciated. The changes mimic what is done in for both Nouveau
and hmm-tests though so I doubt they will cause problems.
This patch (of 8):
When the CPU tries to access a device private page the migrate_to_ram()
callback associated with the pgmap for the page is called. However no
reference is taken on the faulting page. Therefore a concurrent migration
of the device private page can free the page and possibly the underlying
pgmap. This results in a race which can crash the kernel due to the
migrate_to_ram() function pointer becoming invalid. It also means drivers
can't reliably read the zone_device_data field because the page may have
been freed with memunmap_pages().
Close the race by getting a reference on the page while holding the ptl to
ensure it has not been freed. Unfortunately the elevated reference count
will cause the migration required to handle the fault to fail. To avoid
this failure pass the faulting page into the migrate_vma functions so that
if an elevated reference count is found it can be checked to see if it's
expected or not.
[mpe@ellerman.id.au: fix build]
Link: https://lkml.kernel.org/r/87fsgbf3gh.fsf@mpe.ellerman.id.au
Link: https://lkml.kernel.org/r/cover.60659b549d8509ddecafad4f498ee7f03bb23c69.1664366292.git-series.apopple@nvidia.com
Link: https://lkml.kernel.org/r/d3e813178a59e565e8d78d9b9a4e2562f6494f90.1664366292.git-series.apopple@nvidia.com
Signed-off-by: Alistair Popple <apopple@nvidia.com>
Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alex Sierra <alex.sierra@amd.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org> | false | b2a6dcd0780f4ef35d80208cafe62d51 | do_swap_page | vm_fault_t do_swap_page(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct folio *swapcache, *folio = NULL;
struct page *page;
struct swap_info_struct *si = NULL;
rmap_t rmap_flags = RMAP_NONE;
bool exclusive = false;
swp_entry_t entry;
pte_t pte;
int locked;
vm_fault_t ret = 0;
void *shadow = NULL;
if (!pte_unmap_same(vmf))
goto out;
entry = pte_to_swp_entry(vmf->orig_pte);
if (unlikely(non_swap_entry(entry))) {
if (is_migration_entry(entry)) {
migration_entry_wait(vma->vm_mm, vmf->pmd,
vmf->address);
} else if (is_device_exclusive_entry(entry)) {
vmf->page = pfn_swap_entry_to_page(entry);
ret = remove_device_exclusive_entry(vmf);
} else if (is_device_private_entry(entry)) {
vmf->page = pfn_swap_entry_to_page(entry);
vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
vmf->address, &vmf->ptl);
if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {
spin_unlock(vmf->ptl);
goto out;
}
/*
* Get a page reference while we know the page can't be
* freed.
*/
get_page(vmf->page);
pte_unmap_unlock(vmf->pte, vmf->ptl);
vmf->page->pgmap->ops->migrate_to_ram(vmf);
put_page(vmf->page);
} else if (is_hwpoison_entry(entry)) {
ret = VM_FAULT_HWPOISON;
} else if (is_swapin_error_entry(entry)) {
ret = VM_FAULT_SIGBUS;
} else if (is_pte_marker_entry(entry)) {
ret = handle_pte_marker(vmf);
} else {
print_bad_pte(vma, vmf->address, vmf->orig_pte, NULL);
ret = VM_FAULT_SIGBUS;
}
goto out;
}
/* Prevent swapoff from happening to us. */
si = get_swap_device(entry);
if (unlikely(!si))
goto out;
folio = swap_cache_get_folio(entry, vma, vmf->address);
if (folio)
page = folio_file_page(folio, swp_offset(entry));
swapcache = folio;
if (!folio) {
if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
__swap_count(entry) == 1) {
/* skip swapcache */
folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
vma, vmf->address, false);
page = &folio->page;
if (folio) {
__folio_set_locked(folio);
__folio_set_swapbacked(folio);
if (mem_cgroup_swapin_charge_folio(folio,
vma->vm_mm, GFP_KERNEL,
entry)) {
ret = VM_FAULT_OOM;
goto out_page;
}
mem_cgroup_swapin_uncharge_swap(entry);
shadow = get_shadow_from_swap_cache(entry);
if (shadow)
workingset_refault(folio, shadow);
folio_add_lru(folio);
/* To provide entry to swap_readpage() */
folio_set_swap_entry(folio, entry);
swap_readpage(page, true, NULL);
folio->private = NULL;
}
} else {
page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
vmf);
if (page)
folio = page_folio(page);
swapcache = folio;
}
if (!folio) {
/*
* Back out if somebody else faulted in this pte
* while we released the pte lock.
*/
vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
vmf->address, &vmf->ptl);
if (likely(pte_same(*vmf->pte, vmf->orig_pte)))
ret = VM_FAULT_OOM;
goto unlock;
}
/* Had to read the page from swap area: Major fault */
ret = VM_FAULT_MAJOR;
count_vm_event(PGMAJFAULT);
count_memcg_event_mm(vma->vm_mm, PGMAJFAULT);
} else if (PageHWPoison(page)) {
/*
* hwpoisoned dirty swapcache pages are kept for killing
* owner processes (which may be unknown at hwpoison time)
*/
ret = VM_FAULT_HWPOISON;
goto out_release;
}
locked = folio_lock_or_retry(folio, vma->vm_mm, vmf->flags);
if (!locked) {
ret |= VM_FAULT_RETRY;
goto out_release;
}
if (swapcache) {
/*
* Make sure folio_free_swap() or swapoff did not release the
* swapcache from under us. The page pin, and pte_same test
* below, are not enough to exclude that. Even if it is still
* swapcache, we need to check that the page's swap has not
* changed.
*/
if (unlikely(!folio_test_swapcache(folio) ||
page_private(page) != entry.val))
goto out_page;
/*
* KSM sometimes has to copy on read faults, for example, if
* page->index of !PageKSM() pages would be nonlinear inside the
* anon VMA -- PageKSM() is lost on actual swapout.
*/
page = ksm_might_need_to_copy(page, vma, vmf->address);
if (unlikely(!page)) {
ret = VM_FAULT_OOM;
goto out_page;
}
folio = page_folio(page);
/*
* If we want to map a page that's in the swapcache writable, we
* have to detect via the refcount if we're really the exclusive
* owner. Try removing the extra reference from the local LRU
* pagevecs if required.
*/
if ((vmf->flags & FAULT_FLAG_WRITE) && folio == swapcache &&
!folio_test_ksm(folio) && !folio_test_lru(folio))
lru_add_drain();
}
cgroup_throttle_swaprate(page, GFP_KERNEL);
/*
* Back out if somebody else already faulted in this pte.
*/
vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
&vmf->ptl);
if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte)))
goto out_nomap;
if (unlikely(!folio_test_uptodate(folio))) {
ret = VM_FAULT_SIGBUS;
goto out_nomap;
}
/*
* PG_anon_exclusive reuses PG_mappedtodisk for anon pages. A swap pte
* must never point at an anonymous page in the swapcache that is
* PG_anon_exclusive. Sanity check that this holds and especially, that
* no filesystem set PG_mappedtodisk on a page in the swapcache. Sanity
* check after taking the PT lock and making sure that nobody
* concurrently faulted in this page and set PG_anon_exclusive.
*/
BUG_ON(!folio_test_anon(folio) && folio_test_mappedtodisk(folio));
BUG_ON(folio_test_anon(folio) && PageAnonExclusive(page));
/*
* Check under PT lock (to protect against concurrent fork() sharing
* the swap entry concurrently) for certainly exclusive pages.
*/
if (!folio_test_ksm(folio)) {
/*
* Note that pte_swp_exclusive() == false for architectures
* without __HAVE_ARCH_PTE_SWP_EXCLUSIVE.
*/
exclusive = pte_swp_exclusive(vmf->orig_pte);
if (folio != swapcache) {
/*
* We have a fresh page that is not exposed to the
* swapcache -> certainly exclusive.
*/
exclusive = true;
} else if (exclusive && folio_test_writeback(folio) &&
data_race(si->flags & SWP_STABLE_WRITES)) {
/*
* This is tricky: not all swap backends support
* concurrent page modifications while under writeback.
*
* So if we stumble over such a page in the swapcache
* we must not set the page exclusive, otherwise we can
* map it writable without further checks and modify it
* while still under writeback.
*
* For these problematic swap backends, simply drop the
* exclusive marker: this is perfectly fine as we start
* writeback only if we fully unmapped the page and
* there are no unexpected references on the page after
* unmapping succeeded. After fully unmapped, no
* further GUP references (FOLL_GET and FOLL_PIN) can
* appear, so dropping the exclusive marker and mapping
* it only R/O is fine.
*/
exclusive = false;
}
}
/*
* Remove the swap entry and conditionally try to free up the swapcache.
* We're already holding a reference on the page but haven't mapped it
* yet.
*/
swap_free(entry);
if (should_try_to_free_swap(folio, vma, vmf->flags))
folio_free_swap(folio);
inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
pte = mk_pte(page, vma->vm_page_prot);
/*
* Same logic as in do_wp_page(); however, optimize for pages that are
* certainly not shared either because we just allocated them without
* exposing them to the swapcache or because the swap entry indicates
* exclusivity.
*/
if (!folio_test_ksm(folio) &&
(exclusive || folio_ref_count(folio) == 1)) {
if (vmf->flags & FAULT_FLAG_WRITE) {
pte = maybe_mkwrite(pte_mkdirty(pte), vma);
vmf->flags &= ~FAULT_FLAG_WRITE;
ret |= VM_FAULT_WRITE;
}
rmap_flags |= RMAP_EXCLUSIVE;
}
flush_icache_page(vma, page);
if (pte_swp_soft_dirty(vmf->orig_pte))
pte = pte_mksoft_dirty(pte);
if (pte_swp_uffd_wp(vmf->orig_pte)) {
pte = pte_mkuffd_wp(pte);
pte = pte_wrprotect(pte);
}
vmf->orig_pte = pte;
/* ksm created a completely new copy */
if (unlikely(folio != swapcache && swapcache)) {
page_add_new_anon_rmap(page, vma, vmf->address);
folio_add_lru_vma(folio, vma);
} else {
page_add_anon_rmap(page, vma, vmf->address, rmap_flags);
}
VM_BUG_ON(!folio_test_anon(folio) ||
(pte_write(pte) && !PageAnonExclusive(page)));
set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
folio_unlock(folio);
if (folio != swapcache && swapcache) {
/*
* Hold the lock to avoid the swap entry to be reused
* until we take the PT lock for the pte_same() check
* (to avoid false positives from pte_same). For
* further safety release the lock after the swap_free
* so that the swap count won't change under a
* parallel locked swapcache.
*/
folio_unlock(swapcache);
folio_put(swapcache);
}
if (vmf->flags & FAULT_FLAG_WRITE) {
ret |= do_wp_page(vmf);
if (ret & VM_FAULT_ERROR)
ret &= VM_FAULT_ERROR;
goto out;
}
/* No need to invalidate - it was non-present before */
update_mmu_cache(vma, vmf->address, vmf->pte);
unlock:
pte_unmap_unlock(vmf->pte, vmf->ptl);
out:
if (si)
put_swap_device(si);
return ret;
out_nomap:
pte_unmap_unlock(vmf->pte, vmf->ptl);
out_page:
folio_unlock(folio);
out_release:
folio_put(folio);
if (folio != swapcache && swapcache) {
folio_unlock(swapcache);
folio_put(swapcache);
}
if (si)
put_swap_device(si);
return ret;
}
| [[3753, "\t\t\tvmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,\n"], [3754, "\t\t\t\t\tvmf->address, &vmf->ptl);\n"], [3755, "\t\t\tif (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {\n"], [3756, "\t\t\t\tspin_unlock(vmf->ptl);\n"], [3757, "\t\t\t\tgoto out;\n"], [3758, "\t\t\t}\n"], [3759, "\n"], [3760, "\t\t\t/*\n"], [3761, "\t\t\t * Get a page reference while we know the page can't be\n"], [3762, "\t\t\t * freed.\n"], [3763, "\t\t\t */\n"], [3764, "\t\t\tget_page(vmf->page);\n"], [3765, "\t\t\tpte_unmap_unlock(vmf->pte, vmf->ptl);\n"], [3766, "\t\t\tvmf->page->pgmap->ops->migrate_to_ram(vmf);\n"], [3767, "\t\t\tput_page(vmf->page);\n"]] | [[3753, "vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,\n\t\t\t\t\tvmf->address, &vmf->ptl);"], [3755, "if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte)))"], [3756, "spin_unlock(vmf->ptl);"], [3757, "goto out;"], [3758, "\t\t\t}\n"], [3759, "\n"], [3760, "/*\n\t\t\t * Get a page reference while we know the page can't be\n\t\t\t * freed.\n\t\t\t */"], [3764, "get_page(vmf->page);"], [3765, "pte_unmap_unlock(vmf->pte, vmf->ptl);"], [3766, "vmf->page->pgmap->ops->migrate_to_ram(vmf);"], [3767, "put_page(vmf->page);"]] | [
"CVE-2022-3523"
] | [
"CWE-416",
"CWE-119"
] | 32 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"vmf->page->pgmap->ops->migrate_to_ram"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": [
"struct vm_fault",
"struct page",
"struct dev_pagemap_ops"
]
} |
33 | linux | https://github.com/torvalds/linux | kernel/bpf/verifier.c | 179d1c5602997fef5a940c6ddcf31212cbfebd14 | bpf: don't prune branches when a scalar is replaced with a pointer
This could be made safe by passing through a reference to env and checking
for env->allow_ptr_leaks, but it would only work one way and is probably
not worth the hassle - not doing it will not directly lead to program
rejection.
Fixes: f1174f77b50c ("bpf/verifier: rework value tracking")
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> | true | efa1225f12ed754b1dcae072082ed9ca | regsafe | static bool regsafe(struct bpf_reg_state *rold, struct bpf_reg_state *rcur,
struct idpair *idmap)
{
if (!(rold->live & REG_LIVE_READ))
/* explored state didn't use this */
return true;
if (memcmp(rold, rcur, offsetof(struct bpf_reg_state, live)) == 0)
return true;
if (rold->type == NOT_INIT)
/* explored state can't have used this */
return true;
if (rcur->type == NOT_INIT)
return false;
switch (rold->type) {
case SCALAR_VALUE:
if (rcur->type == SCALAR_VALUE) {
/* new val must satisfy old val knowledge */
return range_within(rold, rcur) &&
tnum_in(rold->var_off, rcur->var_off);
} else {
/* if we knew anything about the old value, we're not
* equal, because we can't know anything about the
* scalar value of the pointer in the new value.
*/
return rold->umin_value == 0 &&
rold->umax_value == U64_MAX &&
rold->smin_value == S64_MIN &&
rold->smax_value == S64_MAX &&
tnum_is_unknown(rold->var_off);
}
case PTR_TO_MAP_VALUE:
/* If the new min/max/var_off satisfy the old ones and
* everything else matches, we are OK.
* We don't care about the 'id' value, because nothing
* uses it for PTR_TO_MAP_VALUE (only for ..._OR_NULL)
*/
return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
range_within(rold, rcur) &&
tnum_in(rold->var_off, rcur->var_off);
case PTR_TO_MAP_VALUE_OR_NULL:
/* a PTR_TO_MAP_VALUE could be safe to use as a
* PTR_TO_MAP_VALUE_OR_NULL into the same map.
* However, if the old PTR_TO_MAP_VALUE_OR_NULL then got NULL-
* checked, doing so could have affected others with the same
* id, and we can't check for that because we lost the id when
* we converted to a PTR_TO_MAP_VALUE.
*/
if (rcur->type != PTR_TO_MAP_VALUE_OR_NULL)
return false;
if (memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)))
return false;
/* Check our ids match any regs they're supposed to */
return check_ids(rold->id, rcur->id, idmap);
case PTR_TO_PACKET_META:
case PTR_TO_PACKET:
if (rcur->type != rold->type)
return false;
/* We must have at least as much range as the old ptr
* did, so that any accesses which were safe before are
* still safe. This is true even if old range < old off,
* since someone could have accessed through (ptr - k), or
* even done ptr -= k in a register, to get a safe access.
*/
if (rold->range > rcur->range)
return false;
/* If the offsets don't match, we can't trust our alignment;
* nor can we be sure that we won't fall out of range.
*/
if (rold->off != rcur->off)
return false;
/* id relations must be preserved */
if (rold->id && !check_ids(rold->id, rcur->id, idmap))
return false;
/* new val must satisfy old val knowledge */
return range_within(rold, rcur) &&
tnum_in(rold->var_off, rcur->var_off);
case PTR_TO_CTX:
case CONST_PTR_TO_MAP:
case PTR_TO_STACK:
case PTR_TO_PACKET_END:
/* Only valid matches are exact, which memcmp() above
* would have accepted
*/
default:
/* Don't know what's going on, just say it's not safe */
return false;
}
/* Shouldn't get here; if we do, say it's not safe */
WARN_ON_ONCE(1);
return false;
}
| [[3470, "\t\t\t/* if we knew anything about the old value, we're not\n"], [3471, "\t\t\t * equal, because we can't know anything about the\n"], [3472, "\t\t\t * scalar value of the pointer in the new value.\n"], [3474, "\t\t\treturn rold->umin_value == 0 &&\n"], [3475, "\t\t\t rold->umax_value == U64_MAX &&\n"], [3476, "\t\t\t rold->smin_value == S64_MIN &&\n"], [3477, "\t\t\t rold->smax_value == S64_MAX &&\n"], [3478, "\t\t\t tnum_is_unknown(rold->var_off);\n"]] | [[3470, "/* if we knew anything about the old value, we're not\n\t\t\t * equal, because we can't know anything about the\n\t\t\t * scalar value of the pointer in the new value.\n\t\t\t */"], [3474, "return rold->umin_value == 0 &&\n\t\t\t rold->umax_value == U64_MAX &&\n\t\t\t rold->smin_value == S64_MIN &&\n\t\t\t rold->smax_value == S64_MAX &&\n\t\t\t tnum_is_unknown(rold->var_off);"]] | [
"CVE-2017-17855"
] | [
"CWE-119"
] | 34 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"tnum_is_unknown"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": [
"U64_MAX",
"S64_MIN"
]
} |
34 | linux | https://github.com/torvalds/linux | kernel/bpf/verifier.c | 179d1c5602997fef5a940c6ddcf31212cbfebd14 | bpf: don't prune branches when a scalar is replaced with a pointer
This could be made safe by passing through a reference to env and checking
for env->allow_ptr_leaks, but it would only work one way and is probably
not worth the hassle - not doing it will not directly lead to program
rejection.
Fixes: f1174f77b50c ("bpf/verifier: rework value tracking")
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> | false | 5f6d745208838c9654712c9620472a78 | regsafe | static bool regsafe(struct bpf_reg_state *rold, struct bpf_reg_state *rcur,
struct idpair *idmap)
{
if (!(rold->live & REG_LIVE_READ))
/* explored state didn't use this */
return true;
if (memcmp(rold, rcur, offsetof(struct bpf_reg_state, live)) == 0)
return true;
if (rold->type == NOT_INIT)
/* explored state can't have used this */
return true;
if (rcur->type == NOT_INIT)
return false;
switch (rold->type) {
case SCALAR_VALUE:
if (rcur->type == SCALAR_VALUE) {
/* new val must satisfy old val knowledge */
return range_within(rold, rcur) &&
tnum_in(rold->var_off, rcur->var_off);
} else {
/* We're trying to use a pointer in place of a scalar.
* Even if the scalar was unbounded, this could lead to
* pointer leaks because scalars are allowed to leak
* while pointers are not. We could make this safe in
* special cases if root is calling us, but it's
* probably not worth the hassle.
*/
return false;
}
case PTR_TO_MAP_VALUE:
/* If the new min/max/var_off satisfy the old ones and
* everything else matches, we are OK.
* We don't care about the 'id' value, because nothing
* uses it for PTR_TO_MAP_VALUE (only for ..._OR_NULL)
*/
return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
range_within(rold, rcur) &&
tnum_in(rold->var_off, rcur->var_off);
case PTR_TO_MAP_VALUE_OR_NULL:
/* a PTR_TO_MAP_VALUE could be safe to use as a
* PTR_TO_MAP_VALUE_OR_NULL into the same map.
* However, if the old PTR_TO_MAP_VALUE_OR_NULL then got NULL-
* checked, doing so could have affected others with the same
* id, and we can't check for that because we lost the id when
* we converted to a PTR_TO_MAP_VALUE.
*/
if (rcur->type != PTR_TO_MAP_VALUE_OR_NULL)
return false;
if (memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)))
return false;
/* Check our ids match any regs they're supposed to */
return check_ids(rold->id, rcur->id, idmap);
case PTR_TO_PACKET_META:
case PTR_TO_PACKET:
if (rcur->type != rold->type)
return false;
/* We must have at least as much range as the old ptr
* did, so that any accesses which were safe before are
* still safe. This is true even if old range < old off,
* since someone could have accessed through (ptr - k), or
* even done ptr -= k in a register, to get a safe access.
*/
if (rold->range > rcur->range)
return false;
/* If the offsets don't match, we can't trust our alignment;
* nor can we be sure that we won't fall out of range.
*/
if (rold->off != rcur->off)
return false;
/* id relations must be preserved */
if (rold->id && !check_ids(rold->id, rcur->id, idmap))
return false;
/* new val must satisfy old val knowledge */
return range_within(rold, rcur) &&
tnum_in(rold->var_off, rcur->var_off);
case PTR_TO_CTX:
case CONST_PTR_TO_MAP:
case PTR_TO_STACK:
case PTR_TO_PACKET_END:
/* Only valid matches are exact, which memcmp() above
* would have accepted
*/
default:
/* Don't know what's going on, just say it's not safe */
return false;
}
/* Shouldn't get here; if we do, say it's not safe */
WARN_ON_ONCE(1);
return false;
}
| [[3470, "\t\t\t/* We're trying to use a pointer in place of a scalar.\n"], [3471, "\t\t\t * Even if the scalar was unbounded, this could lead to\n"], [3472, "\t\t\t * pointer leaks because scalars are allowed to leak\n"], [3473, "\t\t\t * while pointers are not. We could make this safe in\n"], [3474, "\t\t\t * special cases if root is calling us, but it's\n"], [3475, "\t\t\t * probably not worth the hassle.\n"], [3477, "\t\t\treturn false;\n"]] | [[3470, "/* We're trying to use a pointer in place of a scalar.\n\t\t\t * Even if the scalar was unbounded, this could lead to\n\t\t\t * pointer leaks because scalars are allowed to leak\n\t\t\t * while pointers are not. We could make this safe in\n\t\t\t * special cases if root is calling us, but it's\n\t\t\t * probably not worth the hassle.\n\t\t\t */"], [3477, "return false;"]] | [
"CVE-2017-17855"
] | [
"CWE-119"
] | 34 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"tnum_is_unknown"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": [
"U64_MAX",
"S64_MIN"
]
} |
35 | linux | https://github.com/torvalds/linux | drivers/net/ethernet/mediatek/mtk_ppe.c | 17a5f6a78dc7b8db385de346092d7d9f9dc24df6 | net: ethernet: mtk_eth_soc: use after free in __mtk_ppe_check_skb()
The __mtk_foe_entry_clear() function frees "entry" so we have to use
the _safe() version of hlist_for_each_entry() to prevent a use after
free.
Fixes: 33fc42de3327 ("net: ethernet: mtk_eth_soc: support creating mac address based offload entries")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | true | 854ecd98b68ec4fcd7d7f1340b7fa1f0 | __mtk_ppe_check_skb | void __mtk_ppe_check_skb(struct mtk_ppe *ppe, struct sk_buff *skb, u16 hash)
{
struct hlist_head *head = &ppe->foe_flow[hash / 2];
struct mtk_foe_entry *hwe = &ppe->foe_table[hash];
struct mtk_flow_entry *entry;
struct mtk_foe_bridge key = {};
struct ethhdr *eh;
bool found = false;
u8 *tag;
spin_lock_bh(&ppe_lock);
if (FIELD_GET(MTK_FOE_IB1_STATE, hwe->ib1) == MTK_FOE_STATE_BIND)
goto out;
hlist_for_each_entry(entry, head, list) {
if (entry->type == MTK_FLOW_TYPE_L2_SUBFLOW) {
if (unlikely(FIELD_GET(MTK_FOE_IB1_STATE, hwe->ib1) ==
MTK_FOE_STATE_BIND))
continue;
entry->hash = 0xffff;
__mtk_foe_entry_clear(ppe, entry);
continue;
}
if (found || !mtk_flow_entry_match(entry, hwe)) {
if (entry->hash != 0xffff)
entry->hash = 0xffff;
continue;
}
entry->hash = hash;
__mtk_foe_entry_commit(ppe, &entry->data, hash);
found = true;
}
if (found)
goto out;
eh = eth_hdr(skb);
ether_addr_copy(key.dest_mac, eh->h_dest);
ether_addr_copy(key.src_mac, eh->h_source);
tag = skb->data - 2;
key.vlan = 0;
switch (skb->protocol) {
#if IS_ENABLED(CONFIG_NET_DSA)
case htons(ETH_P_XDSA):
if (!netdev_uses_dsa(skb->dev) ||
skb->dev->dsa_ptr->tag_ops->proto != DSA_TAG_PROTO_MTK)
goto out;
tag += 4;
if (get_unaligned_be16(tag) != ETH_P_8021Q)
break;
fallthrough;
#endif
case htons(ETH_P_8021Q):
key.vlan = get_unaligned_be16(tag + 2) & VLAN_VID_MASK;
break;
default:
break;
}
entry = rhashtable_lookup_fast(&ppe->l2_flows, &key, mtk_flow_l2_ht_params);
if (!entry)
goto out;
mtk_foe_entry_commit_subflow(ppe, entry, hash);
out:
spin_unlock_bh(&ppe_lock);
}
| [[612, "\thlist_for_each_entry(entry, head, list) {\n"]] | [[612, "\thlist_for_each_entry(entry, head, list) {\n"]] | [
"CVE-2022-3636"
] | [
"CWE-416",
"CWE-119"
] | 36 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"__mtk_foe_entry_clear"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
36 | linux | https://github.com/torvalds/linux | drivers/net/ethernet/mediatek/mtk_ppe.c | 17a5f6a78dc7b8db385de346092d7d9f9dc24df6 | net: ethernet: mtk_eth_soc: use after free in __mtk_ppe_check_skb()
The __mtk_foe_entry_clear() function frees "entry" so we have to use
the _safe() version of hlist_for_each_entry() to prevent a use after
free.
Fixes: 33fc42de3327 ("net: ethernet: mtk_eth_soc: support creating mac address based offload entries")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | false | 0eab16ac49e9ae77017812443475b94a | __mtk_ppe_check_skb | void __mtk_ppe_check_skb(struct mtk_ppe *ppe, struct sk_buff *skb, u16 hash)
{
struct hlist_head *head = &ppe->foe_flow[hash / 2];
struct mtk_foe_entry *hwe = &ppe->foe_table[hash];
struct mtk_flow_entry *entry;
struct mtk_foe_bridge key = {};
struct hlist_node *n;
struct ethhdr *eh;
bool found = false;
u8 *tag;
spin_lock_bh(&ppe_lock);
if (FIELD_GET(MTK_FOE_IB1_STATE, hwe->ib1) == MTK_FOE_STATE_BIND)
goto out;
hlist_for_each_entry_safe(entry, n, head, list) {
if (entry->type == MTK_FLOW_TYPE_L2_SUBFLOW) {
if (unlikely(FIELD_GET(MTK_FOE_IB1_STATE, hwe->ib1) ==
MTK_FOE_STATE_BIND))
continue;
entry->hash = 0xffff;
__mtk_foe_entry_clear(ppe, entry);
continue;
}
if (found || !mtk_flow_entry_match(entry, hwe)) {
if (entry->hash != 0xffff)
entry->hash = 0xffff;
continue;
}
entry->hash = hash;
__mtk_foe_entry_commit(ppe, &entry->data, hash);
found = true;
}
if (found)
goto out;
eh = eth_hdr(skb);
ether_addr_copy(key.dest_mac, eh->h_dest);
ether_addr_copy(key.src_mac, eh->h_source);
tag = skb->data - 2;
key.vlan = 0;
switch (skb->protocol) {
#if IS_ENABLED(CONFIG_NET_DSA)
case htons(ETH_P_XDSA):
if (!netdev_uses_dsa(skb->dev) ||
skb->dev->dsa_ptr->tag_ops->proto != DSA_TAG_PROTO_MTK)
goto out;
tag += 4;
if (get_unaligned_be16(tag) != ETH_P_8021Q)
break;
fallthrough;
#endif
case htons(ETH_P_8021Q):
key.vlan = get_unaligned_be16(tag + 2) & VLAN_VID_MASK;
break;
default:
break;
}
entry = rhashtable_lookup_fast(&ppe->l2_flows, &key, mtk_flow_l2_ht_params);
if (!entry)
goto out;
mtk_foe_entry_commit_subflow(ppe, entry, hash);
out:
spin_unlock_bh(&ppe_lock);
}
| [[603, "\tstruct hlist_node *n;\n"], [613, "\thlist_for_each_entry_safe(entry, n, head, list) {\n"]] | [[603, "struct hlist_node *n;"], [613, "\thlist_for_each_entry_safe(entry, n, head, list) {\n"]] | [
"CVE-2022-3636"
] | [
"CWE-416",
"CWE-119"
] | 36 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [
"__mtk_foe_entry_clear"
],
"Function Argument": [],
"Globals": [],
"Type Execution Declaration": []
} |
37 | linux | https://github.com/torvalds/linux | drivers/mtd/spi-nor/cadence-quadspi.c | 193e87143c290ec16838f5368adc0e0bc94eb931 | mtd: spi-nor: Off by one in cqspi_setup_flash()
There are CQSPI_MAX_CHIPSELECT elements in the ->f_pdata array so the >
should be >=.
Fixes: 140623410536 ('mtd: spi-nor: Add driver for Cadence Quad SPI Flash Controller')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Marek Vasut <marex@denx.de>
Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com> | true | 0b477c06c2026d64a183015547046464 | cqspi_setup_flash | static int cqspi_setup_flash(struct cqspi_st *cqspi, struct device_node *np)
{
struct platform_device *pdev = cqspi->pdev;
struct device *dev = &pdev->dev;
struct cqspi_flash_pdata *f_pdata;
struct spi_nor *nor;
struct mtd_info *mtd;
unsigned int cs;
int i, ret;
/* Get flash device data */
for_each_available_child_of_node(dev->of_node, np) {
if (of_property_read_u32(np, "reg", &cs)) {
dev_err(dev, "Couldn't determine chip select.\n");
goto err;
}
if (cs > CQSPI_MAX_CHIPSELECT) {
dev_err(dev, "Chip select %d out of range.\n", cs);
goto err;
}
f_pdata = &cqspi->f_pdata[cs];
f_pdata->cqspi = cqspi;
f_pdata->cs = cs;
ret = cqspi_of_get_flash_pdata(pdev, f_pdata, np);
if (ret)
goto err;
nor = &f_pdata->nor;
mtd = &nor->mtd;
mtd->priv = nor;
nor->dev = dev;
spi_nor_set_flash_node(nor, np);
nor->priv = f_pdata;
nor->read_reg = cqspi_read_reg;
nor->write_reg = cqspi_write_reg;
nor->read = cqspi_read;
nor->write = cqspi_write;
nor->erase = cqspi_erase;
nor->prepare = cqspi_prep;
nor->unprepare = cqspi_unprep;
mtd->name = devm_kasprintf(dev, GFP_KERNEL, "%s.%d",
dev_name(dev), cs);
if (!mtd->name) {
ret = -ENOMEM;
goto err;
}
ret = spi_nor_scan(nor, NULL, SPI_NOR_QUAD);
if (ret)
goto err;
ret = mtd_device_register(mtd, NULL, 0);
if (ret)
goto err;
f_pdata->registered = true;
}
return 0;
err:
for (i = 0; i < CQSPI_MAX_CHIPSELECT; i++)
if (cqspi->f_pdata[i].registered)
mtd_device_unregister(&cqspi->f_pdata[i].nor.mtd);
return ret;
}
| [[1085, "\t\tif (cs > CQSPI_MAX_CHIPSELECT) {\n"]] | [[1085, "if (cs > CQSPI_MAX_CHIPSELECT)"]] | [
"CVE-2016-10764"
] | [
"CWE-119"
] | 38 | {
"Execution Environment": [],
"Explanation": null,
"External Function": [],
"Function Argument": [],
"Globals": [
"CQSPI_MAX_CHIPSELECT"
],
"Type Execution Declaration": [
"struct cqspi_st"
]
} |
# Dataset Card for SecVulEval
SecVulEval is a collection of real-world C/C++ vulnerabilities.
## Dataset Details
### Dataset Description
The dataset is curated by collecting C/C++ vulnerabilities from the NVD. It features statement-level vulnerability information, contextual information for vulnerable functions (`is_vulnerable=True`), and other metadata such as CVE and CWE identifiers and commit information. The dataset contains both vulnerable and non-vulnerable function samples.
### Dataset Sources
The vulnerabilities (CVEs) are collected from the NVD (https://nvd.nist.gov), and the corresponding patches are then collected from the respective git repositories.
## Uses
The dataset comprises both vulnerable (43.23%) and non-vulnerable (56.77%) functions, for a total of 25,440 functions. This large collection makes it suitable for training vulnerability detection models. The statement-level information, together with the contextual information, enables context-aware detection at a finer-grained, statement level. The dataset can also be used to evaluate C/C++ vulnerability detection models.
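As a quick usage illustration, the following is a minimal sketch of loading the dataset with the Hugging Face `datasets` library and checking the label balance. The Hub identifier and the single `train` split below are placeholders/assumptions, not confirmed by this card; adjust them to match how the dataset is actually published.

```python
# Minimal sketch -- "<org>/SecVulEval" is a placeholder Hub id and the
# single "train" split is an assumption; adjust both as needed.
from datasets import load_dataset

ds = load_dataset("<org>/SecVulEval", split="train")

# is_vulnerable is a boolean column, so it can be used directly as a filter.
vulnerable = ds.filter(lambda row: row["is_vulnerable"])
print(f"total functions:      {len(ds)}")
print(f"vulnerable functions: {len(vulnerable)} "
      f"({100 * len(vulnerable) / len(ds):.2f}%)")
```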
## Dataset Structure
The dataset has 15 different fields.
- The `project_url` column has 735 different values while the `project` column has 707 unique values. This is because for `project == "Android"` there are multiple different repositories.
- The `changed_lines` and `changed_statements` columns include the changes made in the patch as a list of (line, code) pairs. Vulnerable functions include the deleted lines/statements, and the non-vulnerable functions have the added lines/statements.
- Some functions/vulnerabilities can be assigned to more than one CVE/CWE, which is why `cve_list` and `cwe_list` are given as lists, although in most cases there is only one CVE and CWE id.
- The `fixed_func_idx` column contains the `idx` number (first field in the dataset) of the corresponding fixed patch of a vulnerable function. This makes it easy to pair vulnerable functions with their fixing code (see the pairing sketch after this list). For non-vulnerable code a "fixed" version does not apply, so `fixed_func_idx` simply points to the function itself.
- The `context` field includes contextual information for vulnerable functions according to the five categories discussed in the paper. It is stored as lists of symbols plus an explanation generated by the LLM.
Other fields are self-explanatory.
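Continuing from the loading example above, a sketch along these lines should work for pairing each vulnerable function with its fix via `fixed_func_idx` and inspecting its context. It assumes `idx` values are unique, that at least one vulnerable row has its fixed counterpart present, and that list/dict fields may arrive as JSON-encoded strings depending on the export, hence the defensive decoding. The context category names in the final comment are taken from the rows shown in the dataset viewer.

```python
# Minimal sketch -- assumes `ds` was loaded as in the previous snippet.
import json

def as_obj(value):
    # Some exports store list/dict fields as JSON-encoded strings; decode if so.
    return json.loads(value) if isinstance(value, str) else value

# Index every row by its `idx` so fixes can be looked up in O(1).
by_idx = {row["idx"]: row for row in ds}

# Pair each vulnerable function with its fixed counterpart.
pairs = [
    (row, by_idx[row["fixed_func_idx"]])
    for row in ds
    if row["is_vulnerable"] and row["fixed_func_idx"] in by_idx
]

vuln, fixed = pairs[0]
print(vuln["func_name"], vuln["cve_list"], vuln["cwe_list"])
print("deleted statements:", as_obj(vuln["changed_statements"])[:2])
print("added statements:  ", as_obj(fixed["changed_statements"])[:2])

# Non-empty context categories (e.g. "External Function", "Globals").
ctx = as_obj(vuln["context"])
print("context categories:", [k for k, v in ctx.items() if v])
```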