# Kernels API Reference
## Main Functions
### get_kernel[[kernels.get_kernel]]
#### kernels.get_kernel[[kernels.get_kernel]]
[Source](https://github.com/huggingface/kernels/blob/vr_496/kernels/src/kernels/utils.py#L301)
Load a kernel from the kernel hub.
This function downloads a kernel to the local Hugging Face Hub cache directory (if it was not downloaded before)
and then loads the kernel.
Example:
```python
import torch
from kernels import get_kernel
activation = get_kernel("kernels-community/relu", version=1)
x = torch.randn(10, 20, device="cuda")
out = torch.empty_like(x)
result = activation.relu(out, x)
```
**Parameters:**
repo_id (`str`) : The Hub repository containing the kernel.
revision (`str`, *optional*, defaults to `"main"`) : The specific revision (branch, tag, or commit) to download. Cannot be used together with `version`.
version (`int`, *optional*) : The kernel version to download. Cannot be used together with `revision`.
backend (`str`, *optional*) : The backend to load the kernel for. Can only be `cpu` or the backend that Torch is compiled for. The backend will be detected automatically if not provided.
user_agent (`Union[str, dict]`, *optional*) : The `user_agent` info to pass to `snapshot_download()` for internal telemetry.
**Returns:**
``ModuleType``
The imported kernel module.
### get_local_kernel[[kernels.get_local_kernel]]
#### kernels.get_local_kernel[[kernels.get_local_kernel]]
[Source](https://github.com/huggingface/kernels/blob/vr_496/kernels/src/kernels/utils.py#L360)
Import a kernel from a local kernel repository path.
**Parameters:**
repo_path (`Path`) : The local path to the kernel repository.
package_name (`str`) : The name of the package to import from the repository.
backend (`str`, *optional*) : The backend to load the kernel for. Can only be `cpu` or the backend that Torch is compiled for. The backend will be detected automatically if not provided.
**Returns:**
``ModuleType``
The imported kernel module.
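Example (the path and package name below are illustrative; point them at your own local checkout):

```python
from pathlib import Path

from kernels import get_local_kernel

# Import a kernel from a local repository checkout instead of the Hub.
# The path and package name are placeholders for your local setup.
activation = get_local_kernel(
    Path("./activation"),
    package_name="activation",
)
```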
### has_kernel[[kernels.has_kernel]]
#### kernels.has_kernel[[kernels.has_kernel]]
[Source](https://github.com/huggingface/kernels/blob/vr_496/kernels/src/kernels/utils.py#L396)
Check whether a kernel build exists for the current environment (Torch version and compute framework).
**Parameters:**
repo_id (`str`) : The Hub repository containing the kernel.
revision (`str`, *optional*, defaults to `"main"`) : The specific revision (branch, tag, or commit) to download. Cannot be used together with `version`.
version (`int`, *optional*) : The kernel version to download. Cannot be used together with `revision`.
backend (`str`, *optional*) : The backend to load the kernel for. Can only be `cpu` or the backend that Torch is compiled for. The backend will be detected automatically if not provided.
**Returns:**
``bool``
`True` if a kernel is available for the current environment.
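A typical use is probing for a compatible build before loading, falling back to an eager implementation otherwise. This sketch reuses the repository from the `get_kernel` example:

```python
from kernels import has_kernel

# Only returns True when a build matches the current Torch version
# and compute framework; no kernel is downloaded or imported here.
if has_kernel("kernels-community/relu", version=1):
    print("optimized kernel build available")
else:
    print("no matching build; falling back to eager PyTorch")
```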
### get_loaded_kernels[[kernels.get_loaded_kernels]]
#### kernels.get_loaded_kernels[[kernels.get_loaded_kernels]]
[Source](https://github.com/huggingface/kernels/blob/vr_496/kernels/src/kernels/utils.py#L54)
Return a snapshot of every kernel that has been loaded into the current process.
Each entry is a `kernels.utils.LoadedKernel` dataclass with fields:
- `kernel_id` (`str`): unique identifier used as the `sys.modules` key
for this variant (either `metadata.id` or a hash-suffixed module name).
- `module` (`ModuleType`): the imported kernel module.
- `module_name` (`str`): the kernel's module name.
- `repo_infos` (`kernels.utils.RepoInfos | None`): populated only for
kernels loaded via `get_kernel`. Loaders that work from a local path
(`get_local_kernel`) or a lockfile (`get_locked_kernel`, `load_kernel`)
leave this as `None`.
`RepoInfos` has `repo_id`, `revision`, and `backend` fields. `backend`
reflects the value passed by the caller — it is `None` when the caller
relied on backend auto-detection.
The returned list is a new list; mutating it does not affect the registry.
> [!NOTE]
> These fields may be renamed or changed in a future release.
Example:
```python
from kernels import get_kernel, get_loaded_kernels
get_kernel("kernels-community/activation", version=1)
for loaded in get_loaded_kernels():
    print(loaded.module_name, loaded.repo_infos)
```
**Returns:**
``list[LoadedKernel]``
One entry per distinct kernel variant path loaded in this process.
## Loading locked kernels
### load_kernel[[kernels.load_kernel]]
#### kernels.load_kernel[[kernels.load_kernel]]
[Source](https://github.com/huggingface/kernels/blob/vr_496/kernels/src/kernels/utils.py#L441)
Get a pre-downloaded, locked kernel.
If `lockfile` is not specified, the lockfile will be loaded from the caller's package metadata.
**Parameters:**
repo_id (`str`) : The Hub repository containing the kernel.
lockfile (`Path`, *optional*) : Path to the lockfile. If not provided, the lockfile will be loaded from the caller's package metadata.
backend (`str`, *optional*) : The backend to load the kernel for. Can only be `cpu` or the backend that Torch is compiled for. The backend will be detected automatically if not provided.
**Returns:**
``ModuleType``
The imported kernel module.
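Example: since the kernel must already be in the local cache, a typical call needs only the repository id. The explicit `lockfile` path in the comment is illustrative:

```python
from kernels import load_kernel

# By default the lockfile is resolved from the caller's package
# metadata; pass lockfile=Path(...) to point at one explicitly.
activation = load_kernel("kernels-community/relu")
```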
### get_locked_kernel[[kernels.get_locked_kernel]]
#### kernels.get_locked_kernel[[kernels.get_locked_kernel]]
[Source](https://github.com/huggingface/kernels/blob/vr_496/kernels/src/kernels/utils.py#L514)
Get a kernel using a lock file.
**Parameters:**
repo_id (`str`) : The Hub repository containing the kernel.
local_files_only (`bool`, *optional*, defaults to `False`) : Whether to only use local files and not download from the Hub.
**Returns:**
``ModuleType``
The imported kernel module.
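Example: loading the revision pinned in the project's lock file. With `local_files_only=True`, nothing is fetched from the Hub, so the kernel must already be cached:

```python
from kernels import get_locked_kernel

# Resolves the revision pinned in the lock file; local_files_only=True
# skips the Hub and requires the kernel to be in the local cache.
activation = get_locked_kernel("kernels-community/relu", local_files_only=True)
```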
