```restructuredtext
.. _toolchain_xtools:

Crosstool-NG (Deprecated)
#########################

.. warning::

   The ``xtools`` toolchain variant is deprecated. The
   :ref:`cross-compile toolchain variant <other_x_compilers>` should be used
   when using a custom toolchain built with Crosstool-NG.

You can build toolchains from source code using crosstool-NG.

#. Follow the steps on the crosstool-NG website to `prepare your host
   <path_to_url>`_.

#. Follow the `Zephyr SDK with Crosstool NG instructions
   <path_to_url>`_ to build your toolchain. Repeat as necessary to build
   toolchains for multiple target architectures.

   You will need to clone the ``sdk-ng`` repo and run the following command:

   .. code-block:: console

      ./go.sh <arch>

   .. note::

      Currently, only i586 and Arm toolchain builds are verified.

#. :ref:`Set these environment variables <env_vars>`:

   - Set :envvar:`ZEPHYR_TOOLCHAIN_VARIANT` to ``xtools``.
   - Set :envvar:`XTOOLS_TOOLCHAIN_PATH` to the toolchain build directory.

#. To check that you have set these variables correctly in your current
   environment, follow these example shell sessions (the
   :envvar:`XTOOLS_TOOLCHAIN_PATH` values may be different on your system):

   .. code-block:: console

      # Linux, macOS:
      $ echo $ZEPHYR_TOOLCHAIN_VARIANT
      xtools
      $ echo $XTOOLS_TOOLCHAIN_PATH
      /Volumes/CrossToolNGNew/build/output/

.. _crosstool-ng site: path_to_url
```
```restructuredtext
.. _other_x_compilers:

Other Cross Compilers
#####################

This toolchain variant is borrowed from the Linux kernel build system's
mechanism of using a ``CROSS_COMPILE`` environment variable to set up a
GNU-based cross toolchain.

Examples of such "other cross compilers" are cross toolchains that your Linux
distribution packaged, that you compiled on your own, or that you downloaded
from the net. Unlike toolchains specifically listed in
:ref:`toolchains`, the Zephyr build system may not have been
tested with them, and doesn't officially support them. (Nonetheless, the
toolchain set-up mechanism itself is supported.)

Follow these steps to use one of these toolchains.

#. Install a cross compiler suitable for your host and target systems.

   For example, you might install the ``gcc-arm-none-eabi`` package on
   Debian-based Linux systems, or ``arm-none-eabi-newlib`` on Fedora or Red
   Hat:

   .. code-block:: console

      # On Debian or Ubuntu
      sudo apt-get install gcc-arm-none-eabi
      # On Fedora or Red Hat
      sudo dnf install arm-none-eabi-newlib

#. :ref:`Set these environment variables <env_vars>`:

   - Set :envvar:`ZEPHYR_TOOLCHAIN_VARIANT` to ``cross-compile``.
   - Set ``CROSS_COMPILE`` to the common path prefix which your
     toolchain's binaries have, e.g. the path to the directory containing the
     compiler binaries plus the target triplet and trailing dash.

#. To check that you have set these variables correctly in your current
   environment, follow these example shell sessions (the
   ``CROSS_COMPILE`` value may be different on your system):

   .. code-block:: console

      # Linux, macOS:
      $ echo $ZEPHYR_TOOLCHAIN_VARIANT
      cross-compile
      $ echo $CROSS_COMPILE
      /usr/bin/arm-none-eabi-

You can also set ``CROSS_COMPILE`` as a CMake variable.

When using this option, all of your toolchain binaries must reside in the same
directory and have a common file name prefix. The ``CROSS_COMPILE`` variable
is set to the directory concatenated with the file name prefix. In the Debian
example above, the ``gcc-arm-none-eabi`` package installs binaries such as
``arm-none-eabi-gcc`` and ``arm-none-eabi-ld`` in directory ``/usr/bin/``, so
the common prefix is ``/usr/bin/arm-none-eabi-`` (including the trailing dash,
``-``). If your toolchain is installed in ``/opt/mytoolchain/bin`` with binary
names based on target triplet ``myarch-none-elf``, ``CROSS_COMPILE`` would be
set to ``/opt/mytoolchain/bin/myarch-none-elf-``.
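The prefix construction described above can be sketched in shell; the ``gcc_path`` value below is an assumed example location, not a requirement:

```shell
# Illustrative sketch: derive the CROSS_COMPILE prefix from the full path
# of the cross gcc binary by stripping the trailing "gcc".
# The gcc_path value is an assumed example, not a required location.
gcc_path=/usr/bin/arm-none-eabi-gcc
cross_compile="${gcc_path%gcc}"
echo "$cross_compile"
```

All sibling tools (``ld``, ``objcopy``, etc.) are then found by appending their names to this prefix.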
```
```restructuredtext
.. _toolchain_designware_arc_mwdt:

DesignWare ARC MetaWare Development Toolkit (MWDT)
##################################################

#. You need to have `ARC MWDT <path_to_url>`_ installed on
   your host.

#. You need to have :ref:`Zephyr SDK <toolchain_zephyr_sdk>` installed on your host.

   .. note::

      The Zephyr SDK is used as a source of tools like the device tree compiler
      (DTC), QEMU, etc. Even though the ARC MWDT toolchain is used for the
      Zephyr RTOS build, the GNU preprocessor and GNU objcopy might still be
      used for some steps, such as device tree preprocessing and ``.bin`` file
      generation. The Zephyr SDK is used as the source of these ARC GNU tools
      as well.

      To set up the ARC GNU toolchain, please use the SDK Bundle (Full or
      Minimal) instead of manually installing separate tarballs. It installs
      and registers the toolchain and host tools in the system, which allows
      you to avoid toolchain-related issues while building Zephyr.

#. :ref:`Set these environment variables <env_vars>`:

   - Set :envvar:`ZEPHYR_TOOLCHAIN_VARIANT` to ``arcmwdt``.
   - Set :envvar:`ARCMWDT_TOOLCHAIN_PATH` to the toolchain installation
     directory. The MWDT installation provides :envvar:`METAWARE_ROOT`, so
     simply set :envvar:`ARCMWDT_TOOLCHAIN_PATH` to ``$METAWARE_ROOT/../``
     (Linux) or ``%METAWARE_ROOT%\..\`` (Windows).

   .. tip::

      If you have only one ARC MWDT toolchain version installed on your machine
      you may skip setting :envvar:`ARCMWDT_TOOLCHAIN_PATH` - it will be
      detected automatically.

#. To check that you have set these variables correctly in your current
   environment, follow these example shell sessions (the
   :envvar:`ARCMWDT_TOOLCHAIN_PATH` values may be different on your system):

   .. code-block:: console

      # Linux:
      $ echo $ZEPHYR_TOOLCHAIN_VARIANT
      arcmwdt
      $ echo $ARCMWDT_TOOLCHAIN_PATH
      /home/you/ARC/MWDT_2023.03/

      # Windows:
      > echo %ZEPHYR_TOOLCHAIN_VARIANT%
      arcmwdt
      > echo %ARCMWDT_TOOLCHAIN_PATH%
      C:\ARC\MWDT_2023.03\
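The parent-directory relationship described above can be sketched in shell; the ``METAWARE_ROOT`` value is an assumed example path:

```shell
# Illustrative sketch: ARCMWDT_TOOLCHAIN_PATH is the parent directory of
# METAWARE_ROOT. The METAWARE_ROOT value here is an assumed example.
METAWARE_ROOT=/home/you/ARC/MWDT_2023.03/MetaWare
ARCMWDT_TOOLCHAIN_PATH="$(dirname "$METAWARE_ROOT")"
export ARCMWDT_TOOLCHAIN_PATH
echo "$ARCMWDT_TOOLCHAIN_PATH"
```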
```
```restructuredtext
.. _api_overview:

API Overview
############

The table lists Zephyr's APIs and information about them, including their
current :ref:`stability level <api_lifecycle>`. More details about API changes
between major releases are available in the :ref:`zephyr_release_notes`.

The version column uses `semantic versioning <path_to_url>`_, and has the
following expectations:

* Major version zero (0.y.z) is for initial development. Anything MAY
  change at any time. The public API SHOULD NOT be considered stable.

* If the minor version is at most one (0.1.z), the API is considered
  :ref:`experimental <api_lifecycle_experimental>`.

* If the minor version is larger than one (0.y.z | y > 1), the API is
  considered :ref:`unstable <api_lifecycle_unstable>`.

* Version 1.0.0 defines the public API. The way in which the version number
  is incremented after this release is dependent on this public API and how it
  changes.

* APIs with major version equal to or larger than one (x.y.z | x >= 1) are
  considered :ref:`stable <api_lifecycle_stable>`.

* All existing stable APIs in Zephyr start at version 1.0.0.

* Patch version Z (x.y.Z | x > 0) MUST be incremented if only backwards
  compatible bug fixes are introduced. A bug fix is defined as an internal
  change that fixes incorrect behavior.

* Minor version Y (x.Y.z | x > 0) MUST be incremented if new, backwards
  compatible functionality is introduced to the public API. It MUST be
  incremented if any public API functionality is marked as deprecated. It MAY
  be incremented if substantial new functionality or improvements are
  introduced within the private code. It MAY include patch level changes.
  The patch version MUST be reset to 0 when the minor version is incremented.

* Major version X (X.y.z | X > 0) MUST be incremented if a
  compatibility-breaking change was made to the API.

.. note::

   Versions for existing APIs are initially set based on the current state of
   the APIs:

   - 0.1.0 denotes an :ref:`experimental <api_lifecycle_experimental>` API,
   - 0.8.0 denotes an :ref:`unstable <api_lifecycle_unstable>` API,
   - and 1.0.0 indicates a :ref:`stable <api_lifecycle_stable>` API.

   Future changes to APIs will require adapting the version following the
   guidelines above.

.. api-overview-table::
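The version-to-lifecycle mapping above can be sketched as a small shell helper (illustrative only; not part of the Zephyr build system):

```shell
# Illustrative sketch: classify an x.y.z API version into its lifecycle
# stage using the rules above (major >= 1 -> stable; 0.y with y <= 1 ->
# experimental; 0.y with y > 1 -> unstable).
api_stage() {
  major=${1%%.*}        # text before the first dot
  rest=${1#*.}
  minor=${rest%%.*}     # text between the first and second dots
  if [ "$major" -ge 1 ]; then
    echo stable
  elif [ "$minor" -le 1 ]; then
    echo experimental
  else
    echo unstable
  fi
}

api_stage 0.1.0   # experimental
api_stage 0.8.0   # unstable
api_stage 1.0.0   # stable
```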
```
```restructuredtext
.. _api_status_and_guidelines:

API Status and Guidelines
#########################

.. toctree::
   :maxdepth: 1

   overview.rst
   api_lifecycle.rst
   design_guidelines.rst
   terminology.rst
```
```restructuredtext
.. _toolchain_zephyr_sdk:

Zephyr SDK
##########

The Zephyr Software Development Kit (SDK) contains toolchains for each of
Zephyr's supported architectures. It also includes additional host tools, such
as custom QEMU and OpenOCD.

Use of the Zephyr SDK is highly recommended and may even be required under
certain conditions (for example, running tests in QEMU for some architectures).

Supported architectures
***********************

The Zephyr SDK supports the following target architectures:

* ARC (32-bit and 64-bit; ARCv1, ARCv2, ARCv3)
* ARM (32-bit and 64-bit; ARMv6, ARMv7, ARMv8; A/R/M Profiles)
* MIPS (32-bit and 64-bit)
* Nios II
* RISC-V (32-bit and 64-bit; RV32I, RV32E, RV64I)
* x86 (32-bit and 64-bit)
* Xtensa

.. _toolchain_zephyr_sdk_bundle_variables:

Installation bundle and variables
*********************************

The Zephyr SDK bundle supports all major operating systems (Linux, macOS and
Windows) and is delivered as a compressed file.

The installation consists of extracting the file and running the included setup
script. Additional OS-specific instructions are described in the sections below.

If no toolchain is selected, the build system looks for the Zephyr SDK and uses
the toolchain from there. You can enforce this by setting the environment
variable :envvar:`ZEPHYR_TOOLCHAIN_VARIANT` to ``zephyr``.

If you install the Zephyr SDK outside any of the default locations (listed in
the operating system specific instructions below) and you want automatic discovery
of the Zephyr SDK, then you must register the Zephyr SDK in the CMake package registry
by running the setup script. If you decide not to register the Zephyr SDK in the CMake registry,
then :envvar:`ZEPHYR_SDK_INSTALL_DIR` can be used to point to the Zephyr SDK installation
directory.

You can also set :envvar:`ZEPHYR_SDK_INSTALL_DIR` to point to a directory
containing multiple Zephyr SDKs, allowing for automatic toolchain selection. For
example, you can set ``ZEPHYR_SDK_INSTALL_DIR`` to ``/company/tools``, where the
``company/tools`` folder contains the following subfolders:

* ``/company/tools/zephyr-sdk-0.13.2``
* ``/company/tools/zephyr-sdk-a.b.c``
* ``/company/tools/zephyr-sdk-x.y.z``

This allows the Zephyr build system to choose the correct version of the SDK,
while allowing multiple Zephyr SDKs to be grouped together at a specific path.
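As a rough sketch of this kind of version-based selection (illustrative only; the directory names are assumed examples and this is not the build system's actual implementation):

```shell
# Illustrative sketch: pick the newest zephyr-sdk-<version> directory
# under a shared tools directory, using version-aware sorting.
# The directory names below are assumed examples.
sdk_root="$(mktemp -d)"
mkdir -p "$sdk_root/zephyr-sdk-0.13.2" "$sdk_root/zephyr-sdk-0.16.1"
newest="$(ls -d "$sdk_root"/zephyr-sdk-* | sort -V | tail -n 1)"
basename "$newest"
```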
.. _toolchain_zephyr_sdk_compatibility:

Zephyr SDK version compatibility
********************************

In general, the Zephyr SDK version referenced in this page should be considered
the recommended version for the corresponding Zephyr version.

For the full list of compatible Zephyr and Zephyr SDK versions, refer to the
`Zephyr SDK Version Compatibility Matrix`_.

.. _toolchain_zephyr_sdk_install:

Zephyr SDK installation
***********************

.. toolchain_zephyr_sdk_install_start

.. note:: You can change |sdk-version-literal| to another version in the
   instructions below if needed; the `Zephyr SDK Releases`_ page contains all
   available SDK releases.

.. note:: If you want to uninstall the SDK, you may simply remove the directory
   where you installed it.

.. tabs::

   .. group-tab:: Ubuntu

      .. _ubuntu_zephyr_sdk:

      #. Download and verify the `Zephyr SDK bundle`_:

         .. parsed-literal::

            cd ~
            wget |sdk-url-linux|
            wget -O - |sdk-url-linux-sha| | shasum --check --ignore-missing

         If your host architecture is 64-bit ARM (for example, Raspberry Pi),
         replace ``x86_64`` with ``aarch64`` in order to download the 64-bit
         ARM Linux SDK.

      #. Extract the Zephyr SDK bundle archive:

         .. parsed-literal::

            tar xvf zephyr-sdk-\ |sdk-version-trim|\ _linux-x86_64.tar.xz

         .. note::

            It is recommended to extract the Zephyr SDK bundle at one of the
            following locations:

            * ``$HOME``
            * ``$HOME/.local``
            * ``$HOME/.local/opt``
            * ``$HOME/bin``
            * ``/opt``
            * ``/usr/local``

            The Zephyr SDK bundle archive contains the ``zephyr-sdk-<version>``
            directory and, when extracted under ``$HOME``, the resulting
            installation path will be ``$HOME/zephyr-sdk-<version>``.

      #. Run the Zephyr SDK bundle setup script:

         .. parsed-literal::

            cd zephyr-sdk-\ |sdk-version-ltrim|
            ./setup.sh

         .. note::

            You only need to run the setup script once after extracting the
            Zephyr SDK bundle.

            You must rerun the setup script if you relocate the Zephyr SDK
            bundle directory after the initial setup.

      #. Install `udev <path_to_url>`_ rules, which
         allow you to flash most Zephyr boards as a regular user:

         .. parsed-literal::

            sudo cp ~/zephyr-sdk-\ |sdk-version-trim|\ /sysroots/x86_64-pokysdk-linux/usr/share/openocd/contrib/60-openocd.rules /etc/udev/rules.d
            sudo udevadm control --reload

   .. group-tab:: macOS

      .. _macos_zephyr_sdk:

      #. Download and verify the `Zephyr SDK bundle`_:

         .. parsed-literal::

            cd ~
            curl -L -O |sdk-url-macos|
            curl -L |sdk-url-macos-sha| | shasum --check --ignore-missing

         If your host architecture is 64-bit ARM (Apple Silicon), replace
         ``x86_64`` with ``aarch64`` in order to download the 64-bit ARM macOS
         SDK.

      #. Extract the Zephyr SDK bundle archive:

         .. parsed-literal::

            tar xvf zephyr-sdk-\ |sdk-version-trim|\ _macos-x86_64.tar.xz

         .. note::

            It is recommended to extract the Zephyr SDK bundle at one of the
            following locations:

            * ``$HOME``
            * ``$HOME/.local``
            * ``$HOME/.local/opt``
            * ``$HOME/bin``
            * ``/opt``
            * ``/usr/local``

            The Zephyr SDK bundle archive contains the ``zephyr-sdk-<version>``
            directory and, when extracted under ``$HOME``, the resulting
            installation path will be ``$HOME/zephyr-sdk-<version>``.

      #. Run the Zephyr SDK bundle setup script:

         .. parsed-literal::

            cd zephyr-sdk-\ |sdk-version-ltrim|
            ./setup.sh

         .. note::

            You only need to run the setup script once after extracting the
            Zephyr SDK bundle.

            You must rerun the setup script if you relocate the Zephyr SDK
            bundle directory after the initial setup.

   .. group-tab:: Windows

      .. _windows_zephyr_sdk:

      #. Open a ``cmd.exe`` terminal window **as a regular user**

      #. Download the `Zephyr SDK bundle`_:

         .. parsed-literal::

            cd %HOMEPATH%
            wget |sdk-url-windows|

      #. Extract the Zephyr SDK bundle archive:

         .. parsed-literal::

            7z x zephyr-sdk-\ |sdk-version-trim|\ _windows-x86_64.7z

         .. note::

            It is recommended to extract the Zephyr SDK bundle at one of the
            following locations:

            * ``%HOMEPATH%``
            * ``%PROGRAMFILES%``

            The Zephyr SDK bundle archive contains the ``zephyr-sdk-<version>``
            directory and, when extracted under ``%HOMEPATH%``, the resulting
            installation path will be ``%HOMEPATH%\zephyr-sdk-<version>``.

      #. Run the Zephyr SDK bundle setup script:

         .. parsed-literal::

            cd zephyr-sdk-\ |sdk-version-ltrim|
            setup.cmd

         .. note::

            You only need to run the setup script once after extracting the
            Zephyr SDK bundle.

            You must rerun the setup script if you relocate the Zephyr SDK
            bundle directory after the initial setup.

.. _Zephyr SDK Releases: path_to_url

.. _Zephyr SDK Version Compatibility Matrix: path_to_url

.. toolchain_zephyr_sdk_install_end
```
```restructuredtext
.. _api_terms:

API Terminology
###############

The following terms may be used as shorthand API tags to indicate the
allowed calling context (thread, ISR, pre-kernel), the effect of a call
on the current thread state, and other behavioral characteristics.

:ref:`api_term_reschedule`
   if executing the function reaches a reschedule point

:ref:`api_term_sleep`
   if executing the function can cause the invoking thread to sleep

:ref:`api_term_no-wait`
   if a parameter to the function can prevent the invoking thread from
   trying to sleep

:ref:`api_term_isr-ok`
   if the function can be safely called and will have its specified
   effect whether invoked from interrupt or thread context

:ref:`api_term_pre-kernel-ok`
   if the function can be safely called before the kernel has been fully
   initialized and will have its specified effect when invoked from that
   context

:ref:`api_term_async`
   if the function may return before the operation it initiates is
   complete (i.e. function return and operation completion are
   asynchronous)

:ref:`api_term_supervisor`
   if the calling thread must have supervisor privileges to execute the
   function

Details on the behavioral impact of each attribute are in the following
sections.

.. _api_term_reschedule:

reschedule
==========

The reschedule attribute is used on a function that can reach a
:ref:`reschedule point <scheduling_v2>` within its execution.

Details
-------

The significance of this attribute is that when a rescheduling function
is invoked by a thread it is possible for that thread to be suspended as
a consequence of a higher-priority thread being made ready. Whether the
suspension actually occurs depends on the operation associated with the
reschedule point and the relative priorities of the invoking thread and
the head of the ready queue.

Note that in the case of timeslicing, or reschedule points executed from
interrupts, any thread may be suspended in any function.

Functions that are not **reschedule** may be invoked from either thread
or interrupt context.

Functions that are **reschedule** may be invoked from thread context.

Functions that are **reschedule** but not **sleep** may be invoked from
interrupt context.

.. _api_term_sleep:

sleep
=====

The sleep attribute is used on a function that can cause the invoking
thread to :ref:`sleep <scheduling_v2>`.

Explanation
-----------

This attribute is of relevance specifically when considering
applications that use only non-preemptible threads, because the kernel
will not replace a running cooperative-only thread at a reschedule point
unless that thread has explicitly invoked an operation that caused it to
sleep.

This attribute does not imply the function will sleep unconditionally,
but that the operation may require an invoking thread that would have to
suspend, wait, or invoke :c:func:`k_yield` before it can complete
its operation. This behavior may be mediated by **no-wait**.

Functions that are **sleep** are implicitly **reschedule**.

Functions that are **sleep** may be invoked from thread context.

Functions that are **sleep** may be invoked from interrupt and
pre-kernel contexts if and only if invoked in **no-wait** mode.

.. _api_term_no-wait:

no-wait
=======

The no-wait attribute is used on a function that is also **sleep** to
indicate that a parameter to the function can force an execution path
that will not cause the invoking thread to sleep.

Explanation
-----------

The paradigmatic case of a no-wait function is a function that takes a
timeout, to which :c:macro:`K_NO_WAIT` can be passed. The semantics of
this special timeout value are to execute the function's operation as
long as it can be completed immediately, and to return an error code
rather than sleep if it cannot.

It is use of the no-wait feature that allows functions like
:c:func:`k_sem_take` to be invoked from ISRs, since it is not
permitted to sleep in interrupt context.

A function with a no-wait path does not imply that taking that path
guarantees the function is synchronous.

Functions with this attribute may be invoked from interrupt and
pre-kernel contexts only when the parameter selects the no-wait path.

.. _api_term_isr-ok:

isr-ok
======

The isr-ok attribute is used on a function to indicate that it works
whether it is being invoked from interrupt or thread context.

Explanation
-----------

Any function that is not **sleep** is inherently **isr-ok**. Functions
that are **sleep** are **isr-ok** if the implementation ensures that the
documented behavior is implemented even if called from an interrupt
context. This may be achieved by having the implementation detect the
calling context and transfer the operation that would sleep to a thread,
or by documenting that when invoked from a non-thread context the
function will return a specific error (generally ``-EWOULDBLOCK``).

Note that a function that is **no-wait** is safe to call from interrupt
context only when the no-wait path is selected. **isr-ok** functions
need not provide a no-wait path.

.. _api_term_pre-kernel-ok:

pre-kernel-ok
=============

The pre-kernel-ok attribute is used on a function to indicate that it
works as documented even when invoked before the kernel main thread has
been started.

Explanation
-----------

This attribute is similar to **isr-ok** in function, but is intended for
use by any API that is expected to be called in :c:macro:`DEVICE_DEFINE()`
or :c:macro:`SYS_INIT()` calls that may be invoked with ``PRE_KERNEL_1``
or ``PRE_KERNEL_2`` initialization levels.

Generally a function that is **pre-kernel-ok** checks
:c:func:`k_is_pre_kernel` when determining whether it can fulfill its
required behavior. In many cases it would also check
:c:func:`k_is_in_isr` so it can be **isr-ok** as well.

.. _api_term_async:

async
=====

A function is **async** (i.e. asynchronous) if it may return before the
operation it initiates has completed. An asynchronous function will
generally provide a mechanism by which operation completion is reported,
e.g. a callback or event.

A function that is not asynchronous is synchronous, i.e. the operation
will always be complete when the function returns. As most functions
are synchronous this behavior does not have a distinct attribute to
identify it.

Explanation
-----------

Be aware that **async** is orthogonal to context-switching. Some APIs
may provide completion information through a callback, but may suspend
while waiting for the resource necessary to initiate the operation; an
example is :c:func:`spi_transceive_signal`.

If a function is both **no-wait** and **async** then selecting the
no-wait path only guarantees that the function will not sleep. It does
not affect whether the operation will be completed before the function
returns.

.. _api_term_supervisor:

supervisor
==========

The supervisor attribute is relevant only in user-mode applications, and
indicates that the function cannot be invoked from user mode.
```
```restructuredtext
.. _design_guidelines:

API Design Guidelines
#####################

Zephyr development and evolution is a group effort, and to simplify
maintenance and enhancements there are some general policies that should
be followed when developing a new capability or interface.

Using Callbacks
***************

Many APIs involve passing a callback as a parameter or as a member of a
configuration structure. The following policies should be followed when
specifying the signature of a callback:

* The first parameter should be a pointer to the object most closely
  associated with the callback. In the case of device drivers this
  would be ``const struct device *dev``. For library functions it may be a
  pointer to another object that was referenced when the callback was
  provided.

* The next parameter(s) should be additional information specific to the
  callback invocation, such as a channel identifier, new status value,
  and/or a message pointer followed by the message length.

* The final parameter should be a ``void *user_data`` pointer carrying
  context that allows a shared callback function to locate additional
  material necessary to process the callback.

An exception to providing ``user_data`` as the last parameter may be
allowed when the callback itself was provided through a structure that
will be embedded in another structure. An example of such a case is
:c:struct:`gpio_callback`, normally defined within a data structure
specific to the code that also defines the callback function. In those
cases further context can be accessed by the callback indirectly through
:c:macro:`CONTAINER_OF`.

Examples
========

* The requirements of :c:type:`k_timer_expiry_t` invoked when a system
  timer alarm fires are satisfied by::

     void handle_timeout(struct k_timer *timer)
     { ... }

  The assumption here, as with :c:struct:`gpio_callback`, is that the
  timer is embedded in a structure reachable from
  :c:macro:`CONTAINER_OF` that can provide additional context to the
  callback.

* The requirements of :c:type:`counter_alarm_callback_t` invoked when a
  counter device alarm fires are satisfied by::

     void handle_alarm(const struct device *dev,
                       uint8_t chan_id,
                       uint32_t ticks,
                       void *user_data)
     { ... }

  This provides more complete useful information, including which
  counter channel timed out and the counter value at which the timeout
  occurred, as well as user context which may or may not be the
  :c:struct:`counter_alarm_cfg` used to register the callback, depending
  on user needs.

Conditional Data and APIs
*************************

APIs and libraries may provide features that are expensive in RAM or
code size but are optional in the sense that some applications can be
implemented without them. Examples of such features include
:kconfig:option:`capturing a timestamp <CONFIG_CAN_RX_TIMESTAMP>` or
:kconfig:option:`providing an alternative interface <CONFIG_SPI_ASYNC>`. The
developer in coordination with the community must determine whether
enabling the features is to be controllable through a Kconfig option.

In the case where a feature is determined to be optional the following
practices should be followed.

* Any data that is accessed only when the feature is enabled should be
  conditionally included via ``#ifdef CONFIG_MYFEATURE`` in the
  structure or union declaration. This reduces memory use for
  applications that don't need the capability.

* Function declarations that are available only when the option is
  enabled should be provided unconditionally. Add a note in the
  description that the function is available only when the specified
  feature is enabled, referencing the required Kconfig symbol by name.
  In the cases where the function is used but not enabled the definition
  of the function shall be excluded from compilation, so references to
  the unsupported API will result in a link-time error.

* Where code specific to the feature is isolated in a source file that
  has no other content that file should be conditionally included in
  ``CMakeLists.txt``::

     zephyr_sources_ifdef(CONFIG_MYFEATURE foo_funcs.c)

* Where code specific to the feature is part of a source file that has
  other content the feature-specific code should be conditionally
  processed using ``#ifdef CONFIG_MYFEATURE``.

The Kconfig flag used to enable the feature should be added to the
``PREDEFINED`` variable in :file:`doc/zephyr.doxyfile.in` to ensure the
conditional API and functions appear in generated documentation.

Return Codes
************

Implementations of an API, for example an API for accessing a peripheral, might
implement only a subset of the functions that is required for minimal operation.
A distinction is needed between APIs that are not supported and those that are
not implemented or optional:

- APIs that are supported but not implemented shall return ``-ENOSYS``.

- Optional APIs that are not supported by the hardware should be implemented and
  the return code in this case shall be ``-ENOTSUP``.

- When an API is implemented, but the particular combination of options
  requested in the call cannot be satisfied by the implementation, the call
  shall return ``-ENOTSUP``. (For example, a request for a level-triggered
  GPIO interrupt on hardware that supports only edge-triggered interrupts.)
```
```restructuredtext
.. _code_data_relocation:

Code And Data Relocation
########################

Overview
********

This feature allows relocating .text, .rodata, .data, and .bss sections from
required files and placing them in the required memory region. The memory
region and file are given to the :ref:`gen_relocate_app.py` script in the form
of a string. This script is always invoked from inside cmake.

This script provides a robust way to re-order the memory contents without
actually having to modify the code. In simple terms this script will do the job
of ``__attribute__((section("name")))`` for a bunch of files together.

Details
*******

The memory region and file are given to the :ref:`gen_relocate_app.py` script
in the form of a string. An example of such a string is:
``SRAM2:/home/xyz/zephyr/samples/hello_world/src/main.c,SRAM1:/home/xyz/zephyr/samples/hello_world/src/main2.c``

This script is invoked with the following parameters:
``python3 gen_relocate_app.py -i input_string -o generated_linker -c generated_code``
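The ``REGION:file[,REGION:file...]`` input string format can be sketched in shell (illustrative only; the paths are assumed examples and this is not the script's actual implementation):

```shell
# Illustrative sketch: split the "REGION:file,REGION:file" input string
# format consumed by gen_relocate_app.py. Paths are assumed examples.
input="SRAM2:/home/xyz/src/main.c,SRAM1:/home/xyz/src/main2.c"
old_ifs=$IFS
IFS=,
for entry in $input; do
  region=${entry%%:*}   # text before the first colon
  file=${entry#*:}      # text after the first colon
  printf '%s -> %s\n' "$region" "$file"
done
IFS=$old_ifs
```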
Kconfig :kconfig:option:`CONFIG_CODE_DATA_RELOCATION` option, when enabled in
``prj.conf``, will invoke the script and do the required relocation.
This script also trigger the generation of ``linker_relocate.ld`` and
``code_relocation.c`` files. The ``linker_relocate.ld`` file creates
appropriate sections and links the required functions or variables from all the
selected files.
.. note::
The text section is split into 2 parts in the main linker script. The first
section will have some info regarding vector tables and other debug related
info. The second section will have the complete text section. This is
needed to force the required functions and data variables to the correct
locations. This is due to the behavior of the linker. The linker will only
link once and hence this text section had to be split to make room for the
generated linker script.
The ``code_relocation.c`` file has code that is needed for
initializing data sections, and a copy of the text sections (if XIP).
Also this contains code needed for bss zeroing and
for data copy operations from ROM to required memory type.
**The procedure to invoke this feature is:**
* Enable :kconfig:option:`CONFIG_CODE_DATA_RELOCATION` in the ``prj.conf`` file
* Inside the ``CMakeLists.txt`` file in the project, mention
all the files that need relocation.
``zephyr_code_relocate(FILES src/*.c LOCATION SRAM2)``
Where the first argument is the file/files and the second
argument is the memory where it must be placed.
.. note::
function zephyr_code_relocate() can be called as many times as required.
Additional Configurations
=========================
This section shows additional configuration options that can be set in
``CMakeLists.txt``
* if the memory is SRAM1, SRAM2, CCD, or AON, then place the full object in the
sections for example:
.. code-block:: none
zephyr_code_relocate(FILES src/file1.c LOCATION SRAM2)
zephyr_code_relocate(FILES src/file2.c LOCATION SRAM)
* if the memory type is appended with _DATA, _TEXT, _RODATA or _BSS, only the
selected memory is placed in the required memory region.
for example:
.. code-block:: none
zephyr_code_relocate(FILES src/file1.c LOCATION SRAM2_DATA)
zephyr_code_relocate(FILES src/file2.c LOCATION SRAM2_TEXT)
* Multiple section suffixes can also be appended together, such as
  ``SRAM2_DATA_BSS``, which places both data and bss inside SRAM2.
* Multiple files can be passed to the ``FILES`` argument, or CMake generator
  expressions can be used to relocate a comma-separated list of files:
.. code-block:: none
file(GLOB sources "file*.c")
zephyr_code_relocate(FILES ${sources} LOCATION SRAM)
zephyr_code_relocate(FILES $<TARGET_PROPERTY:my_tgt,SOURCES> LOCATION SRAM)
NOKEEP flag
===========
By default, all relocated functions and variables will be marked with ``KEEP()``
when generating ``linker_relocate.ld``. Therefore, if any input file happens to
contain unused symbols, then they will not be discarded by the linker, even when
it is invoked with ``--gc-sections``. If you'd like to override this behavior,
you can pass ``NOKEEP`` to your ``zephyr_code_relocate()`` call.
.. code-block:: none
zephyr_code_relocate(FILES src/file1.c LOCATION SRAM2_TEXT NOKEEP)
The example above helps ensure that any unused code found in the .text
sections of ``file1.c`` does not take up space in SRAM2.
NOCOPY flag
===========
When the ``NOCOPY`` option is passed to the ``zephyr_code_relocate()``
function, the relocation code is not generated in ``code_relocation.c``. This
flag can be used when the content of a specific file (or set of files) is
moved to an XIP area.
This example places the .text section of the ``xip_external_flash.c`` file in
the ``EXTFLASH`` memory region, from which it will be executed in place (XIP).
The .data section will be relocated as usual into SRAM.
.. code-block:: none
zephyr_code_relocate(FILES src/xip_external_flash.c LOCATION EXTFLASH_TEXT NOCOPY)
zephyr_code_relocate(FILES src/xip_external_flash.c LOCATION SRAM_DATA)
Relocating libraries
====================
Libraries can be relocated by passing the ``LIBRARY`` argument to
``zephyr_code_relocate()`` with the library name. For example, the following
snippet relocates the serial drivers to SRAM2:
.. code-block:: none
zephyr_code_relocate(LIBRARY drivers__serial LOCATION SRAM2)
Tips
====
Take care when relocating kernel or arch files: some contain early
initialization code that executes before code relocation takes place.
Additional MPU/MMU configuration may be required to ensure that the
destination memory region is configured to allow code execution.
Samples / Tests
===============
A test showcasing this feature is provided at
``$ZEPHYR_BASE/tests/application_development/code_relocation``.
This test shows how the code relocation feature is used: it places the .text,
.data, and .bss sections from three files in various parts of the SRAM using a
custom linker file derived from
``include/zephyr/arch/arm/cortex_m/scripts/linker.ld``.

A sample showcasing the NOCOPY flag is provided at
``$ZEPHYR_BASE/samples/application_development/code_relocation_nocopy/``.
``` | /content/code_sandbox/doc/kernel/code-relocation.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 1,505 |
```restructuredtext
.. _kernel:
Kernel
######
.. toctree::
:maxdepth: 1
services/index.rst
drivers/index.rst
usermode/index.rst
memory_management/index.rst
data_structures/index.rst
timing_functions/index.rst
object_cores/index.rst
timeutil.rst
util/index.rst
iterable_sections/index.rst
code-relocation.rst
``` | /content/code_sandbox/doc/kernel/index.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 96 |
```restructuredtext
.. _api_lifecycle:
API Lifecycle
#############
Developers using Zephyr's APIs need to know how long they can trust that a
given API will not change in future releases. At the same time, developers
maintaining and extending Zephyr's APIs need to be able to introduce
new APIs that aren't yet fully proven, and to potentially retire old APIs when they're
no longer optimal or supported by the underlying platforms.
.. figure:: api_lifecycle.png
:align: center
:alt: API Life Cycle
:figclass: align-center
API Life Cycle
An up-to-date table of all APIs and their maturity level can be found in the
:ref:`api_overview` page.
.. _api_lifecycle_experimental:
Experimental
*************
Experimental APIs denote that a feature was introduced recently, and may change
or be removed in future versions. Try it out and provide feedback
to the community via the `Developer mailing list <path_to_url>`_.
The following requirements apply to all new APIs:
- Documentation of the API (usage)
explaining its design and assumptions, how it is to be used, current
implementation limitations, and future potential, if appropriate.
- The API introduction should be accompanied by at least one implementation
of said API (in the case of peripheral APIs, this corresponds to one driver)
- At least one sample using the new API (may only build on one single board)
When introducing a new and experimental API, mark the API version in the headers
where the API is defined. An experimental API shall have a version where the minor
version is up to one (0.1.z). (see :ref:`api_overview`)
Peripheral APIs (Hardware Related)
==================================
When introducing an API (public header file with documentation) for a new
peripheral or driver subsystem, review of the API is enforced and is driven by
the Architecture working group consisting of representatives from different vendors.
The API shall be promoted to ``unstable`` when it has at least two
implementations on different hardware platforms.
.. _api_lifecycle_unstable:
Unstable
********
The API is in the process of settling, but has not yet had sufficient real-world
testing to be considered stable. The API is considered generic in nature and can
be used on different hardware platforms.
When the API changes status to unstable API, mark the API version in the headers
where the API is defined. Unstable APIs shall have a version where the minor
version is larger than one (0.y.z | y > 1). (see :ref:`api_overview`)
.. note::
Changes will not be announced.
Peripheral APIs (Hardware Related)
==================================
The API shall be promoted from ``experimental`` to ``unstable`` when it has at
least two implementations on different hardware platforms.
Hardware Agnostic APIs
=======================
For hardware agnostic APIs, multiple applications using it are required to
promote an API from ``experimental`` to ``unstable``.
.. _api_lifecycle_stable:
Stable
*******
The API has proven satisfactory, but cleanup in the underlying code may cause
minor changes. Backwards-compatibility will be maintained if reasonable.
An API can be declared ``stable`` after fulfilling the following requirements:
- Test cases for the new API with 100% coverage
- Complete documentation in code. All public interfaces shall be documented
and available in online documentation.
- The API has been in-use and was available in at least 2 development releases
- Stable APIs can get backward compatible updates, bug fixes and security fixes
at any time.
In order to declare an API ``stable``, the following steps need to be followed:
#. A Pull Request must be opened that changes the corresponding entry in the
:ref:`api_overview` table
#. An email must be sent to the ``devel`` mailing list announcing the API
upgrade request
#. The Pull Request must be submitted for discussion in the next
`Zephyr Architecture meeting`_ where, barring any objections, the Pull Request
will be merged
When the API changes status to stable API, mark the API version in the headers
where the API is defined. Stable APIs shall have a version where the major
version is one or larger (x.y.z | x >= 1). (see :ref:`api_overview`)
.. _breaking_api_changes:
Introducing breaking API changes
================================
A stable API, as described above, strives to remain backwards-compatible through
its life-cycle. There are however cases where fulfilling this objective prevents
technical progress, or is simply unfeasible without unreasonable burden on the
maintenance of the API and its implementation(s).
A breaking API change is defined as one that forces users to modify their
existing code in order to maintain the current behavior of their application.
The need for recompilation of applications (without changing the application
itself) is not considered a breaking API change.
In order to restrict and control the introduction of a change that breaks the
promise of backwards compatibility, the following steps must be followed whenever
such a change is considered necessary in order to accept it in the project:
#. An :ref:`RFC issue <rfcs>` must be opened on GitHub with the following
content:
.. code-block:: none
Title: RFC: Breaking API Change: <subsystem>
Contents: - Problem Description:
- Background information on why the change is required
- Proposed Change (detailed):
- Brief description of the API change
- Detailed RFC:
- Function call changes
- Device Tree changes (source and bindings)
- Kconfig option changes
- Dependencies:
- Impact to users of the API, including the steps required
to adapt out-of-tree users of the API to the change
Instead of a written description of the changes, the RFC issue may link to a
Pull Request containing those changes in code form.
#. The RFC issue must be labeled with the GitHub ``Breaking API Change`` label
#. The RFC issue must be submitted for discussion in the next `Zephyr
Architecture meeting`_
#. An email must be sent to the ``devel`` mailing list with a subject identical
to the RFC issue title and that links to the RFC issue
The RFC will then receive feedback through issue comments and will also be
discussed in the Zephyr Architecture meeting, where the stakeholders and the
community at large will have a chance to discuss it in detail.
Finally, and if not done as part of the first step, a Pull Request must be
opened on GitHub. It is left to the person proposing the change to decide
whether to introduce both the RFC and the Pull Request at the same time or to
wait until the RFC has gathered consensus enough so that the implementation can
proceed with confidence that it will be accepted.
The Pull Request must include the following:
- A title that matches the RFC issue
- A link to the RFC issue
- The actual changes to the API
- Changes to the API header file
- Changes to the API implementation(s)
- Changes to the relevant API documentation
- Changes to Device Tree source and bindings
- The changes required to adapt in-tree users of the API to the change.
Depending on the scope of this task this might require additional help from
the corresponding maintainers
- An entry in the "API Changes" section of the release notes for the next
upcoming release
- The labels ``API``, ``Breaking API Change`` and ``Release Notes``, as well as
any others that are applicable
- The label ``Architecture Review`` if the RFC was not yet discussed and agreed
  upon in the `Zephyr Architecture meeting`_
Once the steps above have been completed, the outcome of the proposal will
depend on the approval of the actual Pull Request by the maintainer of the
corresponding subsystem. As with any other Pull Request, the author can
request that it be discussed and ultimately even voted on in the `Zephyr TSC meeting`_.
If the Pull Request is merged then an email must be sent to the ``devel`` and
``user`` mailing lists informing them of the change.
The API version shall be changed to signal backward incompatible changes. This
is achieved by incrementing the major version (X.y.z | X > 1). It MAY also
include minor and patch level changes. Patch and minor versions MUST be reset to
0 when major version is incremented. (see :ref:`api_overview`)
.. note::
Breaking API changes will be listed and described in the migration guide.
Deprecated
***********
.. note::
Unstable APIs can be removed without deprecation at any time.
Deprecation and removal of APIs will be announced in the "API Changes"
section of the release notes.
The following are the requirements for deprecating an existing API:
- Deprecation Time (stable APIs): 2 Releases
The API needs to be marked as deprecated in at least two full releases.
For example, if an API was first deprecated in release 4.0,
it will be ready to be removed in 4.2 at the earliest.
There may be special circumstances, determined by the Architecture working group,
where an API is deprecated sooner.
- What is required when deprecating:
- Mark as deprecated. This can be done by using the compiler itself
(``__deprecated`` for function declarations and ``__DEPRECATED_MACRO`` for
macro definitions), or by introducing a Kconfig option (typically one that
contains the ``DEPRECATED`` word in it) that, when enabled, reverts the APIs
back to their previous form
- Document the deprecation
- Include the deprecation in the "API Changes" of the release notes for the
next upcoming release
- Code using the deprecated API needs to be modified to remove usage of said
API
- The change needs to be atomic and bisectable
- Add an entry in the corresponding release
  `GitHub issue <path_to_url>`_
  tracking removal of deprecated APIs (in this example, the one corresponding
  to the 4.2 release).
During the deprecation waiting period, the API will be in the ``deprecated``
state. The Zephyr maintainers will track usage of deprecated APIs on
``docs.zephyrproject.org`` and support developers migrating their code. Zephyr
will continue to provide warnings:
- API documentation will inform users that the API is deprecated.
- Attempts to use a deprecated API at build time will log a warning to the
console.
Retired
*******
In this phase, the API is removed.
The target removal date is 2 releases after deprecation is announced.
The Zephyr maintainers will decide when to actually remove the API: this
will depend on how many developers have successfully migrated from the
deprecated API, and on how urgently the API needs to be removed.
If it's OK to remove the API, it will be removed. The maintainers will remove
the corresponding documentation, and communicate the removal in the usual ways:
the release notes, mailing lists, GitHub issues and pull requests.
If it's not OK to remove the API, the maintainers will continue to support
migration and update the roadmap with the aim to remove the API in the next
release.
.. _`Zephyr TSC meeting`: path_to_url#technical-steering-committee-tsc
.. _`Zephyr Architecture meeting`: path_to_url
``` | /content/code_sandbox/doc/develop/api/api_lifecycle.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 2,423 |
```restructuredtext
.. _kernel_api:
Kernel Services
###############
The Zephyr kernel lies at the heart of every Zephyr application. It provides
a low footprint, high performance, multi-threaded execution environment
with a rich set of available features. The rest of the Zephyr ecosystem,
including device drivers, networking stack, and application-specific code,
uses the kernel's features to create a complete application.
The configurable nature of the kernel allows you to incorporate only those
features needed by your application, making it ideal for systems with limited
amounts of memory (as little as 2 KB!) or with simple multi-threading
requirements (such as a set of interrupt handlers and a single background task).
Examples of such systems include: embedded sensor hubs, environmental sensors,
simple LED wearables, and store inventory tags.
Applications requiring more memory (50 to 900 KB), multiple communication
devices (like Wi-Fi and Bluetooth Low Energy), and complex multi-threading,
can also be developed using the Zephyr kernel. Examples of such systems
include: fitness wearables, smart watches, and IoT wireless gateways.
Scheduling, Interrupts, and Synchronization
*******************************************
These pages cover basic kernel services related to thread scheduling and
synchronization.
.. toctree::
:maxdepth: 1
threads/index.rst
scheduling/index.rst
threads/system_threads.rst
threads/workqueue.rst
threads/nothread.rst
interrupts.rst
polling.rst
synchronization/semaphores.rst
synchronization/mutexes.rst
synchronization/condvar.rst
synchronization/events.rst
smp/smp.rst
.. _kernel_data_passing_api:
Data Passing
************
These pages cover kernel objects which can be used to pass data between
threads and ISRs.
The following table summarizes their high-level features.
=============== ============== =================== ============== ============== ================= ============== ===============================
Object Bidirectional? Data structure Data item size Data Alignment ISRs can receive? ISRs can send? Overrun handling
=============== ============== =================== ============== ============== ================= ============== ===============================
FIFO No Queue Arbitrary [1] 4 B [2] Yes [3] Yes N/A
LIFO No Queue Arbitrary [1] 4 B [2] Yes [3] Yes N/A
Stack No Array Word Word Yes [3] Yes Undefined behavior
Message queue No Ring buffer Arbitrary [6] Power of two Yes [3] Yes Pend thread or return -errno
Mailbox Yes Queue Arbitrary [1] Arbitrary No No N/A
Pipe No Ring buffer [4] Arbitrary Arbitrary Yes [5] Yes [5] Pend thread or return -errno
=============== ============== =================== ============== ============== ================= ============== ===============================
[1] Callers allocate space for queue overhead in the data
elements themselves.
[2] Objects added with k_fifo_alloc_put() and k_lifo_alloc_put()
do not have alignment constraints, but use temporary memory from the
calling thread's resource pool.
[3] ISRs can receive only when passing K_NO_WAIT as the timeout
argument.
[4] Optional.
[5] ISRs can send and/or receive only when passing K_NO_WAIT as the
timeout argument.
[6] Data item size must be a multiple of the data alignment.
.. toctree::
:maxdepth: 1
data_passing/queues.rst
data_passing/fifos.rst
data_passing/lifos.rst
data_passing/stacks.rst
data_passing/message_queues.rst
data_passing/mailboxes.rst
data_passing/pipes.rst
.. _kernel_memory_management_api:
Memory Management
*****************
See :ref:`memory_management_api`.
Timing
******
These pages cover timing related services.
.. toctree::
:maxdepth: 1
timing/clocks.rst
timing/timers.rst
Other
*****
These pages cover other kernel services.
.. toctree::
:maxdepth: 1
other/atomic.rst
other/float.rst
other/version.rst
other/fatal.rst
other/thread_local_storage.rst
``` | /content/code_sandbox/doc/kernel/services/index.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 956 |
```restructuredtext
.. _timeutil_api:
Time Utilities
##############
Overview
********
:ref:`kernel_timing_uptime` in Zephyr is based on a tick counter. With
the default :kconfig:option:`CONFIG_TICKLESS_KERNEL` this counter advances at a
nominally constant rate from zero at the instant the system started. The POSIX
equivalent to this counter is something like ``CLOCK_MONOTONIC`` or, in Linux,
``CLOCK_MONOTONIC_RAW``. :c:func:`k_uptime_get()` provides a millisecond
representation of this time.
Applications often need to correlate the Zephyr internal time with external
time scales used in daily life, such as local time or Coordinated Universal
Time. These systems interpret time in different ways and may have
discontinuities due to `leap seconds <path_to_url>`__ and
local time offsets like daylight saving time.
Because of these discontinuities, as well as significant inaccuracies in the
clocks underlying the cycle counter, the offset between time estimated from
the Zephyr clock and the actual time in a "real" civil time scale is not
constant and can vary widely over the runtime of a Zephyr application.
The time utilities API supports:
* :ref:`converting between time representations <timeutil_repr>`
* :ref:`synchronizing and aligning time scales <timeutil_sync>`
For terminology and concepts that support these functions see
:ref:`timeutil_concepts`.
Time Utility APIs
*****************
.. _timeutil_repr:
Representation Transformation
=============================
Time scale instants can be represented in multiple ways including:
* Seconds since an epoch. POSIX representations of time in this form include
``time_t`` and ``struct timespec``, which are generally interpreted as a
representation of `"UNIX Time"
<path_to_url#section-2>`__.
* Calendar time as a year, month, day, hour, minutes, and seconds relative to
an epoch. POSIX representations of time in this form include ``struct tm``.
Keep in mind that these are simply time representations that must be
interpreted relative to a time scale which may be local time, UTC, or some
other continuous or discontinuous scale.
Some necessary transformations are available in standard C library
routines. For example, ``time_t`` measuring seconds since the POSIX EPOCH is
converted to ``struct tm`` representing calendar time with `gmtime()
<path_to_url>`__.
Sub-second timestamps like ``struct timespec`` can also use this to produce
the calendar time representation and deal with sub-second offsets separately.
The inverse transformation is not standardized: APIs like ``mktime()`` expect
information about time zones. Zephyr provides this transformation with
:c:func:`timeutil_timegm` and :c:func:`timeutil_timegm64`.
.. doxygengroup:: timeutil_repr_apis
.. _timeutil_sync:
Time Scale Synchronization
==========================
There are several factors that affect synchronizing time scales:
* The rate of discrete instant representation change. For example Zephyr
uptime is tracked in ticks which advance at events that nominally occur at
:kconfig:option:`CONFIG_SYS_CLOCK_TICKS_PER_SEC` Hertz, while an external time
source may provide data in whole or fractional seconds (e.g. microseconds).
* The absolute offset required to align the two scales at a single instant.
* The relative error between observable instants in each scale, required to
align multiple instants consistently. For example a reference clock that's
conditioned by a 1-pulse-per-second GPS signal will be much more accurate
  than a Zephyr system clock driven by an RC oscillator with a +/- 250 ppm
error.
Synchronization or alignment between time scales is done with a multi-step
process:
* An instant in a time scale is represented by an (unsigned) 64-bit integer,
assumed to advance at a fixed nominal rate.
* :c:struct:`timeutil_sync_config` records the nominal rates of a reference
time scale/source (e.g. TAI) and a local time source
(e.g. :c:func:`k_uptime_ticks`).
* :c:struct:`timeutil_sync_instant` records the representation of a single
instant in both the reference and local time scales.
* :c:struct:`timeutil_sync_state` provides storage for an initial instant, a
recently received second observation, and a skew that can adjust for
relative errors in the actual rate of each time scale.
* :c:func:`timeutil_sync_ref_from_local()` and
:c:func:`timeutil_sync_local_from_ref()` convert instants in one time scale
to another taking into account skew that can be estimated from the two
  instants stored in the state structure by
:c:func:`timeutil_sync_estimate_skew`.
.. doxygengroup:: timeutil_sync_apis
.. _timeutil_concepts:
Concepts Underlying Time Support in Zephyr
******************************************
Terms from `ISO/TC 154/WG 5 N0038
<path_to_url>`__
(ISO/WD 8601-1) and elsewhere:
* A *time axis* is a representation of time as an ordered sequence of
instants.
* A *time scale* is a way of representing an instant relative to an origin
that serves as the epoch.
* A time scale is *monotonic* (increasing) if the representation of successive
time instants never decreases in value.
* A time scale is *continuous* if the representation has no abrupt changes in
value, e.g. jumping forward or back when going between successive instants.
* `Civil time <path_to_url>`__ generally refers
  to time scales that are legally defined by civil authorities, like local
  governments, often to align local midnight to solar time.
Relevant Time Scales
====================
`International Atomic Time
<path_to_url>`__ (TAI) is a time
scale based on averaging clocks that count in SI seconds. TAI is a monotonic
and continuous time scale.

`Universal Time <path_to_url>`__ (UT) is a
time scale based on Earth's rotation. UT is a discontinuous time scale as it
requires occasional adjustments (`leap seconds
<path_to_url>`__) to maintain alignment to
changes in Earth's rotation. Thus the difference between TAI and UT varies
over time. There are several variants of UT, with `UTC
<path_to_url>`__ being the most
common.
UT times are independent of location. UT is the basis for Standard Time
(or "local time") which is the time at a particular location. Standard
time has a fixed offset from UT at any given instant, primarily
influenced by longitude, but the offset may be adjusted ("daylight
saving time") to align standard time to the local solar time. In a sense
local time is "more discontinuous" than UT.
`POSIX Time <path_to_url#section-2>`__ is a time scale
that counts seconds since the "POSIX epoch" at 1970-01-01T00:00:00Z (i.e. the
start of 1970 UTC). `UNIX Time
<path_to_url#section-2>`__ is an extension of POSIX
time using negative values to represent times before the POSIX epoch. Both of
these scales assume that every day has exactly 86400 seconds. In normal use
instants in these scales correspond to times in the UTC scale, so they inherit
the discontinuity.
The continuous analogue is `UNIX Leap Time
<path_to_url#section-2>`__ which is UNIX time plus all
leap-second corrections added after the POSIX epoch (when TAI-UTC was 8 s).
Example of Time Scale Differences
---------------------------------
A positive leap second was introduced at the end of 2016, increasing the
difference between TAI and UTC from 36 seconds to 37 seconds. There was
no leap second introduced at the end of 1999, when the difference
between TAI and UTC was only 32 seconds. The following table shows
relevant civil and epoch times in several scales:
==================== ========== =================== ======= ==============
UTC Date UNIX time TAI Date TAI-UTC UNIX Leap Time
==================== ========== =================== ======= ==============
1970-01-01T00:00:00Z 0 1970-01-01T00:00:08 +8 0
1999-12-31T23:59:28Z 946684768 2000-01-01T00:00:00 +32 946684792
1999-12-31T23:59:59Z 946684799 2000-01-01T00:00:31 +32 946684823
2000-01-01T00:00:00Z 946684800 2000-01-01T00:00:32 +32 946684824
2016-12-31T23:59:59Z 1483228799 2017-01-01T00:00:35 +36 1483228827
2016-12-31T23:59:60Z undefined 2017-01-01T00:00:36 +36 1483228828
2017-01-01T00:00:00Z 1483228800 2017-01-01T00:00:37 +37 1483228829
==================== ========== =================== ======= ==============
Functional Requirements
-----------------------
The Zephyr tick counter has no concept of leap seconds or standard time
offsets and is a continuous time scale. However it can be relatively
inaccurate, with drifts as much as three minutes per hour (assuming an RC
timer with 5% tolerance).
There are two stages required to support conversion between Zephyr time and
common human time scales:
* Translation between the continuous but inaccurate Zephyr time scale and an
accurate external stable time scale;
* Translation between the stable time scale and the (possibly discontinuous)
civil time scale.
The API around :c:func:`timeutil_sync_state_update()` supports the first step
of converting between continuous time scales.
The second step requires external information including schedules of leap
seconds and local time offset changes. This may be best provided by an
external library, and is not currently part of the time utility APIs.
Selecting an External Source and Time Scale
-------------------------------------------
If an application requires civil time accuracy within several seconds then UTC
could be used as the stable time source. However, if the external source
adjusts to a leap second there will be a discontinuity: the elapsed time
between two observations taken at 1 Hz is not equal to the numeric difference
between their timestamps.
For precise activities a continuous scale that is independent of local and
solar adjustments simplifies things considerably. Suitable continuous scales
include:
- GPS time: epoch of 1980-01-06T00:00:00Z, continuous following TAI with an
  offset of TAI-GPS = 19 s.
- Bluetooth Mesh time: epoch of 2000-01-01T00:00:00Z, continuous following TAI
  with an offset of -32 s.
- UNIX Leap Time: epoch of 1970-01-01T00:00:00Z, continuous following TAI with
  an offset of -8 s.
Because C and Zephyr library functions support conversion between integral and
calendar time representations using the UNIX epoch, UNIX Leap Time is an ideal
choice for the external time scale.
The mechanism used to populate synchronization points is not relevant: it may
involve reading from a local high-precision RTC peripheral, exchanging packets
over a network using a protocol like NTP or PTP, or processing NMEA messages
received from a GPS, with or without a 1PPS signal.
``` | /content/code_sandbox/doc/kernel/timeutil.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 2,555 |
```restructuredtext
.. _polling_v2:
Polling API
###########
The polling API is used to wait concurrently for any one of multiple conditions
to be fulfilled.
.. contents::
:local:
:depth: 2
Concepts
********
The polling API's main function is :c:func:`k_poll`, which is very similar
in concept to the POSIX :c:func:`poll` function, except that it operates on
kernel objects rather than on file descriptors.
The polling API allows a single thread to wait concurrently for one or more
conditions to be fulfilled without actively looking at each one individually.
There is a limited set of such conditions:
- a semaphore becomes available
- a kernel FIFO contains data ready to be retrieved
- a kernel message queue contains data ready to be retrieved
- a kernel pipe contains data ready to be retrieved
- a poll signal is raised
A thread that wants to wait on multiple conditions must define an array of
**poll events**, one for each condition.
All events in the array must be initialized before the array can be polled on.
Each event must specify which **type** of condition must be satisfied so that
its state is changed to signal the requested condition has been met.
Each event must specify the **kernel object** on which it wants the condition
to be satisfied.
Each event must specify which **mode** of operation is used when the condition
is satisfied.
Each event can optionally specify a **tag** to group multiple events together,
to the user's discretion.
Apart from the kernel objects, there is also a **poll signal** pseudo-object
type that can be directly signaled.
The :c:func:`k_poll` function returns as soon as one of the conditions it
is waiting for is fulfilled. It is possible for more than one to be fulfilled
when :c:func:`k_poll` returns, if they were fulfilled before
:c:func:`k_poll` was called, or due to the preemptive multi-threading
nature of the kernel. The caller must look at the state of all the poll events
in the array to figure out which ones were fulfilled and what actions to take.
Currently, there is only one mode of operation available: the object is not
acquired. As an example, this means that when :c:func:`k_poll` returns and
the poll event states that the semaphore is available, the caller of
:c:func:`k_poll()` must then invoke :c:func:`k_sem_take` to take
ownership of the semaphore. If the semaphore is contested, there is no
guarantee that it will be still available when :c:func:`k_sem_take` is
called.
Implementation
**************
Using k_poll()
==============
The main API is :c:func:`k_poll`, which operates on an array of poll events
of type :c:struct:`k_poll_event`. Each entry in the array represents one
event; a call to :c:func:`k_poll` will wait for any of their conditions to be
fulfilled.
Poll events can be initialized using either the runtime initializers
:c:macro:`K_POLL_EVENT_INITIALIZER()` or :c:func:`k_poll_event_init`, or
the static initializer :c:macro:`K_POLL_EVENT_STATIC_INITIALIZER()`. An object
that matches the **type** specified must be passed to the initializers. The
**mode** *must* be set to :c:enumerator:`K_POLL_MODE_NOTIFY_ONLY`. The state
*must* be set to :c:macro:`K_POLL_STATE_NOT_READY` (the initializers take care
of this). The user **tag** is optional and completely opaque to the API: it is
there to help a user to group similar events together. Being optional, it is
passed to the static initializer, but not the runtime ones for performance
reasons. If using runtime initializers, the user must set it separately in the
:c:struct:`k_poll_event` data structure. If an event in the array is to be
ignored, most likely temporarily, its type can be set to
:c:macro:`K_POLL_TYPE_IGNORE`.
.. code-block:: c
struct k_poll_event events[4] = {
K_POLL_EVENT_STATIC_INITIALIZER(K_POLL_TYPE_SEM_AVAILABLE,
K_POLL_MODE_NOTIFY_ONLY,
&my_sem, 0),
K_POLL_EVENT_STATIC_INITIALIZER(K_POLL_TYPE_FIFO_DATA_AVAILABLE,
K_POLL_MODE_NOTIFY_ONLY,
&my_fifo, 0),
K_POLL_EVENT_STATIC_INITIALIZER(K_POLL_TYPE_MSGQ_DATA_AVAILABLE,
K_POLL_MODE_NOTIFY_ONLY,
&my_msgq, 0),
K_POLL_EVENT_STATIC_INITIALIZER(K_POLL_TYPE_PIPE_DATA_AVAILABLE,
K_POLL_MODE_NOTIFY_ONLY,
&my_pipe, 0),
};
or at runtime
.. code-block:: c
struct k_poll_event events[4];
void some_init(void)
{
k_poll_event_init(&events[0],
K_POLL_TYPE_SEM_AVAILABLE,
K_POLL_MODE_NOTIFY_ONLY,
&my_sem);
k_poll_event_init(&events[1],
K_POLL_TYPE_FIFO_DATA_AVAILABLE,
K_POLL_MODE_NOTIFY_ONLY,
&my_fifo);
k_poll_event_init(&events[2],
K_POLL_TYPE_MSGQ_DATA_AVAILABLE,
K_POLL_MODE_NOTIFY_ONLY,
&my_msgq);
k_poll_event_init(&events[3],
K_POLL_TYPE_PIPE_DATA_AVAILABLE,
K_POLL_MODE_NOTIFY_ONLY,
&my_pipe);
// tags are left uninitialized if unused
}
After the events are initialized, the array can be passed to
:c:func:`k_poll`. A timeout can be specified to wait only for a specified
amount of time, or the special values :c:macro:`K_NO_WAIT` and
:c:macro:`K_FOREVER` to either not wait or wait until an event condition is
satisfied and not sooner.
Each semaphore or FIFO maintains a list of pollers, and as many events as the
application wants can wait on it.
Note that waiters are served in first-come-first-served order,
not in priority order.
In case of success, :c:func:`k_poll` returns 0. If it times out, it returns
-:c:macro:`EAGAIN`.
.. code-block:: c

    // assume there is no contention on this semaphore and FIFO
    // -EADDRINUSE will not occur; the semaphore and/or data will be available

    void do_stuff(void)
    {
        rc = k_poll(events, ARRAY_SIZE(events), K_MSEC(1000));
        if (rc == 0) {
            if (events[0].state == K_POLL_STATE_SEM_AVAILABLE) {
                k_sem_take(events[0].sem, K_NO_WAIT);
            } else if (events[1].state == K_POLL_STATE_FIFO_DATA_AVAILABLE) {
                data = k_fifo_get(events[1].fifo, K_NO_WAIT);
                // handle data
            } else if (events[2].state == K_POLL_STATE_MSGQ_DATA_AVAILABLE) {
                ret = k_msgq_get(events[2].msgq, buf, K_NO_WAIT);
                // handle data
            } else if (events[3].state == K_POLL_STATE_PIPE_DATA_AVAILABLE) {
                ret = k_pipe_get(events[3].pipe, buf, bytes_to_read, &bytes_read, min_xfer, K_NO_WAIT);
                // handle data
            }
        } else {
            // handle timeout
        }
    }
When :c:func:`k_poll` is called in a loop, the events state must be reset
to :c:macro:`K_POLL_STATE_NOT_READY` by the user.
.. code-block:: c

    void do_stuff(void)
    {
        for (;;) {
            rc = k_poll(events, ARRAY_SIZE(events), K_FOREVER);

            if (events[0].state == K_POLL_STATE_SEM_AVAILABLE) {
                k_sem_take(events[0].sem, K_NO_WAIT);
            } else if (events[1].state == K_POLL_STATE_FIFO_DATA_AVAILABLE) {
                data = k_fifo_get(events[1].fifo, K_NO_WAIT);
                // handle data
            } else if (events[2].state == K_POLL_STATE_MSGQ_DATA_AVAILABLE) {
                ret = k_msgq_get(events[2].msgq, buf, K_NO_WAIT);
                // handle data
            } else if (events[3].state == K_POLL_STATE_PIPE_DATA_AVAILABLE) {
                ret = k_pipe_get(events[3].pipe, buf, bytes_to_read, &bytes_read, min_xfer, K_NO_WAIT);
                // handle data
            }

            events[0].state = K_POLL_STATE_NOT_READY;
            events[1].state = K_POLL_STATE_NOT_READY;
            events[2].state = K_POLL_STATE_NOT_READY;
            events[3].state = K_POLL_STATE_NOT_READY;
        }
    }
Using k_poll_signal_raise()
===========================
One of the types of events is :c:macro:`K_POLL_TYPE_SIGNAL`: this is a "direct"
signal to a poll event. It can be seen as a lightweight binary semaphore that
only one thread can wait for.
A poll signal is a separate object of type :c:struct:`k_poll_signal` that
must be attached to a k_poll_event, similar to a semaphore or FIFO. It must
first be initialized either via :c:macro:`K_POLL_SIGNAL_INITIALIZER()` or
:c:func:`k_poll_signal_init`.
.. code-block:: c
struct k_poll_signal signal;
void do_stuff(void)
{
k_poll_signal_init(&signal);
}
It is signaled via the :c:func:`k_poll_signal_raise` function. This function
takes a user **result** parameter that is opaque to the API and can be used to
pass extra information to the thread waiting on the event.
.. code-block:: c
struct k_poll_signal signal;
// thread A
void do_stuff(void)
{
k_poll_signal_init(&signal);
struct k_poll_event events[1] = {
K_POLL_EVENT_INITIALIZER(K_POLL_TYPE_SIGNAL,
K_POLL_MODE_NOTIFY_ONLY,
&signal),
};
k_poll(events, 1, K_FOREVER);
int signaled, result;
k_poll_signal_check(&signal, &signaled, &result);
if (signaled && (result == 0x1337)) {
// A-OK!
} else {
// weird error
}
}
// thread B
void signal_do_stuff(void)
{
k_poll_signal_raise(&signal, 0x1337);
}
If the signal is to be polled in a loop, *both* its event state must be
reset to :c:macro:`K_POLL_STATE_NOT_READY` *and* its ``result`` must be
reset using :c:func:`k_poll_signal_reset()` on each iteration if it has
been signaled.
.. code-block:: c

    struct k_poll_signal signal;

    void do_stuff(void)
    {
        k_poll_signal_init(&signal);

        struct k_poll_event events[1] = {
            K_POLL_EVENT_INITIALIZER(K_POLL_TYPE_SIGNAL,
                                     K_POLL_MODE_NOTIFY_ONLY,
                                     &signal),
        };

        for (;;) {
            k_poll(events, 1, K_FOREVER);

            int signaled, result;

            k_poll_signal_check(&signal, &signaled, &result);

            if (signaled && (result == 0x1337)) {
                // A-OK!
            } else {
                // weird error
            }

            k_poll_signal_reset(&signal);
            events[0].state = K_POLL_STATE_NOT_READY;
        }
    }
Note that poll signals are not internally synchronized. A :c:func:`k_poll` call
that is passed a signal will return after any code in the system calls
:c:func:`k_poll_signal_raise()`. But if the signal is being
externally managed and reset via :c:func:`k_poll_signal_reset()`, it is
possible that by the time the application checks, the event state may
no longer be equal to :c:macro:`K_POLL_STATE_SIGNALED`, and a (naive)
application will miss events. Best practice is always to reset the
signal only from within the thread invoking the :c:func:`k_poll` loop, or else
to use some other event type which tracks event counts: semaphores and
FIFOs are more error-proof in this sense because they can't "miss"
events, architecturally.
Suggested Uses
**************
Use :c:func:`k_poll` to consolidate multiple threads that would be pending
on one object each, saving possibly large amounts of stack space.
Use a poll signal as a lightweight binary semaphore if only one thread pends on
it.
.. note::
Because objects are only signaled if no other thread is waiting for them to
become available and only one thread can poll on a specific object, polling
is best used when objects are not subject of contention between multiple
threads, basically when a single thread operates as a main "server" or
"dispatcher" for multiple objects and is the only one trying to acquire
these objects.
Configuration Options
*********************
Related configuration options:
* :kconfig:option:`CONFIG_POLL`
API Reference
*************
.. doxygengroup:: poll_apis
``` | /content/code_sandbox/doc/kernel/services/polling.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 2,812 |
```restructuredtext
.. _condvar:
Condition Variables
###################
A :dfn:`condition variable` is a synchronization primitive
that enables threads to wait until a particular condition occurs.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of condition variables can be defined (limited only by available RAM). Each
condition variable is referenced by its memory address.
To wait for a condition to become true, a thread can make use of a condition
variable.
A condition variable is essentially a queue of threads, on which a thread may
put itself when some state of execution (i.e., some condition) is not as
desired (by waiting on the condition). The function
:c:func:`k_condvar_wait` atomically performs the following steps:

#. Releases the specified mutex.
#. Puts the current thread in the condition variable's queue.

Some other thread, when it changes said state, can then wake one (or more)
of the waiting threads by signaling the condition using
:c:func:`k_condvar_signal` or :c:func:`k_condvar_broadcast`. A woken
thread then:

#. Re-acquires the mutex it previously released.
#. Returns from :c:func:`k_condvar_wait`.
A condition variable must be initialized before it can be used.
Implementation
**************
Defining a Condition Variable
=============================
A condition variable is defined using a variable of type :c:struct:`k_condvar`.
It must then be initialized by calling :c:func:`k_condvar_init`.
The following code defines a condition variable:
.. code-block:: c
struct k_condvar my_condvar;
k_condvar_init(&my_condvar);
Alternatively, a condition variable can be defined and initialized at compile time
by calling :c:macro:`K_CONDVAR_DEFINE`.
The following code has the same effect as the code segment above.
.. code-block:: c
K_CONDVAR_DEFINE(my_condvar);
Waiting on a Condition Variable
===============================
A thread can wait on a condition by calling :c:func:`k_condvar_wait`.
The following code waits on the condition variable.
.. code-block:: c

    K_MUTEX_DEFINE(mutex);
    K_CONDVAR_DEFINE(condvar);

    int main(void)
    {
        k_mutex_lock(&mutex, K_FOREVER);

        /* block this thread until another thread signals cond. While
         * blocked, the mutex is released, then re-acquired before this
         * thread is woken up and the call returns.
         */
        k_condvar_wait(&condvar, &mutex, K_FOREVER);

        ...

        k_mutex_unlock(&mutex);
    }
Signaling a Condition Variable
===============================
A condition variable is signaled by calling :c:func:`k_condvar_signal` to
wake one waiting thread, or :c:func:`k_condvar_broadcast` to wake all
waiting threads.
The following code builds on the example above.
.. code-block:: c
void worker_thread(void)
{
k_mutex_lock(&mutex, K_FOREVER);
/*
* Do some work and fulfill the condition
*/
...
...
k_condvar_signal(&condvar);
k_mutex_unlock(&mutex);
}
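
Because the condition itself lives in the surrounding program logic, a woken
thread should re-check its predicate in a loop before proceeding: another
thread may have consumed the condition between the signal and the mutex being
re-acquired. A sketch of this pattern, building on the example above (the
``data_ready`` flag is illustrative and not part of the API):

.. code-block:: c

    static bool data_ready;

    void consumer(void)
    {
        k_mutex_lock(&mutex, K_FOREVER);

        /* re-check the predicate after every wakeup */
        while (!data_ready) {
            k_condvar_wait(&condvar, &mutex, K_FOREVER);
        }

        /* consume the data while still holding the mutex */
        data_ready = false;

        k_mutex_unlock(&mutex);
    }

The corresponding producer sets ``data_ready`` while holding the mutex before
signaling the condition variable.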
Suggested Uses
**************
Use condition variables with a mutex to signal changing states (conditions) from
one thread to another thread.
Condition variables are not the condition itself and they are not events.
The condition is contained in the surrounding programming logic.
Mutexes alone are not designed for use as a notification/synchronization
mechanism. They are meant to provide mutually exclusive access to a shared
resource only.
Configuration Options
*********************
Related configuration options:
* None.
API Reference
**************
.. doxygengroup:: condvar_apis
``` | /content/code_sandbox/doc/kernel/services/synchronization/condvar.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 784 |
```restructuredtext
.. _semaphores_v2:
Semaphores
##########
A :dfn:`semaphore` is a kernel object that implements a traditional
counting semaphore.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of semaphores can be defined (limited only by available RAM). Each
semaphore is referenced by its memory address.
A semaphore has the following key properties:
* A **count** that indicates the number of times the semaphore can be taken.
A count of zero indicates that the semaphore is unavailable.
* A **limit** that indicates the maximum value the semaphore's count
can reach.
A semaphore must be initialized before it can be used. Its count must be set
to a non-negative value that is less than or equal to its limit.
A semaphore may be **given** by a thread or an ISR. Giving the semaphore
increments its count, unless the count is already equal to the limit.
A semaphore may be **taken** by a thread. Taking the semaphore
decrements its count, unless the semaphore is unavailable (i.e. at zero).
When a semaphore is unavailable a thread may choose to wait for it to be given.
Any number of threads may wait on an unavailable semaphore simultaneously.
When the semaphore is given, it is taken by the highest priority thread
that has waited longest.
.. note::
You may initialize a "full" semaphore (count equal to limit) to limit the number
of threads able to execute the critical section at the same time. You may also
initialize an empty semaphore (count equal to 0, with a limit greater than 0)
to create a gate through which no waiting thread may pass until the semaphore
is incremented. All standard use cases of the common semaphore are supported.
.. note::
The kernel does allow an ISR to take a semaphore, however the ISR must
not attempt to wait if the semaphore is unavailable.
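
For example, an ISR can attempt a non-blocking take with
:c:macro:`K_NO_WAIT` (using the ``my_sem`` semaphore defined in the sections
below; the handler name is illustrative):

.. code-block:: c

    void my_isr(void *arg)
    {
        /* ISRs must not block: take the semaphore only if it is
         * immediately available
         */
        if (k_sem_take(&my_sem, K_NO_WAIT) == 0) {
            /* semaphore obtained; consume the resource */
        } else {
            /* semaphore unavailable; do not wait inside an ISR */
        }
    }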
Implementation
**************
Defining a Semaphore
====================
A semaphore is defined using a variable of type :c:struct:`k_sem`.
It must then be initialized by calling :c:func:`k_sem_init`.
The following code defines a semaphore, then configures it as a binary
semaphore by setting its count to 0 and its limit to 1.
.. code-block:: c
struct k_sem my_sem;
k_sem_init(&my_sem, 0, 1);
Alternatively, a semaphore can be defined and initialized at compile time
by calling :c:macro:`K_SEM_DEFINE`.
The following code has the same effect as the code segment above.
.. code-block:: c
K_SEM_DEFINE(my_sem, 0, 1);
Giving a Semaphore
==================
A semaphore is given by calling :c:func:`k_sem_give`.
The following code builds on the example above, and gives the semaphore to
indicate that a unit of data is available for processing by a consumer thread.
.. code-block:: c
void input_data_interrupt_handler(void *arg)
{
/* notify thread that data is available */
k_sem_give(&my_sem);
...
}
Taking a Semaphore
==================
A semaphore is taken by calling :c:func:`k_sem_take`.
The following code builds on the example above, and waits up to 50 milliseconds
for the semaphore to be given.
A warning is issued if the semaphore is not obtained in time.
.. code-block:: c
void consumer_thread(void)
{
...
if (k_sem_take(&my_sem, K_MSEC(50)) != 0) {
printk("Input data not available!");
} else {
/* fetch available data */
...
}
...
}
Suggested Uses
**************
Use a semaphore to control access to a set of resources by multiple threads.
Use a semaphore to synchronize processing between producing and consuming
threads or ISRs.
Configuration Options
*********************
Related configuration options:
* None.
API Reference
**************
.. doxygengroup:: semaphore_apis
User Mode Semaphore API Reference
*********************************
The sys_sem type exists in user memory and works as a counting semaphore for
user mode threads when user mode is enabled. When user mode isn't enabled,
sys_sem behaves like k_sem.
.. doxygengroup:: user_semaphore_apis
``` | /content/code_sandbox/doc/kernel/services/synchronization/semaphores.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 906 |
```restructuredtext
.. _mutexes_v2:
Mutexes
#######
A :dfn:`mutex` is a kernel object that implements a traditional
reentrant mutex. A mutex allows multiple threads to safely share
an associated hardware or software resource by ensuring mutually exclusive
access to the resource.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of mutexes can be defined (limited only by available RAM). Each mutex
is referenced by its memory address.
A mutex has the following key properties:
* A **lock count** that indicates the number of times the mutex has been locked
by the thread that has locked it. A count of zero indicates that the mutex
is unlocked.
* An **owning thread** that identifies the thread that has locked the mutex,
when it is locked.
A mutex must be initialized before it can be used. This sets its lock count
to zero.
A thread that needs to use a shared resource must first gain exclusive rights
to access it by **locking** the associated mutex. If the mutex is already locked
by another thread, the requesting thread may choose to wait for the mutex
to be unlocked.
After locking a mutex, the thread may safely use the associated resource
for as long as needed; however, it is considered good practice to hold the lock
for as short a time as possible to avoid negatively impacting other threads
that want to use the resource. When the thread no longer needs the resource
it must **unlock** the mutex to allow other threads to use the resource.
Any number of threads may wait on a locked mutex simultaneously.
When the mutex becomes unlocked it is then locked by the highest-priority
thread that has waited the longest.
.. note::
Mutex objects are *not* designed for use by ISRs.
Reentrant Locking
=================
A thread is permitted to lock a mutex it has already locked.
This allows the thread to access the associated resource at a point
in its execution when the mutex may or may not already be locked.
A mutex that is repeatedly locked by a thread must be unlocked an equal number
of times before the mutex becomes fully unlocked so it can be claimed
by another thread.
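
The pattern above can be sketched with two functions, where the inner helper
locks a mutex the caller may already hold (the function names are
illustrative):

.. code-block:: c

    void update_resource(void)
    {
        /* safe whether or not the caller already holds the mutex:
         * the lock count simply increments
         */
        k_mutex_lock(&my_mutex, K_FOREVER);
        /* ... modify the shared resource ... */
        k_mutex_unlock(&my_mutex);
    }

    void do_transaction(void)
    {
        k_mutex_lock(&my_mutex, K_FOREVER);   /* lock count: 1 */
        update_resource();                    /* lock count: 2, then 1 */
        /* the mutex is fully unlocked only after the matching unlock */
        k_mutex_unlock(&my_mutex);            /* lock count: 0 */
    }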
Priority Inheritance
====================
The thread that has locked a mutex is eligible for :dfn:`priority inheritance`.
This means the kernel will *temporarily* elevate the thread's priority
if a higher priority thread begins waiting on the mutex. This allows the owning
thread to complete its work and release the mutex more rapidly by executing
at the same priority as the waiting thread. Once the mutex has been unlocked,
the unlocking thread resets its priority to the level it had before locking
that mutex.
.. note::
The :kconfig:option:`CONFIG_PRIORITY_CEILING` configuration option limits
how high the kernel can raise a thread's priority due to priority
inheritance. The default value of 0 permits unlimited elevation.
The owning thread's base priority is saved in the mutex when it obtains the
lock. Each time a higher priority thread waits on a mutex, the kernel adjusts
the owning thread's priority. When the owning thread releases the lock (or if
the high priority waiting thread times out), the kernel restores the thread's
base priority from the value saved in the mutex.
This works well for priority inheritance as long as only one locked mutex is
involved. However, if multiple mutexes are involved, sub-optimal behavior will
be observed if the mutexes are not unlocked in the reverse order to which the
owning thread's priority was previously raised. Consequently it is recommended
that a thread lock only a single mutex at a time when multiple mutexes are
shared between threads of different priorities.
Implementation
**************
Defining a Mutex
================
A mutex is defined using a variable of type :c:struct:`k_mutex`.
It must then be initialized by calling :c:func:`k_mutex_init`.
The following code defines and initializes a mutex.
.. code-block:: c
struct k_mutex my_mutex;
k_mutex_init(&my_mutex);
Alternatively, a mutex can be defined and initialized at compile time
by calling :c:macro:`K_MUTEX_DEFINE`.
The following code has the same effect as the code segment above.
.. code-block:: c
K_MUTEX_DEFINE(my_mutex);
Locking a Mutex
===============
A mutex is locked by calling :c:func:`k_mutex_lock`.
The following code builds on the example above, and waits indefinitely
for the mutex to become available if it is already locked by another thread.
.. code-block:: c
k_mutex_lock(&my_mutex, K_FOREVER);
The following code waits up to 100 milliseconds for the mutex to become
available, and gives a warning if the mutex does not become available.
.. code-block:: c
if (k_mutex_lock(&my_mutex, K_MSEC(100)) == 0) {
/* mutex successfully locked */
} else {
printf("Cannot lock XYZ display\n");
}
Unlocking a Mutex
=================
A mutex is unlocked by calling :c:func:`k_mutex_unlock`.
The following code builds on the example above,
and unlocks the mutex that was previously locked by the thread.
.. code-block:: c
k_mutex_unlock(&my_mutex);
Suggested Uses
**************
Use a mutex to provide exclusive access to a resource, such as a physical
device.
Configuration Options
*********************
Related configuration options:
* :kconfig:option:`CONFIG_PRIORITY_CEILING`
API Reference
*************
.. doxygengroup:: mutex_apis
Futex API Reference
*******************
k_futex is a lightweight mutual exclusion primitive designed to minimize
kernel involvement. Uncontended operation relies only on atomic access
to shared memory. k_futex objects are tracked as kernel objects and can live
in user memory so that any access bypasses the kernel object permission
management mechanism.
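
As a sketch of the intended usage pattern, assuming the
``k_futex_wait(futex, expected, timeout)`` and
``k_futex_wake(futex, wake_all)`` signatures and the futex's atomic ``val``
member (the flag protocol shown is an illustration, not a complete lock
implementation):

.. code-block:: c

    struct k_futex my_futex;

    void waiter(void)
    {
        /* sleep only while the futex value still holds the expected
         * value 0; the kernel re-checks the value before blocking
         */
        while (atomic_get(&my_futex.val) == 0) {
            k_futex_wait(&my_futex, 0, K_FOREVER);
        }
    }

    void waker(void)
    {
        atomic_set(&my_futex.val, 1);
        k_futex_wake(&my_futex, true);  /* true: wake all waiters */
    }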
.. doxygengroup:: futex_apis
User Mode Mutex API Reference
*****************************
sys_mutex behaves almost exactly like k_mutex, with the added advantage
that a sys_mutex instance can reside in user memory. When user mode isn't
enabled, sys_mutex behaves like k_mutex.
.. doxygengroup:: user_mutex_apis
``` | /content/code_sandbox/doc/kernel/services/synchronization/mutexes.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 1,293 |
```restructuredtext
.. _events:
Events
######
An :dfn:`event object` is a kernel object that implements traditional events.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of event objects can be defined (limited only by available RAM). Each
event object is referenced by its memory address. One or more threads may wait
on an event object until the desired set of events has been delivered to the
event object. When new events are delivered to the event object, all threads
whose wait conditions have been satisfied become ready simultaneously.
An event object has the following key properties:
* A 32-bit value that tracks which events have been delivered to it.
An event object must be initialized before it can be used.
Events may be **delivered** by a thread or an ISR. When delivering events, the
events may either overwrite the existing set of events or add to them in
a bitwise fashion. When overwriting the existing set of events, this is referred
to as setting. When adding to them in a bitwise fashion, this is referred to as
posting. Both posting and setting events have the potential to fulfill match
conditions of multiple threads waiting on the event object. All threads whose
match conditions have been met are made active at the same time.
Threads may wait on one or more events. They may either wait for all of the
requested events, or for any of them. Furthermore, threads making a wait request
have the option of resetting the current set of events tracked by the event
object prior to waiting. Care must be taken with this option when multiple
threads wait on the same event object.
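
For instance, a thread interested only in deliveries that occur after it
starts waiting can ask for the tracked events to be reset first (a sketch
using the ``my_event`` object defined below; note the caution above about
multiple waiters):

.. code-block:: c

    uint32_t events;

    /* true: clear the event object's tracked events before waiting, so
     * only events delivered after this call can satisfy the wait
     */
    events = k_event_wait(&my_event, 0x001, true, K_MSEC(50));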
.. note::
The kernel does allow an ISR to query an event object, however the ISR must
not attempt to wait for the events.
Implementation
**************
Defining an Event Object
========================
An event object is defined using a variable of type :c:struct:`k_event`.
It must then be initialized by calling :c:func:`k_event_init`.
The following code defines an event object.
.. code-block:: c
struct k_event my_event;
k_event_init(&my_event);
Alternatively, an event object can be defined and initialized at compile time
by calling :c:macro:`K_EVENT_DEFINE`.
The following code has the same effect as the code segment above.
.. code-block:: c
K_EVENT_DEFINE(my_event);
Setting Events
==============
Events in an event object are set by calling :c:func:`k_event_set`.
The following code builds on the example above, and sets the events tracked by
the event object to 0x001.
.. code-block:: c
void input_available_interrupt_handler(void *arg)
{
/* notify threads that data is available */
k_event_set(&my_event, 0x001);
...
}
Posting Events
==============
Events are posted to an event object by calling :c:func:`k_event_post`.
The following code builds on the example above, and posts a set of events to
the event object.
.. code-block:: c
void input_available_interrupt_handler(void *arg)
{
...
/* notify threads that more data is available */
k_event_post(&my_event, 0x120);
...
}
Waiting for Events
==================
Threads wait for events by calling :c:func:`k_event_wait`.
The following code builds on the example above, and waits up to 50 milliseconds
for any of the specified events to be posted. A warning is issued if none
of the events are posted in time.
.. code-block:: c
void consumer_thread(void)
{
uint32_t events;
events = k_event_wait(&my_event, 0xFFF, false, K_MSEC(50));
if (events == 0) {
printk("No input devices are available!");
} else {
/* Access the desired input device(s) */
...
}
...
}
Alternatively, the consumer thread may desire to wait for all the events
before continuing.
.. code-block:: c
void consumer_thread(void)
{
uint32_t events;
events = k_event_wait_all(&my_event, 0x121, false, K_MSEC(50));
if (events == 0) {
printk("At least one input device is not available!");
} else {
/* Access the desired input devices */
...
}
...
}
Suggested Uses
**************
Use events to indicate that a set of conditions have occurred.
Use events to pass small amounts of data to multiple threads at once.
Configuration Options
*********************
Related configuration options:
* :kconfig:option:`CONFIG_EVENTS`
API Reference
**************
.. doxygengroup:: event_apis
``` | /content/code_sandbox/doc/kernel/services/synchronization/events.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 1,000 |
```restructuredtext
.. _interrupts_v2:
Interrupts
##########
An :dfn:`interrupt service routine` (ISR) is a function that executes
asynchronously in response to a hardware or software interrupt.
An ISR normally preempts the execution of the current thread,
allowing the response to occur with very low overhead.
Thread execution resumes only once all ISR work has been completed.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of ISRs can be defined (limited only by available RAM), subject to
the constraints imposed by underlying hardware.
An ISR has the following key properties:
* An **interrupt request (IRQ) signal** that triggers the ISR.
* A **priority level** associated with the IRQ.
* An **interrupt service routine** that is invoked to handle the interrupt.
* An **argument value** that is passed to that function.
An :abbr:`IDT (Interrupt Descriptor Table)` or a vector table is used
to associate a given interrupt source with a given ISR.
Only a single ISR can be associated with a specific IRQ at any given time.
Multiple ISRs can utilize the same function to process interrupts,
allowing a single function to service a device that generates
multiple types of interrupts or to service multiple devices
(usually of the same type). The argument value passed to an ISR's function
allows the function to determine which interrupt has been signaled.
The kernel provides a default ISR for all unused IDT entries. This ISR
generates a fatal system error if an unexpected interrupt is signaled.
The kernel supports **interrupt nesting**. This allows an ISR to be preempted
in mid-execution if a higher priority interrupt is signaled. The lower
priority ISR resumes execution once the higher priority ISR has completed
its processing.
An ISR executes in the kernel's **interrupt context**. This context has its
own dedicated stack area (or, on some architectures, stack areas). The size
of the interrupt context stack must be capable of handling the execution of
multiple concurrent ISRs if interrupt
nesting support is enabled.
.. important::
Many kernel APIs can be used only by threads, and not by ISRs. In cases
where a routine may be invoked by both threads and ISRs the kernel
provides the :c:func:`k_is_in_isr` API to allow the routine to
alter its behavior depending on whether it is executing as part of
a thread or as part of an ISR.
.. _multi_level_interrupts:
Multi-level Interrupt Handling
==============================
A hardware platform can support more interrupt lines than natively-provided
through the use of one or more nested interrupt controllers. Sources of
hardware interrupts are combined into one line that is then routed to
the parent controller.
If nested interrupt controllers are supported, :kconfig:option:`CONFIG_MULTI_LEVEL_INTERRUPTS`
should be enabled, and :kconfig:option:`CONFIG_2ND_LEVEL_INTERRUPTS` and
:kconfig:option:`CONFIG_3RD_LEVEL_INTERRUPTS` configured as well, based on the
hardware architecture.
A unique 32-bit interrupt number is assigned with information
embedded in it to select and invoke the correct Interrupt
Service Routine (ISR). Each interrupt level is given a byte within this 32-bit
number, providing support for up to four interrupt levels using this scheme, as
illustrated and explained below:
.. code-block:: none

              9             2   0
        _ _ _ _ _ _ _ _ _ _ _ _          (LEVEL 1)
    5       |         A   |
  _ _ _ _ _ _ _     _ _ _ _ _ _ _        (LEVEL 2)
    |   C                   B
  _ _ _ _ _ _ _                          (LEVEL 3)
          D
There are three interrupt levels shown here.
* '-' means interrupt line and is numbered from 0 (right most).
* LEVEL 1 has 12 interrupt lines, with two lines (2 and 9) connected
to nested controllers and one device 'A' on line 4.
* One of the LEVEL 2 controllers has interrupt line 5 connected to
a LEVEL 3 nested controller and one device 'C' on line 3.
* The other LEVEL 2 controller has no nested controllers but has one
device 'B' on line 2.
* The LEVEL 3 controller has one device 'D' on line 2.
Here's how unique interrupt numbers are generated for each
hardware interrupt. Let's consider four interrupts shown above
as A, B, C, and D:
.. code-block:: none
A -> 0x00000004
B -> 0x00000302
C -> 0x00000409
D -> 0x00030609
.. note::
The bit positions for LEVEL 2 and onward are offset by 1, as 0 means that
interrupt number is not present for that level. For our example, the LEVEL 3
controller has device D on line 2, connected to the LEVEL 2 controller's line
5, that is connected to the LEVEL 1 controller's line 9 (2 -> 5 -> 9).
Because of the encoding offset for LEVEL 2 and onward, device D is given the
number 0x00030609.
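
The byte layout described in the note can be expressed as a small helper.
This is purely an illustration of the encoding rule, not the kernel's actual
internal macro:

```c
#include <stdint.h>

/* Encode a multi-level interrupt number: each level occupies one byte,
 * and levels beyond the first are stored offset by 1 so that a zero
 * byte means "no interrupt present at this level".
 */
uint32_t encode_irq(uint32_t level1, int has_l2, uint32_t level2,
                    int has_l3, uint32_t level3)
{
    uint32_t irq = level1;

    if (has_l2) {
        irq |= (level2 + 1) << 8;
    }
    if (has_l3) {
        irq |= (level3 + 1) << 16;
    }
    return irq;
}
```

For the devices above, ``encode_irq(4, 0, 0, 0, 0)`` gives ``0x00000004``
(A), ``encode_irq(2, 1, 2, 0, 0)`` gives ``0x00000302`` (B),
``encode_irq(9, 1, 3, 0, 0)`` gives ``0x00000409`` (C), and
``encode_irq(9, 1, 5, 1, 2)`` gives ``0x00030609`` (D).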
Preventing Interruptions
========================
In certain situations it may be necessary for the current thread to
prevent ISRs from executing while it is performing time-sensitive
or critical section operations.
A thread may temporarily prevent all IRQ handling in the system using
an **IRQ lock**. This lock can be applied even when it is already in effect,
so routines can use it without having to know if it is already in effect.
The thread must unlock its IRQ lock the same number of times it was locked
before interrupts can be once again processed by the kernel while the thread
is running.
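
In terms of API, the lock is taken with :c:func:`irq_lock` and released with
:c:func:`irq_unlock`, for example:

.. code-block:: c

    void critical_work(void)
    {
        /* returns an opaque key recording the previous interrupt state */
        unsigned int key = irq_lock();

        /* time-critical section: no ISRs are serviced here */

        /* restore the state captured by the matching irq_lock() */
        irq_unlock(key);
    }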
.. important::
The IRQ lock is thread-specific. If thread A locks out interrupts
then performs an operation that puts itself to sleep (e.g. sleeping
for N milliseconds), the thread's IRQ lock no longer applies once
thread A is swapped out and the next ready thread B starts to
run.
This means that interrupts can be processed while thread B is
running unless thread B has also locked out interrupts using its own
IRQ lock. (Whether interrupts can be processed while the kernel is
switching between two threads that are using the IRQ lock is
architecture-specific.)
When thread A eventually becomes the current thread once again, the kernel
re-establishes thread A's IRQ lock. This ensures thread A won't be
interrupted until it has explicitly unlocked its IRQ lock.
If thread A does not sleep but does make a higher-priority thread B
ready, the IRQ lock will inhibit any preemption that would otherwise
occur. Thread B will not run until the next :ref:`reschedule point
<scheduling_v2>` reached after releasing the IRQ lock.
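The nesting behaviour can be sketched with a toy, single-threaded model of a
key-based IRQ lock (the ``sim_*`` names are illustrative stand-ins, not the
kernel's arch-specific implementation): each lock returns the previous lock
state as a key, and each unlock restores it, so nested pairs compose.

.. code-block:: c

   #include <assert.h>
   #include <stdbool.h>
   #include <stdio.h>

   /* Toy model: irq_lock() returns the previous lock state as a "key";
    * irq_unlock(key) restores it, so nested lock/unlock pairs compose. */
   static bool irqs_locked;

   static unsigned int sim_irq_lock(void)
   {
       unsigned int key = irqs_locked;

       irqs_locked = true;
       return key;
   }

   static void sim_irq_unlock(unsigned int key)
   {
       irqs_locked = key;
   }

   int main(void)
   {
       unsigned int outer = sim_irq_lock();   /* first lock */
       unsigned int inner = sim_irq_lock();   /* nested lock */

       sim_irq_unlock(inner);                 /* inner unlock: still locked */
       assert(irqs_locked);

       sim_irq_unlock(outer);                 /* outer unlock: unlocked again */
       assert(!irqs_locked);

       printf("nested lock/unlock restored the unlocked state\n");
       return 0;
   }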
Alternatively, a thread may temporarily **disable** a specified IRQ
so its associated ISR does not execute when the IRQ is signaled.
The IRQ must be subsequently **enabled** to permit the ISR to execute.
.. important::
Disabling an IRQ prevents *all* threads in the system from being preempted
by the associated ISR, not just the thread that disabled the IRQ.
Zero Latency Interrupts
-----------------------
Preventing interruptions by applying an IRQ lock may increase the observed
interrupt latency. A high interrupt latency, however, may not be acceptable
for certain low-latency use-cases.
The kernel addresses such use-cases by allowing interrupts with critical
latency constraints to execute at a priority level that cannot be blocked
by interrupt locking. These interrupts are defined as
*zero-latency interrupts*. The support for zero-latency interrupts requires
:kconfig:option:`CONFIG_ZERO_LATENCY_IRQS` to be enabled. In addition to that, the
flag :c:macro:`IRQ_ZERO_LATENCY` must be passed to :c:macro:`IRQ_CONNECT` or
:c:macro:`IRQ_DIRECT_CONNECT` macros to configure the particular interrupt
with zero latency.
Zero-latency interrupts are expected to be used to manage hardware events
directly, and not to interoperate with the kernel code at all. They should
treat all kernel APIs as undefined behavior (i.e. an application that uses the
APIs inside a zero-latency interrupt context is responsible for directly
verifying correct behavior). Zero-latency interrupts may not modify any data
inspected by kernel APIs invoked from normal Zephyr contexts and shall not
generate exceptions that need to be handled synchronously (e.g. kernel panic).
.. important::
Zero-latency interrupts are supported on an architecture-specific basis.
The feature is currently implemented in the ARM Cortex-M architecture
variant.
Offloading ISR Work
===================
An ISR should execute quickly to ensure predictable system operation.
If time consuming processing is required the ISR should offload some or all
processing to a thread, thereby restoring the kernel's ability to respond
to other interrupts.
The kernel supports several mechanisms for offloading interrupt-related
processing to a thread.
* An ISR can signal a helper thread to do interrupt-related processing
using a kernel object, such as a FIFO, LIFO, or semaphore.
* An ISR can instruct the system workqueue thread to execute a work item.
(See :ref:`workqueues_v2`.)
When an ISR offloads work to a thread, there is typically a single context
switch to that thread when the ISR completes, allowing interrupt-related
processing to continue almost immediately. However, depending on the
priority of the thread handling the offload, it is possible that
the currently executing cooperative thread or other higher-priority threads
may execute before the thread handling the offload is scheduled.
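The offloading pattern can be illustrated with a portable, single-threaded
sketch (the ``fake_isr``/``worker_drain`` names are illustrative, not Zephyr
APIs): the ISR only records the event, and the lengthy processing happens
later in thread context.

.. code-block:: c

   #include <assert.h>
   #include <stdio.h>

   /* Minimal single-producer event queue: the "ISR" only records the
    * event, and the "worker thread" does the time-consuming part later. */
   #define QUEUE_SIZE 8

   static int queue[QUEUE_SIZE];
   static unsigned int head, tail;

   static void fake_isr(int event)
   {
       /* Keep ISR work minimal: store the event and return. */
       queue[head++ % QUEUE_SIZE] = event;
   }

   static int worker_drain(void)
   {
       int processed = 0;

       while (tail != head) {
           int event = queue[tail++ % QUEUE_SIZE];

           /* Lengthy processing would happen here, in thread context. */
           (void)event;
           processed++;
       }
       return processed;
   }

   int main(void)
   {
       fake_isr(1);
       fake_isr(2);
       fake_isr(3);

       assert(worker_drain() == 3);
       printf("3 events offloaded and processed in thread context\n");
       return 0;
   }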
Sharing interrupt lines
=======================
On some hardware platforms, the same interrupt line may be used
by different IPs. For example, interrupt 17 may be used by a DMA controller to
signal that a data transfer has been completed or by a DAI controller to signal
that the transfer FIFO has reached its watermark. To make this work, one would
have to either employ some special logic or find a workaround (for example, using
the shared_irq interrupt controller), neither of which scales very well.
To solve this problem, one may use shared interrupts, which can be enabled using
:kconfig:option:`CONFIG_SHARED_INTERRUPTS`. Whenever an attempt to register
a second ISR/argument pair on the same interrupt line is made (using
:c:macro:`IRQ_CONNECT` or :c:func:`irq_connect_dynamic`), the interrupt line will
become shared, meaning the two ISR/argument pairs (previous one and the one that
has just been registered) will be invoked each time the interrupt is triggered.
The entities that make use of an interrupt line in the shared interrupt context
are known as clients. The maximum number of allowed clients for an interrupt is
controlled by :kconfig:option:`CONFIG_SHARED_IRQ_MAX_NUM_CLIENTS`.
Interrupt sharing is transparent to the user. As such, the user may register
interrupts using :c:macro:`IRQ_CONNECT` and :c:func:`irq_connect_dynamic` as
they normally would. The interrupt sharing is taken care of behind the scenes.
Enabling the shared interrupt support and dynamic interrupt support will
allow users to dynamically disconnect ISRs using :c:func:`irq_disconnect_dynamic`.
After an ISR is disconnected, it is no longer invoked when the interrupt
line for which it was registered is triggered.
Please note that enabling :kconfig:option:`CONFIG_SHARED_INTERRUPTS` will
result in a non-negligible increase in the binary size. Use with caution.
Implementation
**************
Defining a regular ISR
======================
An ISR is defined at runtime by calling :c:macro:`IRQ_CONNECT`. It must
then be enabled by calling :c:func:`irq_enable`.
.. important::
IRQ_CONNECT() is not a C function and does some inline assembly magic
behind the scenes. All its arguments must be known at build time.
Drivers that have multiple instances may need to define per-instance
config functions to configure each instance of the interrupt.
The following code defines and enables an ISR.
.. code-block:: c
#define MY_DEV_IRQ 24 /* device uses IRQ 24 */
#define MY_DEV_PRIO 2 /* device uses interrupt priority 2 */
/* argument passed to my_isr(), in this case a pointer to the device */
#define MY_ISR_ARG DEVICE_GET(my_device)
#define MY_IRQ_FLAGS 0 /* IRQ flags */
void my_isr(void *arg)
{
... /* ISR code */
}
void my_isr_installer(void)
{
...
IRQ_CONNECT(MY_DEV_IRQ, MY_DEV_PRIO, my_isr, MY_ISR_ARG, MY_IRQ_FLAGS);
irq_enable(MY_DEV_IRQ);
...
}
Since the :c:macro:`IRQ_CONNECT` macro requires that all its parameters be
known at build time, in some cases this may not be acceptable. It is also
possible to install interrupts at runtime with
:c:func:`irq_connect_dynamic`. It is used in exactly the same way as
:c:macro:`IRQ_CONNECT`:
.. code-block:: c
void my_isr_installer(void)
{
...
irq_connect_dynamic(MY_DEV_IRQ, MY_DEV_PRIO, my_isr, MY_ISR_ARG,
MY_IRQ_FLAGS);
irq_enable(MY_DEV_IRQ);
...
}
Dynamic interrupts require the :kconfig:option:`CONFIG_DYNAMIC_INTERRUPTS` option to
be enabled. Removing or re-configuring a dynamic interrupt is currently
unsupported.
Defining a 'direct' ISR
=======================
Regular Zephyr interrupts introduce some overhead which may be unacceptable
for some low-latency use-cases. Specifically:
* The argument to the ISR is retrieved and passed to the ISR
* If power management is enabled and the system was idle, all the hardware
will be resumed from low-power state before the ISR is executed, which can be
very time-consuming
* Although some architectures will do this in hardware, other architectures
need to switch to the interrupt stack in code
* After the interrupt is serviced, the OS then performs some logic to
potentially make a scheduling decision.
Zephyr supports so-called 'direct' interrupts, which are installed via
:c:macro:`IRQ_DIRECT_CONNECT`. These direct interrupts have some special
implementation requirements and a reduced feature set; see the definition
of :c:macro:`IRQ_DIRECT_CONNECT` for details.
The following code demonstrates a direct ISR:
.. code-block:: c
#define MY_DEV_IRQ 24 /* device uses IRQ 24 */
#define MY_DEV_PRIO 2 /* device uses interrupt priority 2 */
/* argument passed to my_isr(), in this case a pointer to the device */
#define MY_IRQ_FLAGS 0 /* IRQ flags */
ISR_DIRECT_DECLARE(my_isr)
{
do_stuff();
ISR_DIRECT_PM(); /* PM done after servicing interrupt for best latency */
return 1; /* We should check if scheduling decision should be made */
}
void my_isr_installer(void)
{
...
IRQ_DIRECT_CONNECT(MY_DEV_IRQ, MY_DEV_PRIO, my_isr, MY_IRQ_FLAGS);
irq_enable(MY_DEV_IRQ);
...
}
Installation of dynamic direct interrupts is supported on an
architecture-specific basis. (The feature is currently implemented in the
ARM Cortex-M architecture variant. The dynamic direct interrupt feature is
exposed to the user via an ARM-only API.)
Sharing an interrupt line
=========================
The following code defines two ISRs using the same interrupt number.
.. code-block:: c
#define MY_DEV_IRQ 24 /* device uses INTID 24 */
#define MY_DEV_IRQ_PRIO 2 /* device uses interrupt priority 2 */
/* this argument may be anything */
#define MY_FST_ISR_ARG INT_TO_POINTER(1)
/* this argument may be anything */
#define MY_SND_ISR_ARG INT_TO_POINTER(2)
#define MY_IRQ_FLAGS 0 /* IRQ flags */
void my_first_isr(void *arg)
{
... /* some magic happens here */
}
void my_second_isr(void *arg)
{
... /* even more magic happens here */
}
void my_isr_installer(void)
{
...
IRQ_CONNECT(MY_DEV_IRQ, MY_DEV_IRQ_PRIO, my_first_isr, MY_FST_ISR_ARG, MY_IRQ_FLAGS);
IRQ_CONNECT(MY_DEV_IRQ, MY_DEV_IRQ_PRIO, my_second_isr, MY_SND_ISR_ARG, MY_IRQ_FLAGS);
...
}
The same restrictions regarding :c:macro:`IRQ_CONNECT` described in `Defining a regular ISR`_
are applicable here. If :kconfig:option:`CONFIG_SHARED_INTERRUPTS` is disabled, the above
code will generate a build error. Otherwise, the above code will result in the two ISRs
being invoked each time interrupt 24 is triggered.
If :kconfig:option:`CONFIG_SHARED_IRQ_MAX_NUM_CLIENTS` is set to a value lower than 2
(current number of clients), a build error will be generated.
If dynamic interrupts are enabled, :c:func:`irq_connect_dynamic` will allow sharing interrupts
during runtime. Exceeding the configured maximum number of allowed clients will result in
a failed assertion.
Dynamically disconnecting an ISR
================================
The following code defines two ISRs using the same interrupt number. The second
ISR is disconnected during runtime.
.. code-block:: c
#define MY_DEV_IRQ 24 /* device uses INTID 24 */
#define MY_DEV_IRQ_PRIO 2 /* device uses interrupt priority 2 */
/* this argument may be anything */
#define MY_FST_ISR_ARG INT_TO_POINTER(1)
/* this argument may be anything */
#define MY_SND_ISR_ARG INT_TO_POINTER(2)
#define MY_IRQ_FLAGS 0 /* IRQ flags */
void my_first_isr(void *arg)
{
... /* some magic happens here */
}
void my_second_isr(void *arg)
{
... /* even more magic happens here */
}
void my_isr_installer(void)
{
...
IRQ_CONNECT(MY_DEV_IRQ, MY_DEV_IRQ_PRIO, my_first_isr, MY_FST_ISR_ARG, MY_IRQ_FLAGS);
IRQ_CONNECT(MY_DEV_IRQ, MY_DEV_IRQ_PRIO, my_second_isr, MY_SND_ISR_ARG, MY_IRQ_FLAGS);
...
}
void my_isr_uninstaller(void)
{
...
irq_disconnect_dynamic(MY_DEV_IRQ, MY_DEV_IRQ_PRIO, my_first_isr, MY_FST_ISR_ARG, MY_IRQ_FLAGS);
...
}
The :c:func:`irq_disconnect_dynamic` call will result in interrupt 24 becoming
unshared, meaning the system will act as if the first :c:macro:`IRQ_CONNECT`
call never happened. This behaviour is only allowed if
:kconfig:option:`CONFIG_DYNAMIC_INTERRUPTS` is enabled, otherwise a linker
error will be generated.
Implementation Details
======================
Interrupt tables are set up at build time using some special build tools. The
details laid out here apply to all architectures except x86, which is
covered in the `x86 Details`_ section below.
The invocation of :c:macro:`IRQ_CONNECT` will declare an instance of
struct _isr_list which is placed in a special .intList section.
This section exists only during the intermediate build stages:
it is consumed by a Zephyr script that generates the interrupt tables,
and it is removed from the final build.
The script implements different parsers to process the data from the .intList
section and produce the required output.

The default parser generates C arrays whose entries (arguments and interrupt
handlers) are addresses taken directly from the .intList section entries.
It works with all architectures and compilers (with the exception mentioned above).
The limitation of this parser is that once the arrays are generated,
the code must not relocate: any relocation at this stage may leave an entry
in the interrupt array pointing to something other than the intended function.
This parser is therefore the more compatible option, but it prevents the use of
Link Time Optimization.
The local ISR declaration parser uses a different approach to construct
the same arrays at the binary level.
All array entries are declared and defined locally,
directly in the file where :c:macro:`IRQ_CONNECT` is used.
They are placed in a section with a unique, synthesized name.
The section name is then recorded in the .intList section and used to create
a linker script that places the created entry at the right place in memory.
This parser is limited to the supported architectures and toolchains, but in
return it preserves the object-relation information for the linker, thus
allowing Link Time Optimization.
Implementation using C arrays
-----------------------------
This is the default configuration available for all Zephyr supported architectures.
Any invocation of :c:macro:`IRQ_CONNECT` will declare an instance of
struct _isr_list which is placed in a special .intList section:
.. code-block:: c
struct _isr_list {
/** IRQ line number */
int32_t irq;
/** Flags for this IRQ, see ISR_FLAG_* definitions */
int32_t flags;
/** ISR to call */
void *func;
/** Parameter for non-direct IRQs */
void *param;
};
Zephyr is built in two phases; the first phase of the build produces
``${ZEPHYR_PREBUILT_EXECUTABLE}``.elf which contains all the entries in
the .intList section preceded by a header:
.. code-block:: c
struct {
void *spurious_irq_handler;
void *sw_irq_handler;
uint32_t num_isrs;
uint32_t num_vectors;
struct _isr_list isrs[]; <- of size num_isrs
};
This data consisting of the header and instances of struct _isr_list inside
``${ZEPHYR_PREBUILT_EXECUTABLE}``.elf is then used by the
gen_isr_tables.py script to generate a C file defining a vector table and
software ISR table that are then compiled and linked into the final
application.
The priority level of any interrupt is not encoded in these tables, instead
:c:macro:`IRQ_CONNECT` also has a runtime component which programs the desired
priority level of the interrupt to the interrupt controller. Some architectures
do not support the notion of interrupt priority, in which case the priority
argument is ignored.
Vector Table
~~~~~~~~~~~~
A vector table is generated when :kconfig:option:`CONFIG_GEN_IRQ_VECTOR_TABLE` is
enabled. This data structure is used natively by the CPU and is simply an
array of function pointers, where each element n corresponds to the IRQ handler
for IRQ line n, and the function pointers are:
#. For 'direct' interrupts declared with :c:macro:`IRQ_DIRECT_CONNECT`, the
handler function will be placed here.
#. For regular interrupts declared with :c:macro:`IRQ_CONNECT`, the address
of the common software IRQ handler is placed here. This code does common
kernel interrupt bookkeeping and looks up the ISR and parameter from the
software ISR table.
#. For interrupt lines that are not configured at all, the address of the
spurious IRQ handler will be placed here. The spurious IRQ handler
causes a system fatal error if encountered.
Some architectures (such as the Nios II internal interrupt controller) have a
common entry point for all interrupts and do not support a vector table, in
which case the :kconfig:option:`CONFIG_GEN_IRQ_VECTOR_TABLE` option should be
disabled.
Some architectures may reserve some initial vectors for system exceptions
and declare this in a table elsewhere, in which case
CONFIG_GEN_IRQ_START_VECTOR needs to be set to properly offset the indices
in the table.
SW ISR Table
~~~~~~~~~~~~
This is an array of struct _isr_table_entry:
.. code-block:: c
struct _isr_table_entry {
void *arg;
void (*isr)(void *);
};
This is used by the common software IRQ handler to look up the ISR and its
argument and execute it. The active IRQ line is looked up in an interrupt
controller register and used to index this table.
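A minimal, portable simulation of this dispatch (illustrative only; a real
system reads the active IRQ line from an interrupt controller register rather
than receiving it as a parameter) might look like:

.. code-block:: c

   #include <assert.h>
   #include <stdio.h>

   /* Shape of a software ISR table entry, as in the text. */
   struct _isr_table_entry {
       void *arg;
       void (*isr)(void *);
   };

   static int calls;

   static void my_isr(void *arg)
   {
       calls += *(int *)arg;
   }

   /* Toy dispatch: index the table by the active IRQ line and invoke
    * the registered ISR with its stored argument. */
   static void common_sw_irq_handler(struct _isr_table_entry *table, int irq)
   {
       table[irq].isr(table[irq].arg);
   }

   int main(void)
   {
       static int arg = 5;
       struct _isr_table_entry table[4] = {
           [2] = { .arg = &arg, .isr = my_isr },
       };

       common_sw_irq_handler(table, 2);
       assert(calls == 5);
       printf("ISR for line 2 invoked with its argument\n");
       return 0;
   }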
Shared SW ISR Table
~~~~~~~~~~~~~~~~~~~
This is an array of struct z_shared_isr_table_entry:
.. code-block:: c
struct z_shared_isr_table_entry {
struct _isr_table_entry clients[CONFIG_SHARED_IRQ_MAX_NUM_CLIENTS];
size_t client_num;
};
This table keeps track of the registered clients for each of the interrupt
lines. Whenever an interrupt line becomes shared, :c:func:`z_shared_isr` will
replace the currently registered ISR in _sw_isr_table. This special ISR will
iterate through the list of registered clients and invoke the ISRs.
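The client iteration can be sketched in portable C (a simulation of the
described behaviour, not the actual z_shared_isr implementation):

.. code-block:: c

   #include <assert.h>
   #include <stddef.h>
   #include <stdio.h>

   #define MAX_CLIENTS 2 /* stand-in for CONFIG_SHARED_IRQ_MAX_NUM_CLIENTS */

   struct _isr_table_entry {
       void *arg;
       void (*isr)(void *);
   };

   /* Same shape as the shared table entry shown in the text. */
   struct shared_entry {
       struct _isr_table_entry clients[MAX_CLIENTS];
       size_t client_num;
   };

   static int hits;

   static void client_isr(void *arg)
   {
       hits += (int)(size_t)arg;
   }

   /* Toy version of z_shared_isr: invoke every registered client. */
   static void shared_isr(struct shared_entry *entry)
   {
       for (size_t i = 0; i < entry->client_num; i++) {
           entry->clients[i].isr(entry->clients[i].arg);
       }
   }

   int main(void)
   {
       struct shared_entry entry = {
           .clients = {
               { .arg = (void *)1, .isr = client_isr },
               { .arg = (void *)2, .isr = client_isr },
           },
           .client_num = 2,
       };

       shared_isr(&entry);
       assert(hits == 3); /* both clients ran: 1 + 2 */
       printf("both clients invoked for the shared line\n");
       return 0;
   }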
Implementation using linker script
----------------------------------
This way of preparing and parsing the .intList section to implement the
interrupt vector arrays is called local ISR declaration.
The name comes from the fact that all the entries of the interrupt vector
arrays are created locally, at the place where the :c:macro:`IRQ_CONNECT`
macro is invoked.
Automatically generated linker scripts are then used to place them at the
right location in memory.

This option is enabled by selecting :kconfig:option:`CONFIG_ISR_TABLES_LOCAL_DECLARATION`.
If this configuration is supported by the architecture and toolchain in use,
:kconfig:option:`CONFIG_ISR_TABLES_LOCAL_DECLARATION_SUPPORTED` is set.
See the details of this option for information about the currently supported
configurations.
Any invocation of :c:macro:`IRQ_CONNECT` or :c:macro:`IRQ_DIRECT_CONNECT` will
declare an instance of struct _isr_list_sname which is placed in a special
.intList section:
.. code-block:: c
struct _isr_list_sname {
/** IRQ line number */
int32_t irq;
/** Flags for this IRQ, see ISR_FLAG_* definitions */
int32_t flags;
/** The section name */
const char sname[];
};
Note that the section name is stored in a flexible array member,
so the size of the initialized structure varies with the length of the
section name.
The whole entry is used by the script during the build of the application
and has all the information needed for proper interrupt placement.

Besides _isr_list_sname, the :c:macro:`IRQ_CONNECT` macro generates an entry
that will become part of the interrupt array:
.. code-block:: c
struct _isr_table_entry {
const void *arg;
void (*isr)(const void *);
};
This array is placed in a section with the name saved in the _isr_list_sname
structure.

The value created by the :c:macro:`IRQ_DIRECT_CONNECT` macro depends on the
architecture. It can be either a variable that points to the interrupt handler:
.. code-block:: c
static uintptr_t <unique name> = ((uintptr_t)func);
Or a naked function that implements a jump to the interrupt handler:
.. code-block:: c
static void <unique name>(void)
{
__asm(ARCH_IRQ_VECTOR_JUMP_CODE(func));
}
As with :c:macro:`IRQ_CONNECT`, the created variable or function is placed
in the section whose name is saved in the _isr_list_sname structure.
Files generated by the script
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The interrupt tables generator script creates 3 files:
isr_tables.c, isr_tables_swi.ld, and isr_tables_vt.ld.
The isr_tables.c file contains the structures for interrupts, direct interrupts and
shared interrupts (if enabled). It implements only the structures that
are not already implemented by the application, leaving a comment indicating
where the interrupts not implemented here can be found.

Two linker files are then used. The isr_tables_vt.ld file is included at the
place where the selected architecture requires the interrupt vectors to be located.
The isr_tables_swi.ld file describes the placement of the software interrupt table
elements. A separate file is required because the table may be placed in a writable
or a non-writable section, depending on the current configuration.
x86 Details
-----------
The x86 architecture has a special type of vector table called the Interrupt
Descriptor Table (IDT) which must be laid out in a certain way per the x86
processor documentation. It is still fundamentally a vector table, and the
:ref:`gen_idt.py` tool uses the .intList section to create it. However, on APIC-based
systems the indexes in the vector table do not correspond to the IRQ line. The
first 32 vectors are reserved for CPU exceptions, and all remaining vectors (up
to index 255) correspond to the priority level, in groups of 16. In this
scheme, interrupts of priority level 0 will be placed in vectors 32-47, level 1
48-63, and so forth. When the :ref:`gen_idt.py` tool is constructing the IDT, when it
configures an interrupt it will look for a free vector in the appropriate range
for the requested priority level and set the handler there.
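The allocation policy can be illustrated with a portable C sketch (a
simulation of the described scheme, not gen_idt.py's actual code):

.. code-block:: c

   #include <assert.h>
   #include <stdbool.h>
   #include <stdio.h>

   #define NUM_VECTORS 256
   #define EXC_VECTORS 32   /* first 32 vectors reserved for CPU exceptions */
   #define VECTORS_PER_PRIO 16

   static bool used[NUM_VECTORS];

   /* Toy version of the allocation policy described above: pick the first
    * free vector in the 16-entry group for the requested priority level.
    * Returns -1 if the group is full. */
   static int alloc_vector(int prio)
   {
       int base = EXC_VECTORS + prio * VECTORS_PER_PRIO;

       for (int v = base; v < base + VECTORS_PER_PRIO; v++) {
           if (!used[v]) {
               used[v] = true;
               return v;
           }
       }
       return -1;
   }

   int main(void)
   {
       assert(alloc_vector(0) == 32);  /* priority 0 group starts at 32 */
       assert(alloc_vector(0) == 33);  /* next free slot in the same group */
       assert(alloc_vector(1) == 48);  /* priority 1 group starts at 48 */
       printf("vectors allocated per priority group\n");
       return 0;
   }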
On x86 when an interrupt or exception vector is executed by the CPU, there is
no foolproof way to determine which vector was fired, so a software ISR table
indexed by IRQ line is not used. Instead, the :c:macro:`IRQ_CONNECT` call
creates a small assembly language function which calls the common interrupt
code in :c:func:`_interrupt_enter` with the ISR and parameter as arguments.
It is the address of this assembly interrupt stub which gets placed in the IDT.
For interrupts declared with :c:macro:`IRQ_DIRECT_CONNECT` the parameterless
ISR is placed directly in the IDT.
On systems where the position in the vector table corresponds to the
interrupt's priority level, the interrupt controller needs to know at
runtime what vector is associated with an IRQ line. :ref:`gen_idt.py` additionally
creates an _irq_to_interrupt_vector array which maps an IRQ line to its
configured vector in the IDT. This is used at runtime by :c:macro:`IRQ_CONNECT`
to program the IRQ-to-vector association in the interrupt controller.
For dynamic interrupts, the build must generate some 4-byte dynamic interrupt
stubs, one stub per dynamic interrupt in use. The number of stubs is controlled
by the :kconfig:option:`CONFIG_X86_DYNAMIC_IRQ_STUBS` option. Each stub pushes a
unique identifier which is then used to fetch the appropriate handler function
and parameter out of a table populated when the dynamic interrupt was
connected.
Going Beyond the Default Supported Number of Interrupts
-------------------------------------------------------
When generating interrupts in the multi-level configuration, 8 bits per level is the default
mask used when determining which level a given interrupt code belongs to. This can become
a problem when dealing with CPUs that support more than 255 interrupts per single
aggregator. In this case it may be desirable to override these defaults and use a custom
number of bits per level. Regardless of how many bits are used for each level, the total
number of bits across all levels must be less than or equal to 32, so that the encoded
value fits into a single 32-bit integer. To modify the number of bits per level, override
the default of 8 in `Kconfig.multilevel` by setting :kconfig:option:`CONFIG_1ST_LEVEL_INTERRUPT_BITS`
for the first level, :kconfig:option:`CONFIG_2ND_LEVEL_INTERRUPT_BITS` for the second level and
:kconfig:option:`CONFIG_3RD_LEVEL_INTERRUPT_BITS` for the third level. These options control the
length of the bit masks and the shifts to apply when generating interrupt values, checking an
interrupt's level, and converting interrupts to a different level. The logic controlling
this can be found in :file:`irq_multilevel.h`.
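A portable sketch of such a custom split (the bit widths below are
illustrative, mirroring the ``CONFIG_*_LEVEL_INTERRUPT_BITS`` idea; the second
level gets 10 bits so an aggregator can expose more than 255 lines):

.. code-block:: c

   #include <assert.h>
   #include <stdint.h>
   #include <stdio.h>

   /* Illustrative custom bit widths per level; 8 + 10 + 8 = 26 <= 32. */
   #define L1_BITS 8
   #define L2_BITS 10
   #define L3_BITS 8

   static uint32_t encode(uint32_t l2_line, uint32_t l1_line)
   {
       /* Level 2 keeps its +1 offset: 0 means "no level-2 interrupt". */
       return ((l2_line + 1) << L1_BITS) | l1_line;
   }

   static uint32_t level2_line(uint32_t code)
   {
       return ((code >> L1_BITS) & ((1u << L2_BITS) - 1)) - 1;
   }

   int main(void)
   {
       assert(L1_BITS + L2_BITS + L3_BITS <= 32);

       uint32_t code = encode(300, 9); /* line 300 needs more than 8 bits */

       assert(level2_line(code) == 300);
       printf("level-2 line 300 round-trips through a 10-bit field\n");
       return 0;
   }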
Suggested Uses
**************
Use a regular or direct ISR to perform interrupt processing that requires a
very rapid response, and can be done quickly without blocking.
.. note::
Interrupt processing that is time consuming, or involves blocking,
should be handed off to a thread. See `Offloading ISR Work`_ for
a description of various techniques that can be used in an application.
Configuration Options
*********************
Related configuration options:
* :kconfig:option:`CONFIG_ISR_STACK_SIZE`
Additional architecture-specific and device-specific configuration options
also exist.
API Reference
*************
.. doxygengroup:: isr_apis
.. _scheduling_v2:
Scheduling
##########
The kernel's priority-based scheduler allows an application's threads
to share the CPU.
Concepts
********
The scheduler determines which thread is allowed to execute
at any point in time; this thread is known as the **current thread**.
There are various points in time when the scheduler is given an
opportunity to change the identity of the current thread. These points
are called **reschedule points**. Some potential reschedule points are:
- transition of a thread from running state to a suspended or waiting
state, for example by :c:func:`k_sem_take` or :c:func:`k_sleep`.
- transition of a thread to the :ref:`ready state <thread_states>`, for
example by :c:func:`k_sem_give` or :c:func:`k_thread_start`
- return to thread context after processing an interrupt
- when a running thread invokes :c:func:`k_yield`
A thread **sleeps** when it voluntarily initiates an operation that
transitions itself to a suspended or waiting state.
Whenever the scheduler changes the identity of the current thread,
or when execution of the current thread is replaced by an ISR,
the kernel first saves the current thread's CPU register values.
These register values get restored when the thread later resumes execution.
Scheduling Algorithm
====================
The kernel's scheduler selects the highest priority ready thread
to be the current thread. When multiple ready threads of the same priority
exist, the scheduler chooses the one that has been waiting longest.
A thread's relative priority is primarily determined by its static priority.
However, when both earliest-deadline-first scheduling is enabled
(:kconfig:option:`CONFIG_SCHED_DEADLINE`) and a choice of threads have equal
static priority, then the thread with the earlier deadline is considered
to have the higher priority. Thus, when earliest-deadline-first scheduling is
enabled, two threads are only considered to have the same priority when both
their static priorities and deadlines are equal. The routine
:c:func:`k_thread_deadline_set` is used to set a thread's deadline.
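The resulting ordering rule can be sketched in portable C (an illustration of
the comparison described above, not the scheduler's actual code; the
``runs_before`` helper is hypothetical):

.. code-block:: c

   #include <assert.h>
   #include <stdint.h>
   #include <stdio.h>

   /* Illustrative thread record; deadlines are compared via a signed
    * difference so the comparison is safe across counter wraparound. */
   struct thread {
       int prio;           /* static priority: lower value = higher priority */
       uint32_t deadline;  /* only consulted when static priorities tie */
   };

   /* Returns nonzero if a should run before b under EDF tie-breaking. */
   static int runs_before(const struct thread *a, const struct thread *b)
   {
       if (a->prio != b->prio) {
           return a->prio < b->prio;
       }
       return (int32_t)(a->deadline - b->deadline) < 0;
   }

   int main(void)
   {
       struct thread hi   = { .prio = 1, .deadline = 900 };
       struct thread soon = { .prio = 5, .deadline = 100 };
       struct thread late = { .prio = 5, .deadline = 200 };

       assert(runs_before(&hi, &soon));    /* static priority wins first */
       assert(runs_before(&soon, &late));  /* equal priority: earlier deadline */
       printf("EDF only breaks ties between equal static priorities\n");
       return 0;
   }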
.. note::
Execution of ISRs takes precedence over thread execution,
so the execution of the current thread may be replaced by an ISR
at any time unless interrupts have been masked. This applies to both
cooperative threads and preemptive threads.
The kernel can be built with one of several choices for the ready queue
implementation, offering different choices between code size, constant factor
runtime overhead and performance scaling when many threads are added.
* Simple linked-list ready queue (:kconfig:option:`CONFIG_SCHED_DUMB`)
The scheduler ready queue will be implemented as a simple unordered list, with
very fast constant time performance for single threads and very low code size.
This implementation should be selected on systems with constrained code size
that will never see more than a small number (3, maybe) of runnable threads in
the queue at any given time. On most platforms (that are not otherwise using
the red/black tree) this results in a savings of ~2k of code size.
* Red/black tree ready queue (:kconfig:option:`CONFIG_SCHED_SCALABLE`)
The scheduler ready queue will be implemented as a red/black tree. This has
rather slower constant-time insertion and removal overhead, and on most
platforms (that are not otherwise using the red/black tree somewhere) requires
an extra ~2kb of code. The resulting behavior will scale cleanly and
quickly into the many thousands of threads.
Use this for applications needing many concurrent runnable threads (> 20 or
so). Most applications won't need this ready queue implementation.
* Traditional multi-queue ready queue (:kconfig:option:`CONFIG_SCHED_MULTIQ`)
When selected, the scheduler ready queue will be implemented as the
classic/textbook array of lists, one per priority.
This corresponds to the scheduler algorithm used in Zephyr versions prior to
1.12.
It incurs only a tiny code size overhead vs. the "dumb" scheduler and runs in
O(1) time in almost all circumstances with a very low constant factor. But it
requires a fairly large RAM budget to store those list heads, and its limited
features make it incompatible with deadline scheduling, which needs to sort
threads more finely, and with SMP affinity, which needs to traverse the
list of threads.
Typical applications with small numbers of runnable threads probably want the
DUMB scheduler.
The wait_q abstraction used in IPC primitives to pend threads for later wakeup
shares the same backend data structure choices as the scheduler, and can use
the same options.
* Scalable wait_q implementation (:kconfig:option:`CONFIG_WAITQ_SCALABLE`)
When selected, the wait_q will be implemented with a balanced tree. Choose
this if you expect to have many threads waiting on individual primitives.
There is a ~2kb code size increase over :kconfig:option:`CONFIG_WAITQ_DUMB` (which may
be shared with :kconfig:option:`CONFIG_SCHED_SCALABLE`) if the red/black tree is not
used elsewhere in the application, and pend/unpend operations on "small"
queues will be somewhat slower (though this is not generally a performance
path).
* Simple linked-list wait_q (:kconfig:option:`CONFIG_WAITQ_DUMB`)
When selected, the wait_q will be implemented with a doubly-linked list.
Choose this if you expect to have only a few threads blocked on any single
IPC primitive.
Cooperative Time Slicing
========================
Once a cooperative thread becomes the current thread, it remains
the current thread until it performs an action that makes it unready.
Consequently, if a cooperative thread performs lengthy computations,
it may cause an unacceptable delay in the scheduling of other threads,
including those of higher priority and equal priority.
.. image:: cooperative.svg
:align: center
To overcome such problems, a cooperative thread can voluntarily relinquish
the CPU from time to time to permit other threads to execute.
A thread can relinquish the CPU in two ways:
* Calling :c:func:`k_yield` puts the thread at the back of the scheduler's
prioritized list of ready threads, and then invokes the scheduler.
All ready threads whose priority is higher or equal to that of the
yielding thread are then allowed to execute before the yielding thread is
rescheduled. If no such ready threads exist, the scheduler immediately
reschedules the yielding thread without context switching.
* Calling :c:func:`k_sleep` makes the thread unready for a specified
time period. Ready threads of *all* priorities are then allowed to execute;
however, there is no guarantee that threads whose priority is lower
than that of the sleeping thread will actually be scheduled before
the sleeping thread becomes ready once again.
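The round-robin effect of the first option among equal-priority ready threads
can be sketched with a toy ready queue (illustrative only; thread IDs stand in
for real threads):

.. code-block:: c

   #include <assert.h>
   #include <stdio.h>

   /* Toy ready queue of equal-priority threads. yield() models the
    * behaviour described above: the current thread goes to the back of
    * the queue, and the next ready thread of the same priority runs. */
   #define NTHREADS 3

   static int queue[NTHREADS] = { 1, 2, 3 };

   static int yield(void)
   {
       int current = queue[0];

       for (int i = 0; i < NTHREADS - 1; i++) {
           queue[i] = queue[i + 1];
       }
       queue[NTHREADS - 1] = current;
       return queue[0]; /* the thread scheduled next */
   }

   int main(void)
   {
       assert(yield() == 2); /* thread 1 yields, thread 2 runs */
       assert(yield() == 3); /* thread 2 yields, thread 3 runs */
       assert(yield() == 1); /* thread 3 yields, thread 1 runs again */
       printf("equal-priority threads round-robin via yield\n");
       return 0;
   }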
Preemptive Time Slicing
=======================
Once a preemptive thread becomes the current thread, it remains
the current thread until a higher priority thread becomes ready,
or until the thread performs an action that makes it unready.
Consequently, if a preemptive thread performs lengthy computations,
it may cause an unacceptable delay in the scheduling of other threads,
including those of equal priority.
.. image:: preemptive.svg
:align: center
To overcome such problems, a preemptive thread can perform cooperative
time slicing (as described above), or the scheduler's time slicing capability
can be used to allow other threads of the same priority to execute.
.. image:: timeslicing.svg
:align: center
The scheduler divides time into a series of **time slices**, where slices
are measured in system clock ticks. The time slice size is configurable,
and it can be changed while the application is running.
At the end of every time slice, the scheduler checks to see if the current
thread is preemptible and, if so, implicitly invokes :c:func:`k_yield`
on behalf of the thread. This gives other ready threads of the same priority
the opportunity to execute before the current thread is scheduled again.
If no threads of equal priority are ready, the current thread remains
the current thread.
Threads with a priority higher than a specified limit are exempt from preemptive
time slicing, and are never preempted by a thread of equal priority.
This allows an application to use preemptive time slicing
only when dealing with lower priority threads that are less time-sensitive.
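
Assuming :kconfig:option:`CONFIG_TIMESLICING` is enabled, the slice size and
the priority ceiling can be set at run time; the values below are illustrative:

.. code-block:: c

   /* Give each eligible thread a 20 ms time slice. Only threads with a
    * numeric priority of 0 or higher (i.e. preemptible threads) are
    * sliced; threads with negative priorities are exempt.
    */
   k_sched_time_slice_set(20, 0);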
.. note::
The kernel's time slicing algorithm does *not* ensure that a set
of equal-priority threads receive an equitable amount of CPU time,
since it does not measure the amount of time a thread actually gets to
execute. However, the algorithm *does* ensure that a thread never executes
for longer than a single time slice without being required to yield.
Scheduler Locking
=================
A preemptible thread that does not wish to be preempted while performing
a critical operation can instruct the scheduler to temporarily treat it
as a cooperative thread by calling :c:func:`k_sched_lock`. This prevents
other threads from interfering while the critical operation is being performed.
Once the critical operation is complete the preemptible thread must call
:c:func:`k_sched_unlock` to restore its normal, preemptible status.
If a thread calls :c:func:`k_sched_lock` and subsequently performs an
action that makes it unready, the scheduler will switch the locking thread out
and allow other threads to execute. When the locking thread again
becomes the current thread, its non-preemptible status is maintained.
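
A minimal sketch of scheduler locking around a critical operation:

.. code-block:: c

   void update_shared_state(void)
   {
       k_sched_lock();    /* temporarily behave like a cooperative thread */

       /* ... perform the critical operation on shared data ... */

       k_sched_unlock();  /* restore normal preemptible status */
   }

Note that lock and unlock calls must be balanced for the thread to regain
its preemptible status.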
.. note::
Locking out the scheduler is a more efficient way for a preemptible thread
to prevent preemption than changing its priority level to a negative value.
.. _thread_sleeping:
Thread Sleeping
===============
A thread can call :c:func:`k_sleep` to delay its processing
for a specified time period. During the time the thread is sleeping
the CPU is relinquished to allow other ready threads to execute.
Once the specified delay has elapsed the thread becomes ready
and is eligible to be scheduled once again.
A sleeping thread can be woken up prematurely by another thread using
:c:func:`k_wakeup`. This technique can sometimes be used
to permit the secondary thread to signal the sleeping thread
that something has occurred *without* requiring the threads
to define a kernel synchronization object, such as a semaphore.
Waking up a thread that is not sleeping is allowed, but has no effect.
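
For example, a thread can park itself with :c:func:`k_sleep` and be released
early by another thread (``worker_tid`` is assumed to be the sleeping thread's
id):

.. code-block:: c

   /* in the sleeping thread: wait up to 500 ms, or until woken */
   k_sleep(K_MSEC(500));

   /* in another thread, to end the sleep prematurely: */
   k_wakeup(worker_tid);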
.. _busy_waiting:
Busy Waiting
============
A thread can call :c:func:`k_busy_wait` to perform a ``busy wait``
that delays its processing for a specified time period
*without* relinquishing the CPU to another ready thread.
A busy wait is typically used instead of thread sleeping
when the required delay is too short to warrant having the scheduler
context switch from the current thread to another thread and then back again.
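
For example, a short settling delay for a peripheral might be written as
follows (the 50 microsecond figure is illustrative):

.. code-block:: c

   /* spin for ~50 microseconds without relinquishing the CPU */
   k_busy_wait(50);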
Suggested Uses
**************
Use cooperative threads for device drivers and other performance-critical work.
Use cooperative threads to implement mutual exclusion without the need
for a kernel object, such as a mutex.
Use preemptive threads to give priority to time-sensitive processing
over less time-sensitive processing.
.. _cpu_idle:
CPU Idling
##########
Although normally reserved for the idle thread, in certain special
applications, a thread might want to make the CPU idle.
.. contents::
:local:
:depth: 2
Concepts
********
Making the CPU idle causes the kernel to pause all operations until an event,
normally an interrupt, wakes up the CPU. In a regular system, the idle thread
is responsible for this. However, in some constrained systems, another
thread may take over this duty.
Implementation
**************
Making the CPU idle
===================
Making the CPU idle is simple: call the k_cpu_idle() API. The CPU will stop
executing instructions until an event occurs. Most likely, the function will
be called within a loop. Note that in certain architectures, upon return,
k_cpu_idle() unconditionally unmasks interrupts.
.. code-block:: c
   static struct k_sem my_sem;
void my_isr(void *unused)
{
k_sem_give(&my_sem);
}
int main(void)
{
k_sem_init(&my_sem, 0, 1);
/* wait for semaphore from ISR, then do related work */
for (;;) {
/* wait for ISR to trigger work to perform */
if (k_sem_take(&my_sem, K_NO_WAIT) == 0) {
/* ... do processing */
}
/* put CPU to sleep to save power */
k_cpu_idle();
}
}
Making the CPU idle in an atomic fashion
========================================
It is sometimes necessary to do some work atomically before making the CPU
idle. In such a case, k_cpu_atomic_idle() should be used instead.

In fact, the previous example contains a race condition: an interrupt could
occur between the semaphore being taken (and found unavailable) and the CPU
being made idle again. In some systems, this can cause the CPU to idle until
*another* interrupt occurs, which might be *never*, thus hanging the
system completely. To prevent this, k_cpu_atomic_idle() should be used,
as in this example.
.. code-block:: c
   static struct k_sem my_sem;
void my_isr(void *unused)
{
k_sem_give(&my_sem);
}
int main(void)
{
k_sem_init(&my_sem, 0, 1);
for (;;) {
unsigned int key = irq_lock();
/*
* Wait for semaphore from ISR; if acquired, do related work, then
* go to next loop iteration (the semaphore might have been given
* again); else, make the CPU idle.
*/
if (k_sem_take(&my_sem, K_NO_WAIT) == 0) {
irq_unlock(key);
/* ... do processing */
} else {
/* put CPU to sleep to save power */
k_cpu_atomic_idle(key);
}
}
}
Suggested Uses
**************
Use k_cpu_atomic_idle() when a thread has to do some real work in addition to
idling the CPU to wait for an event. See example above.
Use k_cpu_idle() only when a thread is solely responsible for idling the CPU,
i.e. not doing any real work, as in the example below.
.. code-block:: c
int main(void)
{
/* ... do some system/application initialization */
/* thread is only used for CPU idling from this point on */
for (;;) {
k_cpu_idle();
}
}
.. note::
**Do not use these APIs unless absolutely necessary.** In a normal system,
the idle thread takes care of power management, including CPU idling.
API Reference
*************
.. doxygengroup:: cpu_idle_apis
.. _nothread:
Operation without Threads
#########################
Thread support is not necessary in some applications:
* Bootloaders
* Simple event-driven applications
* Examples intended to demonstrate core functionality
Thread support can be disabled by setting
:kconfig:option:`CONFIG_MULTITHREADING` to ``n``. Since this configuration has
a significant impact on Zephyr's functionality and testing of it has
been limited, there are conditions on what can be expected to work in
this configuration.
What Can be Expected to Work
****************************
These core capabilities shall function correctly when
:kconfig:option:`CONFIG_MULTITHREADING` is disabled:
* The :ref:`build system <application>`
* The ability to boot the application to ``main()``
* :ref:`Interrupt management <interrupts_v2>`
* The system clock including :c:func:`k_uptime_get`
* Timers, i.e. :c:struct:`k_timer`
* Non-sleeping delays, e.g. :c:func:`k_busy_wait`
* Sleeping via :c:func:`k_cpu_idle`
* Pre-``main()`` driver and subsystem initialization, e.g. :c:macro:`SYS_INIT`
* :ref:`kernel_memory_management_api`
* Specifically identified drivers in certain subsystems, listed below.
The expectations above affect selection of other features; for example
:kconfig:option:`CONFIG_SYS_CLOCK_EXISTS` cannot be set to ``n``.
What Cannot be Expected to Work
*******************************
Functionality that will not work when :kconfig:option:`CONFIG_MULTITHREADING`
is disabled includes the majority of the kernel API:
* :ref:`threads_v2`
* :ref:`scheduling_v2`
* :ref:`workqueues_v2`
* :ref:`polling_v2`
* :ref:`semaphores_v2`
* :ref:`mutexes_v2`
* :ref:`condvar`
* :ref:`kernel_data_passing_api`
.. contents::
:local:
:depth: 1
Subsystem Behavior Without Thread Support
*****************************************
The sections below list driver and functional subsystems that are
expected to work to some degree when :kconfig:option:`CONFIG_MULTITHREADING` is
disabled. Subsystems that are not listed here should not be expected to
work.
Some existing drivers within the listed subsystems do not work when
threading is disabled, but are within scope based on their subsystem, or
may be sufficiently isolated that supporting them on a particular
platform is low-impact. Enhancements to add support to existing
capabilities that were not originally implemented to work with threads
disabled will be considered.
Flash
=====
The :ref:`flash_api` is expected to work for all SoC flash peripheral
drivers. Bus-accessed devices like serial memories may not be
supported.
*List/table of supported drivers to go here*
GPIO
====
The :ref:`gpio_api` is expected to work for all SoC GPIO peripheral
drivers. Bus-accessed devices like GPIO extenders may not be supported.
*List/table of supported drivers to go here*
UART
====
A subset of the :ref:`uart_api` is expected to work for all SoC UART
peripheral drivers.
* Applications that select :kconfig:option:`CONFIG_UART_INTERRUPT_DRIVEN` may
work, depending on driver implementation.
* Applications that select :kconfig:option:`CONFIG_UART_ASYNC_API` may
work, depending on driver implementation.
* Applications that do not select either :kconfig:option:`CONFIG_UART_ASYNC_API`
or :kconfig:option:`CONFIG_UART_INTERRUPT_DRIVEN` are expected to work.
*List/table of supported drivers to go here, including which API options
are supported*
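
As a rough sketch, a thread-free application can use the polling API; the
console chosen node referenced here is an assumption about the devicetree:

.. code-block:: c

   /* assumes the devicetree defines a `zephyr,console` chosen node */
   const struct device *uart = DEVICE_DT_GET(DT_CHOSEN(zephyr_console));

   if (device_is_ready(uart)) {
       uart_poll_out(uart, '!');   /* polled output needs no threads */
   }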
.. _threads_v2:
Threads
#######
.. note::
There is also limited support for using :ref:`nothread`.
.. contents::
:local:
:depth: 2
This section describes kernel services for creating, scheduling, and deleting
independently executable threads of instructions.
A :dfn:`thread` is a kernel object that is used for application processing
that is too lengthy or too complex to be performed by an ISR.
Any number of threads can be defined by an application (limited only by
available RAM). Each thread is referenced by a :dfn:`thread id` that is assigned
when the thread is spawned.
A thread has the following key properties:
* A **stack area**, which is a region of memory used for the thread's stack.
The **size** of the stack area can be tailored to conform to the actual needs
of the thread's processing. Special macros exist to create and work with
stack memory regions.
* A **thread control block** for private kernel bookkeeping of the thread's
metadata. This is an instance of type :c:struct:`k_thread`.
* An **entry point function**, which is invoked when the thread is started.
Up to 3 **argument values** can be passed to this function.
* A **scheduling priority**, which instructs the kernel's scheduler how to
allocate CPU time to the thread. (See :ref:`scheduling_v2`.)
* A set of **thread options**, which allow the thread to receive special
treatment by the kernel under specific circumstances.
(See :ref:`thread_options_v2`.)
* A **start delay**, which specifies how long the kernel should wait before
starting the thread.
* An **execution mode**, which can either be supervisor or user mode.
By default, threads run in supervisor mode and allow access to
privileged CPU instructions, the entire memory address space, and
peripherals. User mode threads have a reduced set of privileges.
This depends on the :kconfig:option:`CONFIG_USERSPACE` option. See :ref:`usermode_api`.
.. _lifecycle_v2:
Lifecycle
***********
.. _spawning_thread:
Thread Creation
===============
A thread must be created before it can be used. The kernel initializes
the thread control block as well as one end of the stack portion. The remainder
of the thread's stack is typically left uninitialized.
Specifying a start delay of :c:macro:`K_NO_WAIT` instructs the kernel
to start thread execution immediately. Alternatively, the kernel can be
instructed to delay execution of the thread by specifying a timeout
value -- for example, to allow device hardware used by the thread
to become available.
The kernel allows a delayed start to be canceled before the thread begins
executing. A cancellation request has no effect if the thread has already
started. A thread whose delayed start was successfully canceled must be
re-spawned before it can be used.
Thread Termination
===================
Once a thread is started it typically executes forever. However, a thread may
synchronously end its execution by returning from its entry point function.
This is known as **termination**.
A thread that terminates is responsible for releasing any shared resources
it may own (such as mutexes and dynamically allocated memory)
prior to returning, since the kernel does *not* reclaim them automatically.
In some cases a thread may want to sleep until another thread terminates.
This can be accomplished with the :c:func:`k_thread_join` API. This
will block the calling thread until either the timeout expires, the target
thread self-exits, or the target thread aborts (either due to a
:c:func:`k_thread_abort` call or triggering a fatal error).
Once a thread has terminated, the kernel guarantees that no use will
be made of the thread struct. The memory of such a struct can then be
re-used for any purpose, including spawning a new thread. Note that
the thread must be fully terminated, which presents a race condition: a
thread's own logic may signal completion and be observed by another
thread before the kernel's termination processing is complete. Under
normal circumstances, application code should use :c:func:`k_thread_join` or
:c:func:`k_thread_abort` to synchronize on thread termination state
and not rely on signaling from within application logic.
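
A sketch of joining with a timeout (``worker_thread`` is a hypothetical
:c:struct:`k_thread` instance):

.. code-block:: c

   int rc = k_thread_join(&worker_thread, K_SECONDS(1));

   if (rc == -EAGAIN) {
       /* the thread is still running after one second */
   }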
Thread Aborting
===============
A thread may asynchronously end its execution by **aborting**. The kernel
automatically aborts a thread if the thread triggers a fatal error condition,
such as dereferencing a null pointer.
A thread can also be aborted by another thread (or by itself)
by calling :c:func:`k_thread_abort`. However, it is typically preferable
to signal a thread to terminate itself gracefully, rather than aborting it.
As with thread termination, the kernel does not reclaim shared resources
owned by an aborted thread.
.. note::
The kernel does not currently make any claims regarding an application's
ability to respawn a thread that aborts.
Thread Suspension
==================
A thread can be prevented from executing for an indefinite period of time
if it becomes **suspended**. The function :c:func:`k_thread_suspend`
can be used to suspend any thread, including the calling thread.
Suspending a thread that is already suspended has no additional effect.
Once suspended, a thread cannot be scheduled until another thread calls
:c:func:`k_thread_resume` to remove the suspension.
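
For example (``my_tid`` is assumed to be the id of a previously spawned
thread):

.. code-block:: c

   k_thread_suspend(my_tid);
   /* the thread cannot be scheduled until it is resumed */
   k_thread_resume(my_tid);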
.. note::
A thread can prevent itself from executing for a specified period of time
using :c:func:`k_sleep`. However, this is different from suspending
a thread since a sleeping thread becomes executable automatically when the
time limit is reached.
.. _thread_states:
Thread States
*************
A thread that has no factors that prevent its execution is deemed
to be **ready**, and is eligible to be selected as the current thread.
A thread that has one or more factors that prevent its execution
is deemed to be **unready**, and cannot be selected as the current thread.
The following factors make a thread unready:
* The thread has not been started.
* The thread is waiting for a kernel object to complete an operation.
(For example, the thread is taking a semaphore that is unavailable.)
* The thread is waiting for a timeout to occur.
* The thread has been suspended.
* The thread has terminated or aborted.
.. image:: thread_states.svg
:align: center
.. note::
Although the diagram above may appear to suggest that both **Ready** and
**Running** are distinct thread states, that is not the correct
interpretation. **Ready** is a thread state, and **Running** is a
schedule state that only applies to **Ready** threads.
Thread Stack objects
********************
Every thread requires its own stack buffer for the CPU to push context.
Depending on configuration, there are several constraints that must be
met:
- There may need to be additional memory reserved for memory management
structures
- If guard-based stack overflow detection is enabled, a small write-
protected memory management region must immediately precede the stack buffer
to catch overflows.
- If userspace is enabled, a separate fixed-size privilege elevation stack must
be reserved to serve as a private kernel stack for handling system calls.
- If userspace is enabled, the thread's stack buffer must be appropriately
sized and aligned such that a memory protection region may be programmed
to exactly fit.
The alignment constraints can be quite restrictive, for example some MPUs
require their regions to be of some power of two in size, and aligned to its
own size.
Because of this, portable code can't simply pass an arbitrary character buffer
to :c:func:`k_thread_create`. Special macros exist to instantiate stacks,
prefixed with ``K_KERNEL_STACK`` and ``K_THREAD_STACK``.
Kernel-only Stacks
==================
If it is known that a thread will never run in user mode, or the stack is
being used for special contexts like handling interrupts, it is best to
define stacks using the ``K_KERNEL_STACK`` macros.
These stacks save memory because an MPU region will never need to be
programmed to cover the stack buffer itself, and the kernel will not need
to reserve additional room for the privilege elevation stack, or memory
management data structures which only pertain to user mode threads.
Attempts from user mode to use stacks declared in this way will result in
a fatal error for the caller.
If ``CONFIG_USERSPACE`` is not enabled, the set of ``K_THREAD_STACK`` macros
have an identical effect to the ``K_KERNEL_STACK`` macros.
Thread stacks
=============
If it is known that a stack will need to host user threads, or if this
cannot be determined, define the stack with ``K_THREAD_STACK`` macros.
This may use more memory but the stack object is suitable for hosting
user threads.
If ``CONFIG_USERSPACE`` is not enabled, the set of ``K_THREAD_STACK`` macros
have an identical effect to the ``K_KERNEL_STACK`` macros.
.. _thread_priorities:
Thread Priorities
******************
A thread's priority is an integer value, and can be either negative or
non-negative.
Numerically lower priorities take precedence over numerically higher values.
For example, the scheduler gives thread A of priority 4 *higher* priority
over thread B of priority 7; likewise thread C of priority -2 has higher
priority than both thread A and thread B.
The scheduler distinguishes between two classes of threads,
based on each thread's priority.
* A :dfn:`cooperative thread` has a negative priority value.
Once it becomes the current thread, a cooperative thread remains
the current thread until it performs an action that makes it unready.
* A :dfn:`preemptible thread` has a non-negative priority value.
Once it becomes the current thread, a preemptible thread may be supplanted
at any time if a cooperative thread, or a preemptible thread of higher
or equal priority, becomes ready.
A thread's initial priority value can be altered up or down after the thread
has been started. Thus it is possible for a preemptible thread to become
a cooperative thread, and vice versa, by changing its priority.
.. note::
The scheduler does not make heuristic decisions to re-prioritize threads.
Thread priorities are set and changed only at the application's request.
The kernel supports a virtually unlimited number of thread priority levels.
The configuration options :kconfig:option:`CONFIG_NUM_COOP_PRIORITIES` and
:kconfig:option:`CONFIG_NUM_PREEMPT_PRIORITIES` specify the number of priority
levels for each class of thread, resulting in the following usable priority
ranges:
* cooperative threads: (-:kconfig:option:`CONFIG_NUM_COOP_PRIORITIES`) to -1
* preemptive threads: 0 to (:kconfig:option:`CONFIG_NUM_PREEMPT_PRIORITIES` - 1)
.. image:: priorities.svg
:align: center
For example, configuring 5 cooperative priorities and 10 preemptive priorities
results in the ranges -5 to -1 and 0 to 9, respectively.
.. _metairq_priorities:
Meta-IRQ Priorities
===================
When enabled (see :kconfig:option:`CONFIG_NUM_METAIRQ_PRIORITIES`), there is a special
subclass of cooperative priorities at the highest (numerically lowest)
end of the priority space: meta-IRQ threads. These are scheduled
according to their normal priority, but also have the special ability
to preempt all other threads (and other meta-IRQ threads) at lower
priorities, even if those threads are cooperative and/or have taken a
scheduler lock. Meta-IRQ threads are still threads, however,
and can still be interrupted by any hardware interrupt.
This behavior makes the act of unblocking a meta-IRQ thread (by any
means, e.g. creating it, calling k_sem_give(), etc.) into the
equivalent of a synchronous system call when done by a lower
priority thread, or an ARM-like "pended IRQ" when done from true
interrupt context. The intent is that this feature will be used to
implement interrupt "bottom half" processing and/or "tasklet" features
in driver subsystems. The thread, once woken, will be guaranteed to
run before the current CPU returns into application code.
Unlike similar features in other OSes, meta-IRQ threads are true
threads and run on their own stack (which must be allocated normally),
not the per-CPU interrupt stack. Design work to enable the use of the
IRQ stack on supported architectures is pending.
Note that because this breaks the promise made to cooperative
threads by the Zephyr API (namely that the OS won't schedule another
thread until the current thread deliberately blocks), it should be
used only with great care from application code. These are not simply
very high priority threads and should not be used as such.
.. _thread_options_v2:
Thread Options
***************
The kernel supports a small set of :dfn:`thread options` that allow a thread
to receive special treatment under specific circumstances. The set of options
associated with a thread are specified when the thread is spawned.
A thread that does not require any thread option has an option value of zero.
A thread that requires a thread option specifies it by name, using the
:literal:`|` character as a separator if multiple options are needed
(i.e. combine options using the bitwise OR operator).
The following thread options are supported.
:c:macro:`K_ESSENTIAL`
This option tags the thread as an :dfn:`essential thread`. This instructs
the kernel to treat the termination or aborting of the thread as a fatal
system error.
By default, the thread is not considered to be an essential thread.
:c:macro:`K_SSE_REGS`
This x86-specific option indicates that the thread uses the CPU's
SSE registers. Also see :c:macro:`K_FP_REGS`.
By default, the kernel does not attempt to save and restore the contents
of these registers when scheduling the thread.
:c:macro:`K_FP_REGS`
This option indicates that the thread uses the CPU's floating point
registers. This instructs the kernel to take additional steps to save
and restore the contents of these registers when scheduling the thread.
(For more information see :ref:`float_v2`.)
By default, the kernel does not attempt to save and restore the contents
of these registers when scheduling the thread.
:c:macro:`K_USER`
If :kconfig:option:`CONFIG_USERSPACE` is enabled, this thread will be created in
user mode and will have reduced privileges. See :ref:`usermode_api`. Otherwise
this flag does nothing.
:c:macro:`K_INHERIT_PERMS`
If :kconfig:option:`CONFIG_USERSPACE` is enabled, this thread will inherit all
kernel object permissions that the parent thread had, except the parent
thread object. See :ref:`usermode_api`.
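
For example, options are combined with the bitwise OR operator when spawning
a thread (the control block, stack, entry point, and priority names below are
illustrative):

.. code-block:: c

   k_tid_t tid = k_thread_create(&my_thread_data, my_stack_area,
                                 K_THREAD_STACK_SIZEOF(my_stack_area),
                                 my_entry_point, NULL, NULL, NULL,
                                 5, K_USER | K_INHERIT_PERMS, K_NO_WAIT);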
.. _custom_data_v2:
Thread Custom Data
******************
Every thread has a 32-bit :dfn:`custom data` area, accessible only by
the thread itself, and may be used by the application for any purpose
it chooses. The default custom data value for a thread is zero.
.. note::
Custom data support is not available to ISRs because they operate
within a single shared kernel interrupt handling context.
By default, thread custom data support is disabled. The configuration option
:kconfig:option:`CONFIG_THREAD_CUSTOM_DATA` can be used to enable support.
The :c:func:`k_thread_custom_data_set` and
:c:func:`k_thread_custom_data_get` functions are used to write and read
a thread's custom data, respectively. A thread can only access its own
custom data, and not that of another thread.
The following code uses the custom data feature to record the number of times
each thread calls a specific routine.
.. note::
Obviously, only a single routine can use this technique,
since it monopolizes the use of the custom data feature.
.. code-block:: c
int call_tracking_routine(void)
{
uint32_t call_count;
if (k_is_in_isr()) {
/* ignore any call made by an ISR */
} else {
call_count = (uint32_t)k_thread_custom_data_get();
call_count++;
k_thread_custom_data_set((void *)call_count);
}
/* do rest of routine's processing */
...
}
Use thread custom data to allow a routine to access thread-specific information,
by using the custom data as a pointer to a data structure owned by the thread.
Implementation
**************
Spawning a Thread
=================
A thread is spawned by defining its stack area and its thread control block,
and then calling :c:func:`k_thread_create`.
The stack area must be defined using :c:macro:`K_THREAD_STACK_DEFINE` or
:c:macro:`K_KERNEL_STACK_DEFINE` to ensure it is properly set up in memory.
The size parameter for the stack must be one of three values:
- The original requested stack size passed to
``K_THREAD_STACK`` or ``K_KERNEL_STACK`` family of stack instantiation
macros.
- For a stack object defined with the ``K_THREAD_STACK`` family of
  macros, the return value of :c:macro:`K_THREAD_STACK_SIZEOF()` for that
  object.
- For a stack object defined with the ``K_KERNEL_STACK`` family of
macros, the return value of :c:macro:`K_KERNEL_STACK_SIZEOF()` for that
object.
The thread spawning function returns its thread id, which can be used
to reference the thread.
The following code spawns a thread that starts immediately.
.. code-block:: c
#define MY_STACK_SIZE 500
#define MY_PRIORITY 5
extern void my_entry_point(void *, void *, void *);
K_THREAD_STACK_DEFINE(my_stack_area, MY_STACK_SIZE);
struct k_thread my_thread_data;
k_tid_t my_tid = k_thread_create(&my_thread_data, my_stack_area,
K_THREAD_STACK_SIZEOF(my_stack_area),
my_entry_point,
NULL, NULL, NULL,
MY_PRIORITY, 0, K_NO_WAIT);
Alternatively, a thread can be declared at compile time by calling
:c:macro:`K_THREAD_DEFINE`. Observe that the macro defines
the stack area, control block, and thread id variables automatically.
The following code has the same effect as the code segment above.
.. code-block:: c
#define MY_STACK_SIZE 500
#define MY_PRIORITY 5
extern void my_entry_point(void *, void *, void *);
K_THREAD_DEFINE(my_tid, MY_STACK_SIZE,
my_entry_point, NULL, NULL, NULL,
MY_PRIORITY, 0, 0);
.. note::
The delay parameter to :c:func:`k_thread_create` is a
:c:type:`k_timeout_t` value, so :c:macro:`K_NO_WAIT` means to start the
thread immediately. The corresponding parameter to :c:macro:`K_THREAD_DEFINE`
is a duration in integral milliseconds, so the equivalent argument is 0.
User Mode Constraints
---------------------
This section only applies if :kconfig:option:`CONFIG_USERSPACE` is enabled, and a user
thread tries to create a new thread. The :c:func:`k_thread_create` API is
still used, but there are additional constraints which must be met or the
calling thread will be terminated:
* The calling thread must have permissions granted on both the child thread
and stack parameters; both are tracked by the kernel as kernel objects.
* The child thread and stack objects must be in an uninitialized state,
i.e. it is not currently running and the stack memory is unused.
* The stack size parameter passed in must be equal to or less than the
bounds of the stack object when it was declared.
* The :c:macro:`K_USER` option must be used, as user threads can only create
other user threads.
* The :c:macro:`K_ESSENTIAL` option must not be used, user threads may not be
considered essential threads.
* The priority of the child thread must be a valid priority value, and equal to
or lower than the parent thread.
Dropping Permissions
====================
If :kconfig:option:`CONFIG_USERSPACE` is enabled, a thread running in supervisor mode
may perform a one-way transition to user mode using the
:c:func:`k_thread_user_mode_enter` API. This is a one-way operation which
will reset and zero the thread's stack memory. The thread will be marked
as non-essential.
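
A sketch of the transition (``user_entry`` is a hypothetical user-mode entry
point function):

.. code-block:: c

   int main(void)
   {
       /* ... supervisor-mode setup: init hardware, grant objects ... */

       k_thread_user_mode_enter(user_entry, NULL, NULL, NULL);
       /* never reached: the call does not return */
   }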
Terminating a Thread
====================
A thread terminates itself by returning from its entry point function.
The following code illustrates the ways a thread can terminate.
.. code-block:: c
   void my_entry_point(void *unused1, void *unused2, void *unused3)
{
while (1) {
...
if (<some condition>) {
return; /* thread terminates from mid-entry point function */
}
...
}
/* thread terminates at end of entry point function */
}
If :kconfig:option:`CONFIG_USERSPACE` is enabled, aborting a thread will additionally
mark the thread and stack objects as uninitialized so that they may be re-used.
Runtime Statistics
******************
Thread runtime statistics can be gathered and retrieved if
:kconfig:option:`CONFIG_THREAD_RUNTIME_STATS` is enabled, for example, total number of
execution cycles of a thread.
By default, the runtime statistics are gathered using the default kernel
timer. For some architectures, SoCs or boards, there are timers with higher
resolution available via timing functions. Using of these timers can be
enabled via :kconfig:option:`CONFIG_THREAD_RUNTIME_STATS_USE_TIMING_FUNCTIONS`.
Here is an example:
.. code-block:: c
k_thread_runtime_stats_t rt_stats_thread;
k_thread_runtime_stats_get(k_current_get(), &rt_stats_thread);
printk("Cycles: %llu\n", rt_stats_thread.execution_cycles);
Suggested Uses
**************
Use threads to handle processing that cannot be handled in an ISR.
Use separate threads to handle logically distinct processing operations
that can execute in parallel.
Configuration Options
**********************
Related configuration options:
* :kconfig:option:`CONFIG_MAIN_THREAD_PRIORITY`
* :kconfig:option:`CONFIG_MAIN_STACK_SIZE`
* :kconfig:option:`CONFIG_IDLE_STACK_SIZE`
* :kconfig:option:`CONFIG_THREAD_CUSTOM_DATA`
* :kconfig:option:`CONFIG_NUM_COOP_PRIORITIES`
* :kconfig:option:`CONFIG_NUM_PREEMPT_PRIORITIES`
* :kconfig:option:`CONFIG_TIMESLICING`
* :kconfig:option:`CONFIG_TIMESLICE_SIZE`
* :kconfig:option:`CONFIG_TIMESLICE_PRIORITY`
* :kconfig:option:`CONFIG_USERSPACE`
API Reference
**************
.. doxygengroup:: thread_apis
.. doxygengroup:: thread_stack_api
.. _system_threads_v2:
System Threads
##############
.. contents::
:local:
:depth: 2
A :dfn:`system thread` is a thread that the kernel spawns automatically
during system initialization.
The kernel spawns the following system threads:
**Main thread**
This thread performs kernel initialization, then calls the application's
:c:func:`main` function (if one is defined).
By default, the main thread uses the highest configured preemptible thread
priority (i.e. 0). If the kernel is not configured to support preemptible
threads, the main thread uses the lowest configured cooperative thread
priority (i.e. -1).
The main thread is an essential thread while it is performing kernel
initialization or executing the application's :c:func:`main` function;
this means a fatal system error is raised if the thread aborts. If
:c:func:`main` is not defined, or if it executes and then does a normal
return, the main thread terminates normally and no error is raised.
**Idle thread**
This thread executes when there is no other work for the system to do.
If possible, the idle thread activates the board's power management support
to save power; otherwise, the idle thread simply performs a "do nothing"
loop. The idle thread remains in existence as long as the system is running
and never terminates.
The idle thread always uses the lowest configured thread priority.
If this makes it a cooperative thread, the idle thread repeatedly
yields the CPU to allow the application's other threads to run when
they need to.
The idle thread is an essential thread, which means a fatal system error
is raised if the thread aborts.
Additional system threads may also be spawned, depending on the kernel
and board configuration options specified by the application. For example,
enabling the system workqueue spawns a system thread
that services the work items submitted to it. (See :ref:`workqueues_v2`.)
Implementation
**************
Writing a main() function
=========================
An application-supplied :c:func:`main` function begins executing once
kernel initialization is complete. The kernel does not pass any arguments
to the function.
The following code outlines a trivial :c:func:`main` function.
The function used by a real application can be as complex as needed.
.. code-block:: c
int main(void)
{
/* initialize a semaphore */
...
/* register an ISR that gives the semaphore */
...
/* monitor the semaphore forever */
while (1) {
/* wait for the semaphore to be given by the ISR */
...
/* do whatever processing is now needed */
...
}
}
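The outline above can be made concrete with a semaphore. In the following
sketch the names ``data_sem`` and ``my_isr`` are illustrative assumptions,
not part of any fixed API:

.. code-block:: c

   #include <zephyr/kernel.h>

   /* hypothetical semaphore given by the ISR, taken by main() */
   static K_SEM_DEFINE(data_sem, 0, 1);

   /* ISR installed elsewhere with IRQ_CONNECT(); gives the semaphore */
   void my_isr(const void *arg)
   {
       k_sem_give(&data_sem);
   }

   int main(void)
   {
       /* monitor the semaphore forever */
       while (1) {
           /* wait for the semaphore to be given by the ISR */
           k_sem_take(&data_sem, K_FOREVER);

           /* do whatever processing is now needed */
           printk("data ready\n");
       }
       return 0;
   }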
Suggested Uses
**************
Use the main thread to perform thread-based processing in an application
that only requires a single thread, rather than defining an additional
application-specific thread.
.. _workqueues_v2:
Workqueue Threads
#################
.. contents::
:local:
:depth: 1
A :dfn:`workqueue` is a kernel object that uses a dedicated thread to process
work items in a first in, first out manner. Each work item is processed by
calling the function specified by the work item. A workqueue is typically
used by an ISR or a high-priority thread to offload non-urgent processing
to a lower-priority thread so it does not impact time-sensitive processing.
Any number of workqueues can be defined (limited only by available RAM). Each
workqueue is referenced by its memory address.
A workqueue has the following key properties:
* A **queue** of work items that have been added, but not yet processed.
* A **thread** that processes the work items in the queue. The priority of the
thread is configurable, allowing it to be either cooperative or preemptive
as required.
Regardless of workqueue thread priority, the workqueue thread will yield
between each submitted work item, to prevent a cooperative workqueue from
starving other threads.
A workqueue must be initialized before it can be used. This sets its queue to
empty and spawns the workqueue's thread. The thread runs forever, but sleeps
when no work items are available.
.. note::
The behavior described here is changed from the Zephyr workqueue
implementation used prior to release 2.6. Among the changes are:
* Precise tracking of the status of cancelled work items, so that the
caller need not be concerned that an item may be processing when the
cancellation returns. Checking of return values on cancellation is still
required.
* Direct submission of delayable work items to the queue with
:c:macro:`K_NO_WAIT` rather than always going through the timeout API,
which could introduce delays.
* The ability to wait until a work item has completed or a queue has been
drained.
* Finer control of behavior when scheduling a delayable work item,
specifically allowing a previous deadline to remain unchanged when a work
item is scheduled again.
* Safe handling of work item resubmission when the item is being processed
on another workqueue.
Using the return values of :c:func:`k_work_busy_get()` or
:c:func:`k_work_is_pending()`, or measurements of remaining time until
delayable work is scheduled, should be avoided to prevent race conditions
of the type observed with the previous implementation. See also `Workqueue
Best Practices`_.
Work Item Lifecycle
********************
Any number of **work items** can be defined. Each work item is referenced
by its memory address.
A work item is assigned a **handler function**, which is the function
executed by the workqueue's thread when the work item is processed. This
function accepts a single argument, which is the address of the work item
itself. The work item also maintains information about its status.
A work item must be initialized before it can be used. This records the work
item's handler function and marks it as not pending.
A work item may be **queued** (:c:enumerator:`K_WORK_QUEUED`) by submitting it to a
workqueue by an ISR or a thread. Submitting a work item appends the work item
to the workqueue's queue. Once the workqueue's thread has processed all of
the preceding work items in its queue the thread will remove the next work
item from the queue and invoke the work item's handler function. Depending on
the scheduling priority of the workqueue's thread, and the work required by
other items in the queue, a queued work item may be processed quickly or it
may remain in the queue for an extended period of time.
A delayable work item may be **scheduled** (:c:enumerator:`K_WORK_DELAYED`) to a
workqueue; see `Delayable Work`_.
A work item will be **running** (:c:enumerator:`K_WORK_RUNNING`) when it is running
on a work queue, and may also be **canceling** (:c:enumerator:`K_WORK_CANCELING`)
if it started running before a thread has requested that it be cancelled.
A work item can be in multiple states; for example it can be:
* running on a queue;
* marked canceling (because a thread used :c:func:`k_work_cancel_sync()` to
wait until the work item completed);
* queued to run again on the same queue;
* scheduled to be submitted to a (possibly different) queue
*all simultaneously*. A work item that is in any of these states is **pending**
(:c:func:`k_work_is_pending()`) or **busy** (:c:func:`k_work_busy_get()`).
A handler function can use any kernel API available to threads. However,
operations that are potentially blocking (e.g. taking a semaphore) must be
used with care, since the workqueue cannot process subsequent work items in
its queue until the handler function finishes executing.
The single argument that is passed to a handler function can be ignored if it
is not required. If the handler function requires additional information about
the work it is to perform, the work item can be embedded in a larger data
structure. The handler function can then use the argument value to compute the
address of the enclosing data structure with :c:macro:`CONTAINER_OF`, and
thereby obtain access to the additional information it needs.
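The recovery of the enclosing structure can be illustrated in portable C
using ``offsetof``; the structure and field names below are invented for
illustration (Zephyr code would simply use the kernel's
:c:macro:`CONTAINER_OF` macro directly):

.. code-block:: c

   #include <stddef.h>
   #include <stdio.h>

   /* portable stand-in for Zephyr's CONTAINER_OF macro */
   #define CONTAINER_OF(ptr, type, field) \
       ((type *)(void *)((char *)(ptr) - offsetof(type, field)))

   struct work { int dummy; };          /* stand-in for struct k_work */

   struct device_info {
       struct work work;                /* embedded work item */
       char name[16];                   /* extra context for the handler */
   };

   static void handler(struct work *item)
   {
       /* recover the enclosing structure from the embedded member */
       struct device_info *dev = CONTAINER_OF(item, struct device_info, work);

       printf("handling work for %s\n", dev->name);
   }

   int main(void)
   {
       struct device_info dev = { .name = "FOO_dev" };

       handler(&dev.work);   /* prints "handling work for FOO_dev" */
       return 0;
   }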
A work item is typically initialized once and then submitted to a specific
workqueue whenever work needs to be performed. If an ISR or a thread attempts
to submit a work item that is already queued the work item is not affected;
the work item remains in its current place in the workqueue's queue, and
the work is only performed once.
A handler function is permitted to re-submit its work item argument
to the workqueue, since the work item is no longer queued at that time.
This allows the handler to execute work in stages, without unduly delaying
the processing of other work items in the workqueue's queue.
.. important::
A pending work item *must not* be altered until the item has been processed
by the workqueue thread. This means a work item must not be re-initialized
while it is busy. Furthermore, any additional information the work item's
handler function needs to perform its work must not be altered until
the handler function has finished executing.
.. _k_delayable_work:
Delayable Work
**************
An ISR or a thread may need to schedule a work item that is to be processed
only after a specified period of time, rather than immediately. This can be
done by **scheduling** a **delayable work item** to be submitted to a
workqueue at a future time.
A delayable work item contains a standard work item but adds fields that
record when and where the item should be submitted.
A delayable work item is initialized and scheduled to a workqueue in a similar
manner to a standard work item, although different kernel APIs are used. When
the schedule request is made the kernel initiates a timeout mechanism that is
triggered after the specified delay has elapsed. Once the timeout occurs the
kernel submits the work item to the specified workqueue, where it remains
queued until it is processed in the standard manner.
Note that the work handler used for delayable work still receives a pointer
to the underlying non-delayable work structure, which is not publicly
accessible from
:c:struct:`k_work_delayable`. To get access to an object that contains the
delayable work object use this idiom:
.. code-block:: c
static void work_handler(struct k_work *work)
{
struct k_work_delayable *dwork = k_work_delayable_from_work(work);
struct work_context *ctx = CONTAINER_OF(dwork, struct work_context,
timed_work);
...
Triggered Work
**************
The :c:func:`k_work_poll_submit` interface schedules a triggered work
item in response to a **poll event** (see :ref:`polling_v2`), that will
call a user-defined function when a monitored resource becomes available
or poll signal is raised, or a timeout occurs.
In contrast to :c:func:`k_poll`, the triggered work does not require
a dedicated thread waiting or actively polling for a poll event.
A triggered work item is a standard work item that has the following
added properties:
* A pointer to an array of poll events that will trigger work item
submissions to the workqueue
* A size of the array containing poll events.
A triggered work item is initialized and submitted to a workqueue in a similar
manner to a standard work item, although dedicated kernel APIs are used.
When a submit request is made, the kernel begins observing kernel objects
specified by the poll events. Once at least one of the observed kernel
objects changes state, the work item is submitted to the specified workqueue,
where it remains queued until it is processed in the standard manner.
.. important::
The triggered work item as well as the referenced array of poll events
have to be valid and cannot be modified for a complete triggered work
item lifecycle, from submission to work item execution or cancellation.
An ISR or a thread may **cancel** a triggered work item it has submitted
as long as it is still waiting for a poll event. In such case, the kernel
stops waiting for attached poll events and the specified work is not executed.
Otherwise the cancellation cannot be performed.
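As a sketch, triggered work driven by a poll signal might be set up as
follows; the names and the ten-second timeout are illustrative assumptions:

.. code-block:: c

   #include <zephyr/kernel.h>

   static struct k_poll_signal my_signal =
       K_POLL_SIGNAL_INITIALIZER(my_signal);

   static struct k_poll_event my_events[] = {
       K_POLL_EVENT_STATIC_INITIALIZER(K_POLL_TYPE_SIGNAL,
                                       K_POLL_MODE_NOTIFY_ONLY,
                                       &my_signal, 0),
   };

   static void triggered_handler(struct k_work *work)
   {
       printk("signal raised, doing deferred work\n");
   }

   static struct k_work_poll triggered_work;

   void setup_triggered_work(void)
   {
       k_work_poll_init(&triggered_work, triggered_handler);

       /* Submit to the system workqueue; the handler runs when the
        * signal is raised, or the submission times out after 10 s.
        */
       k_work_poll_submit(&triggered_work, my_events,
                          ARRAY_SIZE(my_events), K_SECONDS(10));
   }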
System Workqueue
*****************
The kernel defines a workqueue known as the *system workqueue*, which is
available to any application or kernel code that requires workqueue support.
The system workqueue is optional, and only exists if the application makes
use of it.
.. important::
Additional workqueues should only be defined when it is not possible
to submit new work items to the system workqueue, since each new workqueue
incurs a significant cost in memory footprint. A new workqueue can be
justified if it is not possible for its work items to co-exist with
existing system workqueue work items without an unacceptable impact;
for example, if the new work items perform blocking operations that
would delay other system workqueue processing to an unacceptable degree.
How to Use Workqueues
*********************
Defining and Controlling a Workqueue
====================================
A workqueue is defined using a variable of type :c:struct:`k_work_q`.
The workqueue is initialized by defining the stack area used by its
thread, initializing the :c:struct:`k_work_q`, either zeroing its
memory or calling :c:func:`k_work_queue_init`, and then calling
:c:func:`k_work_queue_start`. The stack area must be defined using
:c:macro:`K_THREAD_STACK_DEFINE` to ensure it is properly set up in
memory.
The following code defines and initializes a workqueue:
.. code-block:: c
#define MY_STACK_SIZE 512
#define MY_PRIORITY 5
K_THREAD_STACK_DEFINE(my_stack_area, MY_STACK_SIZE);
struct k_work_q my_work_q;
k_work_queue_init(&my_work_q);
k_work_queue_start(&my_work_q, my_stack_area,
K_THREAD_STACK_SIZEOF(my_stack_area), MY_PRIORITY,
NULL);
In addition, the queue identity and certain behavior related to thread
rescheduling can be controlled by the optional final parameter; see
:c:func:`k_work_queue_start()` for details.
The following API can be used to interact with a workqueue:
* :c:func:`k_work_queue_drain()` can be used to block the caller until the
work queue has no items left. Work items resubmitted from the workqueue
thread are accepted while a queue is draining, but work items from any other
thread or ISR are rejected. The restriction on submitting more work can be
extended past the completion of the drain operation in order to allow the
blocking thread to perform additional work while the queue is "plugged".
Note that draining a queue has no effect on scheduling or processing
delayable items, but if the queue is plugged and the deadline expires the
item will silently fail to be submitted.
* :c:func:`k_work_queue_unplug()` removes any previous block on submission to
the queue due to a previous drain operation.
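A hypothetical quiesce sequence using these two calls, assuming the
``my_work_q`` queue defined earlier, might look like:

.. code-block:: c

   void quiesce_queue(void)
   {
       /* Block until all queued items are processed; passing true
        * plugs the queue, keeping it closed to new submissions
        * after the drain completes.
        */
       k_work_queue_drain(&my_work_q, true);

       /* ... perform work that requires the queue to stay empty ... */

       /* re-open the queue for submissions */
       k_work_queue_unplug(&my_work_q);
   }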
Submitting a Work Item
======================
A work item is defined using a variable of type :c:struct:`k_work`. It must
be initialized by calling :c:func:`k_work_init`, unless it is defined using
:c:macro:`K_WORK_DEFINE` in which case initialization is performed at
compile-time.
An initialized work item can be submitted to the system workqueue by
calling :c:func:`k_work_submit`, or to a specified workqueue by
calling :c:func:`k_work_submit_to_queue`.
The following code demonstrates how an ISR can offload the printing
of error messages to the system workqueue. Note that if the ISR attempts
to resubmit the work item while it is still queued, the work item is left
unchanged and the associated error message will not be printed.
.. code-block:: c
struct device_info {
struct k_work work;
    char name[16];
} my_device;
void my_isr(void *arg)
{
...
if (error detected) {
k_work_submit(&my_device.work);
}
...
}
void print_error(struct k_work *item)
{
struct device_info *the_device =
CONTAINER_OF(item, struct device_info, work);
printk("Got error on device %s\n", the_device->name);
}
/* initialize name info for a device */
strcpy(my_device.name, "FOO_dev");
/* initialize work item for printing device's error messages */
k_work_init(&my_device.work, print_error);
/* install my_isr() as interrupt handler for the device (not shown) */
...
The following API can be used to check the status of or synchronize with the
work item:
* :c:func:`k_work_busy_get()` returns a snapshot of flags indicating work item
state. A zero value indicates the work is not scheduled, submitted, being
executed, or otherwise still being referenced by the workqueue
infrastructure.
* :c:func:`k_work_is_pending()` is a helper that indicates ``true`` if and only
if the work is scheduled, queued, or running.
* :c:func:`k_work_flush()` may be invoked from threads to block until the work
item has completed. It returns immediately if the work is not pending.
* :c:func:`k_work_cancel()` attempts to prevent the work item from being
executed. This may or may not be successful. This is safe to invoke
from ISRs.
* :c:func:`k_work_cancel_sync()` may be invoked from threads to block until
the work completes; it will return immediately if the cancellation was
successful or not necessary (the work wasn't submitted or running). This
can be used after :c:func:`k_work_cancel()` is invoked (from an ISR) to
confirm completion of an ISR-initiated cancellation.
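For example, a thread shutting down the hypothetical ``my_device`` from the
earlier example might synchronize with its work item like this:

.. code-block:: c

   void stop_device(void)
   {
       struct k_work_sync sync;

       /* Cancel, then block until the handler is certainly not
        * running; the return value reports whether the item was
        * still pending when the cancellation started.
        */
       if (k_work_cancel_sync(&my_device.work, &sync)) {
           printk("work was still pending\n");
       }
   }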
Scheduling a Delayable Work Item
================================
A delayable work item is defined using a variable of type
:c:struct:`k_work_delayable`. It must be initialized by calling
:c:func:`k_work_init_delayable`.
For delayed work there are two common use cases, depending on whether a
deadline should be extended if a new event occurs. An example is collecting
data that comes in asynchronously, e.g. characters from a UART associated with
a keyboard. There are two APIs that submit work after a delay:
* :c:func:`k_work_schedule()` (or :c:func:`k_work_schedule_for_queue()`)
schedules work to be executed at a specific time or after a delay. Further
attempts to schedule the same item with this API before the delay completes
will not change the time at which the item will be submitted to its queue.
Use this if the policy is to keep collecting data until a specified delay
since the **first** unprocessed data was received;
* :c:func:`k_work_reschedule()` (or :c:func:`k_work_reschedule_for_queue()`)
unconditionally sets the deadline for the work, replacing any previous
incomplete delay and changing the destination queue if necessary. Use this
if the policy is to keep collecting data until a specified delay since the
**last** unprocessed data was received.
If the work item is not scheduled both APIs behave the same. If
:c:macro:`K_NO_WAIT` is specified as the delay the behavior is as if the item
was immediately submitted directly to the target queue, without waiting for a
minimal timeout (unless :c:func:`k_work_schedule()` is used and a previous
delay has not completed).
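The keyboard example above might implement the second policy as follows; the
50 ms quiet time and the function names are illustrative assumptions:

.. code-block:: c

   static void process_input(struct k_work *work)
   {
       /* consume the characters buffered so far */
   }

   static K_WORK_DELAYABLE_DEFINE(input_work, process_input);

   void on_char_received(char c)
   {
       /* buffer c (not shown), then restart the quiet-time countdown:
        * each new character pushes the deadline out by 50 ms, so the
        * handler runs only once input has gone quiet.
        */
       k_work_reschedule(&input_work, K_MSEC(50));
   }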
Both also have variants that allow
control of the queue used for submission.
The helper function :c:func:`k_work_delayable_from_work()` can be used to get
a pointer to the containing :c:struct:`k_work_delayable` from a pointer to
:c:struct:`k_work` that is passed to a work handler function.
The following additional API can be used to check the status of or synchronize
with the work item:
* :c:func:`k_work_delayable_busy_get()` is the analog to :c:func:`k_work_busy_get()`
for delayable work.
* :c:func:`k_work_delayable_is_pending()` is the analog to
:c:func:`k_work_is_pending()` for delayable work.
* :c:func:`k_work_flush_delayable()` is the analog to :c:func:`k_work_flush()`
for delayable work.
* :c:func:`k_work_cancel_delayable()` is the analog to
:c:func:`k_work_cancel()` for delayable work; similarly with
:c:func:`k_work_cancel_delayable_sync()`.
Synchronizing with Work Items
=============================
While the state of both regular and delayable work items can be determined
from any context using :c:func:`k_work_busy_get()` and
:c:func:`k_work_delayable_busy_get()` some use cases require synchronizing
with work items after they've been submitted. :c:func:`k_work_flush()`,
:c:func:`k_work_cancel_sync()`, and :c:func:`k_work_cancel_delayable_sync()`
can be invoked from thread context to wait until the requested state has been
reached.
These APIs must be provided with a :c:struct:`k_work_sync` object that has no
application-inspectable components but is needed to provide the
synchronization objects. These objects should not be allocated on a stack if
the code is expected to work on architectures with
:kconfig:option:`CONFIG_KERNEL_COHERENCE`.
Workqueue Best Practices
************************
Avoid Race Conditions
=====================
Sometimes the data a work item must process is naturally thread-safe, for
example when it's put into a :c:struct:`k_queue` by some thread and processed
in the work thread. More often external synchronization is required to avoid
data races: cases where the work thread might inspect or manipulate shared
state that's being accessed by another thread or interrupt. Such state might
be a flag indicating that work needs to be done, or a shared object that is
filled by an ISR or thread and read by the work handler.
For simple flags :ref:`atomic_v2` may be sufficient. In other cases spin
locks (:c:struct:`k_spinlock`) or thread-aware locks (:c:struct:`k_sem`,
:c:struct:`k_mutex` , ...) may be used to ensure data races don't occur.
If the selected lock mechanism can :ref:`api_term_sleep` then allowing the
work thread to sleep will starve other work queue items, which may need to
make progress in order to get the lock released. Work handlers should try to
take the lock with its no-wait path. For example:
.. code-block:: c
static void work_handler(struct k_work *work)
{
struct work_context *parent = CONTAINER_OF(work, struct work_context,
work_item);
if (k_mutex_lock(&parent->lock, K_NO_WAIT) != 0) {
/* NB: Submit will fail if the work item is being cancelled. */
(void)k_work_submit(work);
return;
}
/* do stuff under lock */
k_mutex_unlock(&parent->lock);
/* do stuff without lock */
}
Be aware that if the lock is held by a thread with a lower priority than the
work queue the resubmission may starve the thread that would release the lock,
causing the application to fail. Where the idiom above is required a
delayable work item is preferred, and the work should be (re-)scheduled with a
non-zero delay to allow the thread holding the lock to make progress.
Note that submitting from the work handler can fail if the work item had been
cancelled. Generally this is acceptable, since the cancellation will complete
once the handler finishes. If it is not, the code above must take other steps
to notify the application that the work could not be performed.
Work items in isolation are self-locking, so you don't need to hold an
external lock just to submit or schedule them. Even if you use external state
protected by such a lock to prevent further resubmission, it's safe to do the
resubmit as long as you're sure that eventually the item will take its lock
and check that state to determine whether it should do anything. Where a
delayable work item is being rescheduled in its handler due to inability to
take the lock some other self-locking state, such as an atomic flag set by the
application/driver when the cancel is initiated, would be required to detect
the cancellation and avoid the cancelled work item being submitted again after
the deadline.
Check Return Values
===================
All work API functions return status of the underlying operation, and in many
cases it is important to verify that the intended result was obtained.
* Submitting a work item (:c:func:`k_work_submit_to_queue`) can fail if the
work is being cancelled or the queue is not accepting new items. If this
happens the work will not be executed, which could cause a subsystem that is
animated by work handler activity to become non-responsive.
* Asynchronous cancellation (:c:func:`k_work_cancel` or
:c:func:`k_work_cancel_delayable`) can complete while the work item is still
being run by a handler. Proceeding to manipulate state shared with the work
handler will result in data races that can cause failures.
Many race conditions have been present in Zephyr code because the results of
an operation were not checked.
There may be good reason to believe that a return value indicating that the
operation did not complete as expected is not a problem. In those cases the
code should clearly document this, by (1) casting the return value to ``void``
to indicate that the result is intentionally ignored, and (2) documenting what
happens in the unexpected case. For example:
.. code-block:: c
/* If this fails, the work handler will check pub->active and
* exit without transmitting.
*/
(void)k_work_cancel_delayable(&pub->timer);
However in such a case the following code must still avoid data races, as it
cannot guarantee that the work thread is not accessing work-related state.
Don't Optimize Prematurely
==========================
The workqueue API is designed to be safe when invoked from multiple threads
and interrupts. Attempts to externally inspect a work item's state and make
decisions based on the result are likely to create new problems.
So when new work comes in, just submit it. Don't attempt to "optimize" by
checking whether the work item is already submitted by inspecting snapshot
state with :c:func:`k_work_is_pending` or :c:func:`k_work_busy_get`, or
checking for a non-zero delay from
:c:func:`k_work_delayable_remaining_get()`. Those checks are fragile: a "busy"
indication can be obsolete by the time the result is acted on, and a "not-busy"
indication can also be wrong if work is submitted from multiple contexts, or
(for delayable work) if the deadline has completed but the work is still in
queued or running state.
A general best practice is to always maintain in shared state some condition
that can be checked by the handler to confirm whether there is work to be
done. This way you can use the work handler as the standard cleanup path:
rather than having to deal with cancellation and cleanup at points where items
are submitted, you may be able to have everything done in the work handler
itself.
A rare case where you could safely use :c:func:`k_work_is_pending` is as a
check to avoid invoking :c:func:`k_work_flush` or
:c:func:`k_work_cancel_sync`, if you are *certain* that nothing else might
submit the work while you're checking (generally because you're holding a lock
that prevents access to state used for submission).
Suggested Uses
**************
Use the system workqueue to defer complex interrupt-related processing from an
ISR to a shared thread. This allows the interrupt-related processing to be
done promptly without compromising the system's ability to respond to
subsequent interrupts, and does not require the application to define and
manage an additional thread to do the processing.
Configuration Options
**********************
Related configuration options:
* :kconfig:option:`CONFIG_SYSTEM_WORKQUEUE_STACK_SIZE`
* :kconfig:option:`CONFIG_SYSTEM_WORKQUEUE_PRIORITY`
* :kconfig:option:`CONFIG_SYSTEM_WORKQUEUE_NO_YIELD`
API Reference
**************
.. doxygengroup:: workqueue_apis
.. _fatal:
Fatal Errors
############
Software Errors Triggered in Source Code
****************************************
Zephyr provides several methods for inducing fatal error conditions through
either build-time checks, conditionally compiled assertions, or deliberately
invoked panic or oops conditions.
Runtime Assertions
==================
Zephyr provides some macros to perform runtime assertions which may be
conditionally compiled. Their definitions may be found in
:zephyr_file:`include/zephyr/sys/__assert.h`.
Assertions are enabled by setting the ``__ASSERT_ON`` preprocessor symbol to a
non-zero value. There are two ways to do this:
- Use the :kconfig:option:`CONFIG_ASSERT` and :kconfig:option:`CONFIG_ASSERT_LEVEL` kconfig
options.
- Add ``-D__ASSERT_ON=<level>`` to the project's CFLAGS, either on the
build command line or in a CMakeLists.txt.
The ``__ASSERT_ON`` method takes precedence over the kconfig option if both are
used.
Specifying an assertion level of 1 causes the compiler to issue warnings that
the kernel contains debug-type ``__ASSERT()`` statements; this reminder is
issued since assertion code is not normally present in a final product.
Specifying assertion level 2 suppresses these warnings.
Assertions are enabled by default when running Zephyr test cases, as
configured by the :kconfig:option:`CONFIG_TEST` option.
The policy for what to do when encountering a failed assertion is controlled
by the implementation of :c:func:`assert_post_action`. Zephyr provides
a default implementation with weak linkage which invokes a kernel oops if
the thread that failed the assertion was running in user mode, and a kernel
panic otherwise.
__ASSERT()
----------
The ``__ASSERT()`` macro can be used inside kernel and application code to
perform optional runtime checks which will induce a fatal error if the
check does not pass. The macro takes a string message which will be printed
to provide context to the assertion. In addition, the kernel will print
a text representation of the expression code that was evaluated, and the
file and line number where the assertion can be found.
For example:
.. code-block:: c
__ASSERT(foo == 0xF0CACC1A, "Invalid value of foo, got 0x%x", foo);
If at runtime ``foo`` had some unexpected value, the error produced may
look like the following:
.. code-block:: none
ASSERTION FAIL [foo == 0xF0CACC1A] @ ZEPHYR_BASE/tests/kernel/fatal/src/main.c:367
Invalid value of foo, got 0xdeadbeef
[00:00:00.000,000] <err> os: r0/a1: 0x00000004 r1/a2: 0x0000016f r2/a3: 0x00000000
[00:00:00.000,000] <err> os: r3/a4: 0x00000000 r12/ip: 0x00000000 r14/lr: 0x00000a6d
[00:00:00.000,000] <err> os: xpsr: 0x61000000
[00:00:00.000,000] <err> os: Faulting instruction address (r15/pc): 0x00009fe4
[00:00:00.000,000] <err> os: >>> ZEPHYR FATAL ERROR 4: Kernel panic
[00:00:00.000,000] <err> os: Current thread: 0x20000414 (main)
[00:00:00.000,000] <err> os: Halting system
__ASSERT_EVAL()
---------------
The ``__ASSERT_EVAL()`` macro can also be used inside kernel and application
code, with special semantics for the evaluation of its arguments.
It makes use of the ``__ASSERT()`` macro, but has some extra flexibility. It
allows the developer to specify different actions depending whether the
``__ASSERT()`` macro is enabled or not. This can be particularly useful to
prevent the compiler from generating diagnostics (errors, warnings or remarks)
about variables that are only used with ``__ASSERT()`` being assigned a value,
but otherwise unused when the ``__ASSERT()`` macro is disabled.
Consider the following example:
.. code-block:: c
int x;
x = foo();
__ASSERT(x != 0, "foo() returned zero!");
If ``__ASSERT()`` is disabled, then 'x' is assigned a value, but never used.
This type of situation can be resolved using the ``__ASSERT_EVAL()`` macro.
.. code-block:: c
__ASSERT_EVAL ((void) foo(),
int x = foo(),
x != 0,
"foo() returned zero!");
The first parameter tells ``__ASSERT_EVAL()`` what to do if ``__ASSERT()`` is
disabled. The second parameter tells ``__ASSERT_EVAL()`` what to do if
``__ASSERT()`` is enabled. The third and fourth parameters are the parameters
it passes to ``__ASSERT()``.
__ASSERT_NO_MSG()
-----------------
The ``__ASSERT_NO_MSG()`` macro can be used to perform an assertion that
reports the failed test and its location, but lacks additional debugging
information provided to assist the user in diagnosing the problem; its use is
discouraged.
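For example:

.. code-block:: c

   /* Reports the failed expression, file, and line on failure, but
    * provides no further context to the reader.
    */
   __ASSERT_NO_MSG(buf != NULL);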
Build Assertions
================
Zephyr provides two macros for performing build-time assertion checks.
These are evaluated completely at compile-time, and are always checked.
BUILD_ASSERT()
--------------
This has the same semantics as C's ``_Static_assert`` or C++'s
``static_assert``. If the evaluation fails, a build error will be generated by
the compiler. If the compiler supports it, the provided message will be printed
to provide further context.
Unlike ``__ASSERT()``, the message must be a static string, without
:c:func:`printf()`-like format codes or extra arguments.
For example, suppose this check fails:
.. code-block:: c
BUILD_ASSERT(FOO == 2000, "Invalid value of FOO");
With GCC, the output resembles:
.. code-block:: none
tests/kernel/fatal/src/main.c: In function 'test_main':
include/toolchain/gcc.h:28:37: error: static assertion failed: "Invalid value of FOO"
#define BUILD_ASSERT(EXPR, MSG) _Static_assert(EXPR, "" MSG)
^~~~~~~~~~~~~~
tests/kernel/fatal/src/main.c:370:2: note: in expansion of macro 'BUILD_ASSERT'
BUILD_ASSERT(FOO == 2000,
^~~~~~~~~~~~~~~~
Kernel Oops
===========
A kernel oops is a software-triggered fatal error invoked by
:c:func:`k_oops()`. This should be used to indicate an unrecoverable condition
in application logic.
The fatal error reason code generated will be ``K_ERR_KERNEL_OOPS``.
Kernel Panic
============
A kernel panic is a software-triggered fatal error invoked by
:c:func:`k_panic()`. This should be used to indicate that the Zephyr kernel is
in an unrecoverable state. Implementations of
:c:func:`k_sys_fatal_error_handler()` should not return if the kernel
encounters a panic condition, as the entire system needs to be reset.
Threads running in user mode are not permitted to invoke :c:func:`k_panic()`,
and doing so will generate a kernel oops instead. Otherwise, the fatal error
reason code generated will be ``K_ERR_KERNEL_PANIC``.
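The choice between the two calls can be sketched as follows. The ``k_oops()``/``k_panic()`` definitions here are recording stand-ins so the example is self-contained; the real kernel calls raise fatal errors and, in the panic case, must not be returned from:

```c
#include <string.h>

/* Recording stand-ins for illustration only: the real k_oops()/k_panic()
 * raise K_ERR_KERNEL_OOPS/K_ERR_KERNEL_PANIC fatal errors. */
static const char *last_fatal = "none";
#define k_oops()  (last_fatal = "oops")
#define k_panic() (last_fatal = "panic")

/* Unrecoverable *application* condition: oops aborts the current thread. */
void process_request(int request_id)
{
	if (request_id < 0) {
		k_oops();
		return;
	}
	/* ... handle the request ... */
}

/* Unrecoverable *kernel* state: panic signals the whole system must reset.
 * The 0x5AFE magic value is a made-up invariant for this sketch. */
void check_kernel_invariant(int table_magic)
{
	if (table_magic != 0x5AFE) {
		k_panic();
	}
}

const char *last_fatal_kind(void) { return last_fatal; }
```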
Exceptions
**********
Spurious Interrupts
===================
If the CPU receives a hardware interrupt on an interrupt line that has not had
a handler installed with ``IRQ_CONNECT()`` or :c:func:`irq_connect_dynamic()`,
then the kernel will generate a fatal error with the reason code
``K_ERR_SPURIOUS_IRQ``.
Stack Overflows
===============
In the event that a thread pushes more data onto its execution stack than its
stack buffer provides, the kernel may be able to detect this situation and
generate a fatal error with a reason code of ``K_ERR_STACK_CHK_FAIL``.
If a thread is running in user mode, then stack overflows are always caught,
as the thread will simply not have permission to write to adjacent memory
addresses outside of the stack buffer. Because this is enforced by the
memory protection hardware, there is no risk of data corruption to memory
that the thread would not otherwise be able to write to.
If a thread is running in supervisor mode, or if :kconfig:option:`CONFIG_USERSPACE` is
not enabled, stack overflows may or may not be caught, depending on
configuration. :kconfig:option:`CONFIG_HW_STACK_PROTECTION` is supported on some
architectures and will catch stack overflows in supervisor mode, including
when handling a system call on behalf of a user thread. Typically this is
implemented via dedicated CPU features, or read-only MMU/MPU guard regions
placed immediately adjacent to the stack buffer. These mechanisms can detect
an overflow, but cannot guarantee against data corruption, so an overflow
caught this way should be treated as a very serious condition impacting the
health of the entire system.
If a platform lacks memory management hardware support,
:kconfig:option:`CONFIG_STACK_SENTINEL` is a software-only stack overflow detection
feature which periodically checks if a sentinel value at the end of the stack
buffer has been corrupted. It does not require hardware support, but provides
no protection against data corruption. Since the checks are typically done at
interrupt exit, the overflow may be detected a nontrivial amount of time after
the stack actually overflowed.
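The sentinel check can be illustrated with a small simulation; the layout and sentinel value below are chosen for illustration and do not reflect Zephyr's actual implementation:

```c
#include <stdint.h>
#include <stdbool.h>

#define STACK_SENTINEL 0xF0F0F0F0u   /* illustrative magic value */

/* Simulated thread stack: since stacks typically grow downward, the
 * sentinel word sits at the lowest address, where an overflow from the
 * buffer above would clobber it first. */
struct sim_stack {
	uint32_t sentinel;
	uint8_t buf[256];
};

void sim_stack_init(struct sim_stack *s)
{
	s->sentinel = STACK_SENTINEL;
}

/* The periodic check: has the sentinel word been corrupted? */
bool sim_stack_overflowed(const struct sim_stack *s)
{
	return s->sentinel != STACK_SENTINEL;
}
```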
Finally, Zephyr supports GCC compiler stack canaries via
:kconfig:option:`CONFIG_STACK_CANARIES`. If enabled, the compiler will insert a canary
value randomly generated at boot into function stack frames, checking that the
canary has not been overwritten at function exit. If the check fails, the
compiler invokes :c:func:`__stack_chk_fail()`, whose Zephyr implementation
invokes a fatal stack overflow error. An error in this case does not indicate
that the entire stack buffer has overflowed, but instead that the current
function stack frame has been corrupted. See the compiler documentation for
more details.
Other Exceptions
================
Any other type of unhandled CPU exception will generate an error code of
``K_ERR_CPU_EXCEPTION``.
Fatal Error Handling
********************
The policy for what to do when encountering a fatal error is determined by the
implementation of the :c:func:`k_sys_fatal_error_handler()` function. This
function has a default implementation with weak linkage that calls
``LOG_PANIC()`` to dump all pending logging messages and then unconditionally
halts the system with :c:func:`k_fatal_halt()`.
Applications are free to implement their own error handling policy by
overriding the implementation of :c:func:`k_sys_fatal_error_handler()`.
If the implementation returns, the faulting thread will be aborted and
the system will otherwise continue to function. See the documentation for
this function for additional details and constraints.
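A sketch of such an override is shown below. The ``struct arch_esf`` forward declaration and the ``K_ERR_KERNEL_OOPS`` value are stand-ins so the sketch compiles on its own; a real application would pull these from Zephyr's fatal error headers:

```c
#include <stdio.h>
#include <stdlib.h>

/* Stand-in definitions for illustration; in a real application these
 * come from Zephyr's fatal error headers. */
struct arch_esf;                 /* architecture exception stack frame */
#define K_ERR_KERNEL_OOPS 3      /* hypothetical value for this sketch */

static unsigned int last_reason;

/* Application override: log the reason, let the kernel abort only the
 * faulting thread for an oops, and halt for anything worse. */
void k_sys_fatal_error_handler(unsigned int reason,
			       const struct arch_esf *esf)
{
	(void)esf;
	last_reason = reason;
	fprintf(stderr, "fatal error, reason %u\n", reason);

	if (reason == K_ERR_KERNEL_OOPS) {
		return;          /* faulting thread aborted; system continues */
	}

	abort();                 /* stand-in for halting the system */
}

unsigned int recorded_reason(void) { return last_reason; }
```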
API Reference
*************
.. doxygengroup:: fatal_apis
``` | /content/code_sandbox/doc/kernel/services/other/fatal.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 2,411 |
```restructuredtext
.. _thread_local_storage:
Thread Local Storage (TLS)
##########################
Thread Local Storage (TLS) allows variables to be allocated on a per-thread
basis. These variables are stored in the thread stack, which means every
thread has its own copy of these variables.
Zephyr currently requires toolchain support for TLS.
Configuration
*************
To enable thread local storage in Zephyr, :kconfig:option:`CONFIG_THREAD_LOCAL_STORAGE`
needs to be enabled. Note that this option may not be available if
the architecture or the SoC does not have the hidden option
:kconfig:option:`CONFIG_ARCH_HAS_THREAD_LOCAL_STORAGE` enabled, which means
the architecture or the SoC does not have the necessary code to support
thread local storage and/or the toolchain does not support TLS.
:kconfig:option:`CONFIG_ERRNO_IN_TLS` can be enabled together with
:kconfig:option:`CONFIG_ERRNO` to let the variable ``errno`` be a thread local
variable. This allows user threads to access the value of ``errno`` without
making a system call.
Declaring and Using Thread Local Variables
******************************************
The keyword ``__thread`` can be used to declare thread local variables.
For example, to declare a thread local variable in header files:
.. code-block:: c
extern __thread int i;
And to declare the actual variable in source files:
.. code-block:: c
__thread int i;
Keyword ``static`` can also be used to limit the variable within a source file:
.. code-block:: c
static __thread int j;
Using a thread local variable is the same as using any other variable, for example:
.. code-block:: c
void testing(void) {
i = 10;
}
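The per-thread isolation can be demonstrated on a POSIX host; pthreads are used here purely for illustration, and Zephyr threads behave the same way with ``__thread`` variables:

```c
#include <pthread.h>

/* Each thread gets its own private copy of this variable. */
static __thread int counter;

static void *worker(void *arg)
{
	int n = *(int *)arg;

	for (int i = 0; i < n; i++) {
		counter++;       /* touches only this thread's copy */
	}
	*(int *)arg = counter;   /* report this thread's final value */
	return NULL;
}

/* Returns 0 if neither thread's counter bled into the other's. */
int run_demo(void)
{
	pthread_t t1, t2;
	int a = 3, b = 5;

	pthread_create(&t1, NULL, worker, &a);
	pthread_create(&t2, NULL, worker, &b);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);

	return (a == 3 && b == 5) ? 0 : -1;
}
```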
``` | /content/code_sandbox/doc/kernel/services/other/thread_local_storage.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 361 |
```restructuredtext
.. _atomic_v2:
Atomic Services
###############
An :dfn:`atomic variable` is one that can be read and modified
by threads and ISRs in an uninterruptible manner. It is a 32-bit variable on
32-bit machines and a 64-bit variable on 64-bit machines.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of atomic variables can be defined (limited only by available RAM).
Using the kernel's atomic APIs to manipulate an atomic variable
guarantees that the desired operation occurs correctly,
even if higher priority contexts also manipulate the same variable.
The kernel also supports the atomic manipulation of a single bit
in an array of atomic variables.
Implementation
**************
Defining an Atomic Variable
===========================
An atomic variable is defined using a variable of type :c:type:`atomic_t`.
By default an atomic variable is initialized to zero. However, it can be given
a different value using :c:macro:`ATOMIC_INIT`:
.. code-block:: c
atomic_t flags = ATOMIC_INIT(0xFF);
Manipulating an Atomic Variable
===============================
An atomic variable is manipulated using the APIs listed at the end of
this section.
The following code shows how an atomic variable can be used to keep track
of the number of times a function has been invoked. Since the count is
incremented atomically, there is no risk that it will become corrupted
in mid-increment if a thread calling the function is interrupted
by a higher priority context that also calls the routine.
.. code-block:: c
atomic_t call_count;
int call_counting_routine(void)
{
/* increment invocation counter */
atomic_inc(&call_count);
/* do rest of routine's processing */
...
}
Manipulating an Array of Atomic Variables
=========================================
An array of 32-bit atomic variables can be defined in the conventional manner.
However, you can also define an N-bit array of atomic variables using
:c:macro:`ATOMIC_DEFINE`.
A single bit in an array of atomic variables can be manipulated using
the APIs listed at the end of this section that end with :c:func:`_bit`.
The following code shows how a set of 200 flag bits can be implemented
using an array of atomic variables.
.. code-block:: c
#define NUM_FLAG_BITS 200
ATOMIC_DEFINE(flag_bits, NUM_FLAG_BITS);
/* set specified flag bit & return its previous value */
int set_flag_bit(int bit_position)
{
return (int)atomic_set_bit(flag_bits, bit_position);
}
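Zephyr also provides :c:func:`atomic_cas` for atomic compare-and-swap. The sketch below approximates it with C11 atomics to show a typical use, claiming a one-shot resource exactly once:

```c
#include <stdatomic.h>
#include <stdbool.h>

typedef atomic_int atomic_t;   /* stand-in for Zephyr's atomic_t */

/* Approximation of Zephyr's atomic_cas(): atomically replace *target
 * with new_value only if it still equals old_value. */
static bool atomic_cas(atomic_t *target, int old_value, int new_value)
{
	return atomic_compare_exchange_strong(target, &old_value, new_value);
}

/* 0 means "unclaimed"; only the first caller can claim the resource. */
static atomic_t resource_owner = 0;

bool claim_resource(int thread_id)
{
	return atomic_cas(&resource_owner, 0, thread_id);
}
```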
Memory Ordering
===============
For consistency and correctness, all Zephyr atomic APIs are expected
to include a full memory barrier (in the sense of e.g. "serializing"
instructions on x86, "DMB" on ARM, or a "sequentially consistent"
operation as defined by the C++ memory model) where needed by hardware
to guarantee a reliable picture across contexts. Any
architecture-specific implementations are responsible for ensuring
this behavior.
Suggested Uses
**************
Use an atomic variable to implement critical section processing that only
requires the manipulation of a single 32-bit value.
Use multiple atomic variables to implement critical section processing
on a set of flag bits in a bit array longer than 32 bits.
.. note::
Using atomic variables is typically far more efficient than using
other techniques to implement critical sections such as using a mutex
or locking interrupts.
Configuration Options
*********************
Related configuration options:
* :kconfig:option:`CONFIG_ATOMIC_OPERATIONS_BUILTIN`
* :kconfig:option:`CONFIG_ATOMIC_OPERATIONS_ARCH`
* :kconfig:option:`CONFIG_ATOMIC_OPERATIONS_C`
API Reference
*************
.. important::
All atomic services APIs can be used by both threads and ISRs.
.. doxygengroup:: atomic_apis
``` | /content/code_sandbox/doc/kernel/services/other/atomic.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 812 |
```restructuredtext
.. _version:
Version
#######
Kernel version handling and APIs for retrieving the kernel version in use.
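For example, an application can report the running kernel version at runtime. The definitions below are stand-ins that mirror a packed major/minor/patch layout for illustration; a real application would include the kernel version header and call :c:func:`sys_kernel_version_get` directly:

```c
#include <stdint.h>
#include <stdio.h>

/* Stand-in macros assuming a packed major/minor/patch layout,
 * for illustration only. */
#define SYS_KERNEL_VER_MAJOR(ver)      (((ver) >> 16) & 0xFF)
#define SYS_KERNEL_VER_MINOR(ver)      (((ver) >> 8) & 0xFF)
#define SYS_KERNEL_VER_PATCHLEVEL(ver) ((ver) & 0xFF)

/* Stub returning a pretend kernel version of 3.5.0. */
uint32_t sys_kernel_version_get(void)
{
	return (3u << 16) | (5u << 8) | 0u;
}

void print_kernel_version(void)
{
	uint32_t ver = sys_kernel_version_get();

	printf("Zephyr kernel %u.%u.%u\n",
	       (unsigned)SYS_KERNEL_VER_MAJOR(ver),
	       (unsigned)SYS_KERNEL_VER_MINOR(ver),
	       (unsigned)SYS_KERNEL_VER_PATCHLEVEL(ver));
}
```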
API Reference
**************
.. doxygengroup:: version_apis
``` | /content/code_sandbox/doc/kernel/services/other/version.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 37 |
```restructuredtext
.. _float_v2:
Floating Point Services
#######################
The kernel allows threads to use floating point registers on board
configurations that support these registers.
.. note::
Floating point services are currently available only for boards
based on ARM Cortex-M SoCs supporting the Floating Point Extension,
the Intel x86 architecture, the SPARC architecture and ARCv2 SoCs
supporting the Floating Point Extension. The services provided
are architecture specific.
The kernel does not support the use of floating point registers by ISRs.
.. contents::
:local:
:depth: 2
Concepts
********
The kernel can be configured to provide only the floating point services
required by an application. Three modes of operation are supported,
which are described below. In addition, the kernel's support for the SSE
registers can be included or omitted, as desired.
No FP registers mode
====================
This mode is used when the application has no threads that use floating point
registers. It is the kernel's default floating point services mode.
If a thread uses any floating point register,
the kernel generates a fatal error condition and aborts the thread.
Unshared FP registers mode
==========================
This mode is used when the application has only a single thread
that uses floating point registers.
On x86 platforms, the kernel initializes the floating point registers so they can
be used by any thread (initialization is skipped on ARM Cortex-M platforms and
ARCv2 platforms). The floating point registers are left unchanged whenever a
context switch occurs.
.. note::
The behavior is undefined if two or more threads attempt to use
the floating point registers, as the kernel does not attempt to detect
(or prevent) multiple threads from using these registers.
Shared FP registers mode
========================
This mode is used when the application has two or more threads that use
floating point registers. Depending upon the underlying CPU architecture,
the kernel supports one or more of the following thread sub-classes:
* non-user: A thread that cannot use any floating point registers
* FPU user: A thread that can use the standard floating point registers
* SSE user: A thread that can use both the standard floating point registers
and SSE registers
The kernel initializes and enables access to the floating point registers,
so they can be used
by any thread, then saves and restores these registers during
context switches to ensure the computations performed by each FPU user
or SSE user are not impacted by the computations performed by the other users.
ARM Cortex-M architecture (with the Floating Point Extension)
-------------------------------------------------------------
.. note::
The Shared FP registers mode is the default Floating Point
Services mode in ARM Cortex-M.
On the ARM Cortex-M architecture with the Floating Point Extension, the kernel
treats *all* threads as FPU users when shared FP registers mode is enabled.
This means that any thread is allowed to access the floating point registers.
The ARM kernel automatically detects that a given thread is using the floating
point registers the first time the thread accesses them.
Pretag a thread that intends to use the FP registers by
using one of the techniques listed below.
* A statically-created ARM thread can be pretagged by passing the
:c:macro:`K_FP_REGS` option to :c:macro:`K_THREAD_DEFINE`.
* A dynamically-created ARM thread can be pretagged by passing the
:c:macro:`K_FP_REGS` option to :c:func:`k_thread_create`.
Pretagging a thread with the :c:macro:`K_FP_REGS` option instructs the
MPU-based stack protection mechanism to properly configure the size of
the thread's guard region to always guarantee stack overflow detection,
and enable lazy stacking for the given thread upon thread creation.
During thread context switching the ARM kernel saves the *callee-saved*
floating point registers, if the switched-out thread has been using them.
Additionally, the *caller-saved* floating point registers are saved on
the thread's stack. If the switched-in thread has been using the floating
point registers, the kernel restores the *callee-saved* FP registers of
the switched-in thread and the *caller-saved* FP context is restored from
the thread's stack. Thus, the kernel does not save or restore the FP
context of threads that are not using the FP registers.
Each thread that intends to use the floating point registers must provide
an extra 72 bytes of stack space where the callee-saved FP context can
be saved.
`Lazy Stacking
<path_to_url`_
is currently enabled in Zephyr applications on ARM Cortex-M
architecture, minimizing interrupt latency, when the floating
point context is active.
When the MPU-based stack protection mechanism is not enabled, lazy stacking
is always active in the Zephyr application. When the MPU-based stack protection
is enabled, the following rules apply with respect to lazy stacking:
* Lazy stacking is activated by default on threads that are pretagged with
:c:macro:`K_FP_REGS`
* Lazy stacking is activated dynamically on threads that are not pretagged with
:c:macro:`K_FP_REGS`, as soon as the kernel detects that they are using the
floating point registers.
If an ARM thread does not require use of the floating point registers any
more, it can call :c:func:`k_float_disable`. This instructs the kernel
not to save or restore its FP context during thread context switching.
ARM64 architecture
------------------
.. note::
The Shared FP registers mode is the default Floating Point
Services mode on ARM64. The compiler is free to optimize code
using FP/SIMD registers, and library functions such as memcpy
are known to make use of them.
On the ARM64 (Aarch64) architecture the kernel treats each thread as a FPU
user on a case-by-case basis. A "lazy save" algorithm is used during context
switching which updates the floating point registers only when it is absolutely
necessary. For example, the registers are *not* saved when switching from an
FPU user to a non-user thread, and then back to the original FPU user.
FPU register usage by ISRs is supported although not recommended. When an
ISR uses floating point or SIMD registers, then the access is trapped, the
current FPU user context is saved in the thread object and the ISR is resumed
with interrupts disabled so as to prevent another IRQ from interrupting the ISR
and potentially requesting FPU usage. Because ISRs don't have a persistent
register context, there is no provision for saving an ISR's FPU context
either, hence the IRQ disabling.
Each thread object becomes 512 bytes larger when Shared FP registers mode
is enabled.
ARCv2 architecture
------------------
On the ARCv2 architecture, the kernel treats each thread as a non-user
or FPU user and the thread must be tagged by one of the
following techniques.
* A statically-created ARC thread can be tagged by passing the
:c:macro:`K_FP_REGS` option to :c:macro:`K_THREAD_DEFINE`.
* A dynamically-created ARC thread can be tagged by passing the
:c:macro:`K_FP_REGS` to :c:func:`k_thread_create`.
If an ARC thread does not require use of the floating point registers any
more, it can call :c:func:`k_float_disable`. This instructs the kernel
not to save or restore its FP context during thread context switching.
During thread context switching the ARC kernel saves the *callee-saved*
floating point registers, if the switched-out thread has been using them.
Additionally, the *caller-saved* floating point registers are saved on
the thread's stack. If the switched-in thread has been using the floating
point registers, the kernel restores the *callee-saved* FP registers of
the switched-in thread and the *caller-saved* FP context is restored from
the thread's stack. Thus, the kernel does not save or restore the FP
context of threads that are not using the FP registers. An extra 16 bytes
(single floating point hardware) or 32 bytes (double floating point hardware)
of stack space is required to load and store floating point registers.
RISC-V architecture
-------------------
On the RISC-V architecture the kernel treats each thread as an FPU
user on a case-by-case basis with the FPU access allocated on demand.
A "lazy save" algorithm is used during context switching which updates
the floating point registers only when it is absolutely necessary.
For example, the FPU registers are *not* saved when switching from an
FPU user to a non-user thread (or an FPU user that doesn't touch the FPU
during its scheduling slot), and then back to the original FPU user.
FPU register usage by ISRs is supported although not recommended. When an
ISR uses floating point or SIMD registers, then the access is trapped, the
current FPU user context is saved in the thread object and the ISR is resumed
with interrupts disabled so as to prevent another IRQ from interrupting the ISR
and potentially requesting FPU usage. Because ISRs don't have a persistent
register context, there is no provision for saving an ISR's FPU context
either, hence the IRQ disabling.
As an optimization, the FPU context is preemptively restored upon scheduling
back an "active FPU user" thread that had its FPU context saved away due to
FPU usage by another thread. Active FPU users are so designated when they
make the FPU state "dirty" during their most recent scheduling slot before
being scheduled out. So if a thread doesn't modify the FPU state within its
scheduling slot and another thread claims the FPU for itself afterwards then
that first thread will be subjected to the on-demand regime and won't have
its FPU context restored until it attempts to access it again. But if that
thread does modify the FPU before being scheduled out then it is likely to
continue using it when scheduled back in and preemptively restoring its FPU
context saves on the exception trap overhead that would occur otherwise.
Each thread object becomes 136 bytes (single-precision floating point
hardware) or 264 bytes (double-precision floating point hardware) larger
when Shared FP registers mode is enabled.
SPARC architecture
------------------
On the SPARC architecture, the kernel treats each thread as a non-user
or FPU user and the thread must be tagged by one of the
following techniques:
* A statically-created thread can be tagged by passing the
:c:macro:`K_FP_REGS` option to :c:macro:`K_THREAD_DEFINE`.
* A dynamically-created thread can be tagged by passing the
:c:macro:`K_FP_REGS` to :c:func:`k_thread_create`.
During thread context switch at exit from interrupt handler, the SPARC
kernel saves *all* floating point registers, if the FPU was enabled in
the switched-out thread. Floating point registers are saved on the thread's
stack. Floating point registers are restored when a thread context is restored
if and only if they were saved at the context save. Saving and restoring of the floating
point registers is synchronous and thus not lazy. The FPU is always disabled
when an ISR is called (independent of :kconfig:option:`CONFIG_FPU_SHARING`).
Floating point disabling with :c:func:`k_float_disable` is not implemented.
When :kconfig:option:`CONFIG_FPU_SHARING` is used, then 136 bytes of stack space
is required for each FPU user thread to load and store floating point
registers. No extra stack is required if :kconfig:option:`CONFIG_FPU_SHARING` is
not used.
x86 architecture
----------------
On the x86 architecture the kernel treats each thread as a non-user,
FPU user or SSE user on a case-by-case basis. A "lazy save" algorithm is used
during context switching which updates the floating point registers only when
it is absolutely necessary. For example, the registers are *not* saved when
switching from an FPU user to a non-user thread, and then back to the original
FPU user. The following table indicates the amount of additional stack space a
thread must provide so the registers can be saved properly.
=========== =============== ==========================
Thread type FP register use Extra stack space required
=========== =============== ==========================
cooperative any 0 bytes
preemptive none 0 bytes
preemptive FPU 108 bytes
preemptive SSE 464 bytes
=========== =============== ==========================
The x86 kernel automatically detects that a given thread is using
the floating point registers the first time the thread accesses them.
The thread is tagged as an SSE user if the kernel has been configured
to support the SSE registers, or as an FPU user if the SSE registers are
not supported. If this would result in a thread that is an FPU user being
tagged as an SSE user, or if the application wants to avoid the exception
handling overhead involved in auto-tagging threads, it is possible to
pretag a thread using one of the techniques listed below.
* A statically-created x86 thread can be pretagged by passing the
:c:macro:`K_FP_REGS` or :c:macro:`K_SSE_REGS` option to
:c:macro:`K_THREAD_DEFINE`.
* A dynamically-created x86 thread can be pretagged by passing the
:c:macro:`K_FP_REGS` or :c:macro:`K_SSE_REGS` option to
:c:func:`k_thread_create`.
* An already-created x86 thread can pretag itself once it has started
by passing the :c:macro:`K_FP_REGS` or :c:macro:`K_SSE_REGS` option to
:c:func:`k_float_enable`.
If an x86 thread uses the floating point registers infrequently it can call
:c:func:`k_float_disable` to remove its tagging as an FPU user or SSE user.
This eliminates the need for the kernel to take steps to preserve
the contents of the floating point registers during context switches
when there is no need to do so.
When the thread again needs to use the floating point registers it can re-tag
itself as an FPU user or SSE user by calling :c:func:`k_float_enable`.
Implementation
**************
Performing Floating Point Arithmetic
====================================
No special coding is required for a thread to use floating point arithmetic
if the kernel is properly configured.
The following code shows how a routine can use floating point arithmetic
to avoid overflow issues when computing the average of a series of integer
values.
.. code-block:: c
int average(int *values, int num_values)
{
double sum;
int i;
sum = 0.0;
for (i = 0; i < num_values; i++) {
sum += *values;
values++;
}
return (int)((sum / num_values) + 0.5);
}
Suggested Uses
**************
Use the kernel floating point services when an application needs to
perform floating point operations.
Configuration Options
*********************
To configure unshared FP registers mode, enable the :kconfig:option:`CONFIG_FPU`
configuration option and leave the :kconfig:option:`CONFIG_FPU_SHARING` configuration
option disabled.
To configure shared FP registers mode, enable both the :kconfig:option:`CONFIG_FPU`
configuration option and the :kconfig:option:`CONFIG_FPU_SHARING` configuration option.
Also, ensure that any thread that uses the floating point registers has
sufficient added stack space for saving floating point register values
during context switches, as described above.
For x86, use the :kconfig:option:`CONFIG_X86_SSE` configuration option to enable
support for SSEx instructions.
API Reference
*************
.. doxygengroup:: float_apis
``` | /content/code_sandbox/doc/kernel/services/other/float.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 3,335 |
```restructuredtext
.. _timers_v2:
Timers
######
A :dfn:`timer` is a kernel object that measures the passage of time
using the kernel's system clock. When a timer's specified time limit
is reached it can perform an application-defined action,
or it can simply record the expiration and wait for the application
to read its status.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of timers can be defined (limited only by available RAM). Each timer
is referenced by its memory address.
A timer has the following key properties:
* A **duration** specifying the time interval before the timer
expires for the first time. This is a :c:type:`k_timeout_t` value that
may be initialized via different units.
* A **period** specifying the time interval between all timer
expirations after the first one, also a :c:type:`k_timeout_t`. It must be
non-negative. A period of ``K_NO_WAIT`` (i.e. zero) or
``K_FOREVER`` means that the timer is a one-shot timer that stops
after a single expiration. (For example, if a timer is started
with a duration of 200 ms and a period of 75 ms, it will first expire
after 200 ms and then every 75 ms after that.)
* An **expiry function** that is executed each time the timer expires.
The function is executed by the system clock interrupt handler.
If no expiry function is required a ``NULL`` function can be specified.
* A **stop function** that is executed if the timer is stopped prematurely
while running. The function is executed by the thread that stops the timer.
If no stop function is required a ``NULL`` function can be specified.
* A **status** value that indicates how many times the timer has expired
since the status value was last read.
A timer must be initialized before it can be used. This specifies its
expiry function and stop function values, sets the timer's status to zero,
and puts the timer into the **stopped** state.
A timer is **started** by specifying a duration and a period.
The timer's status is reset to zero, and then the timer enters
the **running** state and begins counting down towards expiry.
Note that the timer's duration and period parameters specify
**minimum** delays that will elapse. Because of internal system timer
precision (and potentially runtime interactions like interrupt delay)
it is possible that more time may have passed as measured by reads
from the relevant system time APIs. But at least this much time is
guaranteed to have elapsed.
When a running timer expires, its status is incremented
and the timer executes its expiry function, if one exists.
If a thread is waiting on the timer, it is unblocked.
If the timer's period is zero the timer enters the stopped state;
otherwise, the timer restarts with a new duration equal to its period.
A running timer can be stopped in mid-countdown, if desired.
The timer's status is left unchanged, then the timer enters the stopped state
and executes its stop function, if one exists.
If a thread is waiting on the timer, it is unblocked.
Attempting to stop a non-running timer is permitted,
but has no effect on the timer since it is already stopped.
A running timer can be restarted in mid-countdown, if desired.
The timer's status is reset to zero, then the timer begins counting down
using the new duration and period values specified by the caller.
If a thread is waiting on the timer, it continues waiting.
A timer's status can be read directly at any time to determine how many times
the timer has expired since its status was last read.
Reading a timer's status resets its value to zero.
The amount of time remaining before the timer expires can also be read;
a value of zero indicates that the timer is stopped.
A thread may read a timer's status indirectly by **synchronizing**
with the timer. This blocks the thread until the timer's status is non-zero
(indicating that it has expired at least once) or the timer is stopped;
if the timer status is already non-zero or the timer is already stopped
the thread continues without waiting. The synchronization operation
returns the timer's status and resets it to zero.
.. note::
Only a single user should examine the status of any given timer,
since reading the status (directly or indirectly) changes its value.
Similarly, only a single thread at a time should synchronize
with a given timer. ISRs are not permitted to synchronize with timers,
since ISRs are not allowed to block.
Implementation
**************
Defining a Timer
================
A timer is defined using a variable of type :c:struct:`k_timer`.
It must then be initialized by calling :c:func:`k_timer_init`.
The following code defines and initializes a timer.
.. code-block:: c
struct k_timer my_timer;
extern void my_expiry_function(struct k_timer *timer_id);
k_timer_init(&my_timer, my_expiry_function, NULL);
Alternatively, a timer can be defined and initialized at compile time
by calling :c:macro:`K_TIMER_DEFINE`.
The following code has the same effect as the code segment above.
.. code-block:: c
K_TIMER_DEFINE(my_timer, my_expiry_function, NULL);
Using a Timer Expiry Function
=============================
The following code uses a timer to perform a non-trivial action on a periodic
basis. Since the required work cannot be done at the interrupt level,
the timer's expiry function submits a work item to the
:ref:`system workqueue <workqueues_v2>`, whose thread performs the work.
.. code-block:: c
void my_work_handler(struct k_work *work)
{
/* do the processing that needs to be done periodically */
...
}
K_WORK_DEFINE(my_work, my_work_handler);
void my_timer_handler(struct k_timer *dummy)
{
k_work_submit(&my_work);
}
K_TIMER_DEFINE(my_timer, my_timer_handler, NULL);
...
/* start a periodic timer that expires once every second */
k_timer_start(&my_timer, K_SECONDS(1), K_SECONDS(1));
Reading Timer Status
====================
The following code reads a timer's status directly to determine
if the timer has expired or not.
.. code-block:: c
K_TIMER_DEFINE(my_status_timer, NULL, NULL);
...
/* start a one-shot timer that expires after 200 ms */
k_timer_start(&my_status_timer, K_MSEC(200), K_NO_WAIT);
/* do work */
...
/* check timer status */
if (k_timer_status_get(&my_status_timer) > 0) {
/* timer has expired */
} else if (k_timer_remaining_get(&my_status_timer) == 0) {
/* timer was stopped (by someone else) before expiring */
} else {
/* timer is still running */
}
Using Timer Status Synchronization
==================================
The following code performs timer status synchronization to allow a thread
to do useful work while ensuring that a pair of protocol operations
are separated by the specified time interval.
.. code-block:: c
K_TIMER_DEFINE(my_sync_timer, NULL, NULL);
...
/* do first protocol operation */
...
/* start a one-shot timer that expires after 500 ms */
k_timer_start(&my_sync_timer, K_MSEC(500), K_NO_WAIT);
/* do other work */
...
/* ensure timer has expired (waiting for expiry, if necessary) */
k_timer_status_sync(&my_sync_timer);
/* do second protocol operation */
...
.. note::
If the thread had no other work to do it could simply sleep
between the two protocol operations, without using a timer.
Suggested Uses
**************
Use a timer to initiate an asynchronous operation after a specified
amount of time.
Use a timer to determine whether or not a specified amount of time has
elapsed. In particular, timers should be used when higher precision
and/or unit control is required than that afforded by the simpler
:c:func:`k_sleep` and :c:func:`k_usleep` calls.
Use a timer to perform other work while carrying out operations
involving time limits.
.. note::
If a thread needs to measure the time required to perform an operation
it can read the :ref:`system clock or the hardware clock <kernel_timing>`
directly, rather than using a timer.
Configuration Options
*********************
Related configuration options:
* None
API Reference
*************
.. doxygengroup:: timer_apis
``` | /content/code_sandbox/doc/kernel/services/timing/timers.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 1,829 |
```restructuredtext
.. _kernel_timing:
Kernel Timing
#############
Zephyr provides a robust and scalable timing framework to enable
reporting and tracking of timed events from hardware timing sources of
arbitrary precision.
Time Units
==========
Kernel time is tracked in several units which are used for different
purposes.
Real time values, typically specified in milliseconds or microseconds,
are the default presentation of time to application code. They have
the advantages of being universally portable and pervasively
understood, though they may not match the precision of the underlying
hardware perfectly.
The kernel presents a "cycle" count via the :c:func:`k_cycle_get_32`
and :c:func:`k_cycle_get_64` APIs. The intent is that this counter
represents the fastest cycle counter that the operating system is able
to present to the user (for example, a CPU cycle counter) and that the
read operation is very fast. The expectation is that very sensitive
application code might use this in a polling manner to achieve maximal
precision. The frequency of this counter is required to be steady
over time, and is available from
:c:func:`sys_clock_hw_cycles_per_sec` (which on almost all
platforms is a runtime constant that evaluates to
:kconfig:option:`CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC`).
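When polling a 32-bit cycle counter, elapsed time is normally computed with
unsigned subtraction, which stays correct across a single counter wraparound.
A minimal illustration of the arithmetic (plain C, independent of the kernel
APIs):

.. code-block:: c

   #include <stdint.h>

   /* Delta between two reads of a free-running 32-bit cycle counter.
    * Unsigned modulo-2^32 subtraction gives the right answer even if
    * the counter wrapped once between the two reads.
    */
   uint32_t cycle_delta(uint32_t start, uint32_t end)
   {
           return end - start;
   }

In application code, ``start`` and ``end`` would come from two calls to
:c:func:`k_cycle_get_32`, and the delta could then be converted to real time
units with a routine such as :c:func:`k_cyc_to_us_floor32`.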
For asynchronous timekeeping, the kernel defines a "ticks" concept. A
"tick" is the internal count in which the kernel does all its internal
uptime and timeout bookkeeping. Interrupts are expected to be
delivered on tick boundaries to the extent practical, and no
fractional ticks are tracked. The choice of tick rate is configurable
via :kconfig:option:`CONFIG_SYS_CLOCK_TICKS_PER_SEC`. Defaults on most
hardware platforms (ones that support setting arbitrary interrupt
timeouts) are expected to be in the range of 10 kHz, with software
emulation platforms and legacy drivers using a more traditional 100 Hz
value.
Conversion
----------
Zephyr provides an extensively enumerated conversion library with
rounding control for all time units. Any unit of "ms" (milliseconds),
"us" (microseconds), "tick", or "cyc" can be converted to any other.
Control of rounding is provided, and each conversion is available in
"floor" (round down to nearest output unit), "ceil" (round up) and
"near" (round to nearest). Finally the output precision can be
specified as either 32 or 64 bits.
For example: :c:func:`k_ms_to_ticks_ceil32` will convert a
millisecond input value to the next higher number of ticks, returning
a result truncated to 32 bits of precision; and
:c:func:`k_cyc_to_us_floor64` will convert a measured cycle count
to an elapsed number of microseconds in a full 64 bits of precision.
See the reference documentation for the full enumeration of conversion
routines.
On most platforms, where the various counter rates are integral
multiples of each other and where the output fits within a single
word, these conversions expand to a 2-4 operation sequence, requiring
full precision only where actually required and requested.
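As a sketch of what the three rounding modes mean, here is the underlying
arithmetic for a millisecond-to-tick conversion at a hypothetical 100 Hz tick
rate (the real converters are generated from
:kconfig:option:`CONFIG_SYS_CLOCK_TICKS_PER_SEC` and are heavily optimized):

.. code-block:: c

   #include <stdint.h>

   #define TICKS_PER_SEC 100u /* hypothetical tick rate: 1 tick = 10 ms */

   /* "floor": round down, as k_ms_to_ticks_floor32() does */
   uint32_t ms_to_ticks_floor(uint32_t ms)
   {
           return (uint32_t)(((uint64_t)ms * TICKS_PER_SEC) / 1000u);
   }

   /* "ceil": round up to the next whole tick */
   uint32_t ms_to_ticks_ceil(uint32_t ms)
   {
           return (uint32_t)(((uint64_t)ms * TICKS_PER_SEC + 999u) / 1000u);
   }

   /* "near": round to the nearest tick */
   uint32_t ms_to_ticks_near(uint32_t ms)
   {
           return (uint32_t)(((uint64_t)ms * TICKS_PER_SEC + 500u) / 1000u);
   }

With a 3 ms input at this rate, floor yields 0 ticks, ceil yields 1 and near
yields 0; when the tick rate divides the input unit evenly, all three modes
agree.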
.. _kernel_timing_uptime:
Uptime
======
The kernel tracks a system uptime count on behalf of the application.
This is available at all times via :c:func:`k_uptime_get`, which
provides an uptime value in milliseconds since system boot. This is
expected to be the utility used by most portable application code.
The internal tracking, however, is as a 64 bit integer count of ticks.
Apps with precise timing requirements (that are willing to do their
own conversions to portable real time units) may access this with
:c:func:`k_uptime_ticks`.
Timeouts
========
The Zephyr kernel provides many APIs with a "timeout" parameter.
Conceptually, this indicates the time at which an event will occur.
For example:
* Kernel blocking operations like :c:func:`k_sem_take` or
:c:func:`k_queue_get` may provide a timeout after which the
routine will return with an error code if no data is available.
* Kernel :c:struct:`k_timer` objects must specify delays for
their duration and period.
* The kernel :c:struct:`k_work_delayable` API provides a timeout parameter
indicating when a work queue item will be added to the system queue.
All these values are specified using a :c:type:`k_timeout_t` value. This is
an opaque struct type that must be initialized using one of a family
of kernel timeout macros. The most common, :c:macro:`K_MSEC`, defines
a time in milliseconds after the current time.
What is meant by "current time" for relative timeouts depends on the context:
* When scheduling a relative timeout from within a timeout callback (e.g. from
within the expiry function passed to :c:func:`k_timer_init` or the work handler
passed to :c:func:`k_work_init_delayable`), "current time" is the exact time at
which the currently firing timeout was originally scheduled even if the "real
time" will already have advanced. This is to ensure that timers scheduled from
within another timer's callback will always be calculated with a precise offset
to the firing timer. It is thereby possible to fire at regular intervals without
introducing systematic clock drift over time.
* When scheduling a timeout from application context, "current time" means the
value returned by :c:func:`k_uptime_ticks` at the time at which the kernel
receives the timeout value.
Other options for timeout initialization follow the unit conventions
described above: :c:macro:`K_NSEC`, :c:macro:`K_USEC`, :c:macro:`K_TICKS` and
:c:macro:`K_CYC` specify timeout values that will expire after specified
numbers of nanoseconds, microseconds, ticks and cycles, respectively.
Precision of :c:type:`k_timeout_t` values is configurable, with the default
being 32 bits. Large uptime counts in non-tick units will experience
complicated rollover semantics, so it is expected that
timing-sensitive applications with long uptimes will be configured to
use a 64 bit timeout type.
Finally, it is possible to specify timeouts as absolute times since
system boot. A timeout initialized with :c:macro:`K_TIMEOUT_ABS_MS`
indicates a timeout that will expire after the system uptime reaches
the specified value. There are likewise nanosecond, microsecond,
cycles and ticks variants of this API.
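Relative and absolute timeouts differ only in which reference the expiry is
computed against. A simplified model of that distinction (the type, field
names and encoding below are purely illustrative; the real :c:type:`k_timeout_t`
layout is internal to the kernel):

.. code-block:: c

   #include <stdbool.h>
   #include <stdint.h>

   /* Illustrative-only timeout representation. */
   typedef struct {
           int64_t ticks;
           bool absolute;
   } my_timeout_t;

   /* Resolve a timeout to the uptime tick at which it expires.
    * A relative timeout is anchored at "now"; an absolute one is not.
    */
   int64_t expiry_tick(int64_t now_ticks, my_timeout_t t)
   {
           return t.absolute ? t.ticks : now_ticks + t.ticks;
   }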
Timing Internals
================
Timeout Queue
-------------
All Zephyr :c:type:`k_timeout_t` events specified using the API above are
managed in a single, global queue of events. Each event is stored in
a double-linked list, with an attendant delta count in ticks from the
previous event. The action to take on an event is specified as a
callback function pointer provided by the subsystem requesting the
event, along with a :c:struct:`_timeout` tracking struct that is
expected to be embedded within subsystem-defined data structures (for
example: a :c:struct:`wait_q` struct, or a :c:type:`k_tid_t` thread struct).
Note that all variant units passed via a :c:type:`k_timeout_t` are converted
to ticks once on insertion into the list. There are no
multiple-conversion steps internal to the kernel, so precision is
guaranteed at the tick level no matter how many events exist or how
long a timeout might be.
Note that the list structure means that the CPU work involved in
managing large numbers of timeouts is quadratic in the number of
active timeouts. The API design of the timeout queue was intended to
permit a more scalable backend data structure, but no such
implementation exists currently.
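The delta-count idea can be sketched as follows: each node stores its expiry
relative to the node before it, so expiring or inspecting the head needs no
traversal, and insertion is a single walk that consumes deltas. (This is an
illustration of the technique only, not the kernel's actual
:c:struct:`_timeout` code.)

.. code-block:: c

   #include <stddef.h>
   #include <stdint.h>

   /* A timeout node holding ticks relative to the previous node. */
   struct timeout {
           struct timeout *next;
           int32_t delta_ticks;
   };

   /* Insert a node expiring ticks_from_now ticks in the future. */
   void timeout_insert(struct timeout **head, struct timeout *t,
                       int32_t ticks_from_now)
   {
           struct timeout **pp = head;

           /* Walk the list, consuming each predecessor's delta. */
           while (*pp != NULL && ticks_from_now >= (*pp)->delta_ticks) {
                   ticks_from_now -= (*pp)->delta_ticks;
                   pp = &(*pp)->next;
           }

           t->delta_ticks = ticks_from_now;
           t->next = *pp;
           *pp = t;

           /* The displaced successor now fires relative to the new node. */
           if (t->next != NULL) {
                   t->next->delta_ticks -= ticks_from_now;
           }
   }

Inserting events at 30, 10 and 20 ticks from now yields a list whose deltas
are 10, 10 and 10: each node only ever records its distance from its
predecessor.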
Timer Drivers
-------------
Kernel timing at the tick level is driven by a timer driver with a
comparatively simple API.
* The driver is expected to be able to "announce" new ticks to the
kernel via the :c:func:`sys_clock_announce` call, which passes an integer
number of ticks that have elapsed since the last announce call (or
system boot). These calls can occur at any time, but the driver is
expected to attempt to ensure (to the extent practical given
interrupt latency interactions) that they occur near tick boundaries
(i.e. not "halfway through" a tick), and most importantly that they
be correct over time and subject to minimal skew vs. other counters
and real world time.
* The driver is expected to provide a :c:func:`sys_clock_set_timeout` call
to the kernel which indicates how many ticks may elapse before the
kernel must receive an announce call to trigger registered timeouts.
It is legal to announce new ticks before that moment (though they
must be correct) but delay after that will cause events to be
missed. Note that the timeout value passed here is in a delta from
current time, but that does not absolve the driver of the
requirement to provide ticks at a steady rate over time. Naive
implementations of this function are subject to bugs where the
fractional tick gets "reset" incorrectly and causes clock skew.
* The driver is expected to provide a :c:func:`sys_clock_elapsed` call which
provides a current indication of how many ticks have elapsed (as
compared to a real world clock) since the last call to
:c:func:`sys_clock_announce`, which the kernel needs to test newly
arriving timeouts for expiration.
Note that a natural implementation of this API results in a "tickless"
kernel, which receives and processes timer interrupts only for
registered events, relying on programmable hardware counters to
provide irregular interrupts. But a traditional, "ticked" or "dumb"
counter driver can also be implemented trivially:
* The driver can receive interrupts at a regular rate corresponding to
the OS tick rate, calling :c:func:`sys_clock_announce` with an argument of one
each time.
* The driver can ignore calls to :c:func:`sys_clock_set_timeout`, as every
tick will be announced regardless of timeout status.
* The driver can return zero for every call to :c:func:`sys_clock_elapsed`
as no more than one tick can be detected as having elapsed (because
otherwise an interrupt would have been received).
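A sketch of such a "dumb" driver's behavior, with stub names standing in for
the real driver entry points (the ``_stub`` suffixes mark these as
illustrations, not the actual driver API implementations):

.. code-block:: c

   #include <stdbool.h>
   #include <stdint.h>

   /* Tick count announced to the (stubbed) kernel so far. */
   int64_t announced_ticks;

   void sys_clock_announce_stub(int32_t ticks)
   {
           announced_ticks += ticks;
   }

   /* Periodic timer ISR: exactly one tick per interrupt. */
   void dumb_timer_isr(void)
   {
           sys_clock_announce_stub(1);
   }

   /* Timeout requests are ignored: every tick is announced anyway. */
   void sys_clock_set_timeout_stub(int32_t ticks, bool idle)
   {
           (void)ticks;
           (void)idle;
   }

   /* At most one tick can have passed without an interrupt having
    * fired, so the elapsed count between announcements is always zero.
    */
   int32_t sys_clock_elapsed_stub(void)
   {
           return 0;
   }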
SMP Details
-----------
In general, the timer API described above does not change when run in
a multiprocessor context. The kernel will internally synchronize all
access appropriately, and ensure that all critical sections are small
and minimal. But some notes are important to detail:
* Zephyr is agnostic about which CPU services timer interrupts. It is
not illegal (though probably undesirable in some circumstances) to
have every timer interrupt handled on a single processor. Existing
SMP architectures implement symmetric timer drivers.
* The :c:func:`sys_clock_announce` call is expected to be globally
synchronized at the driver level. The kernel does not do any
per-CPU tracking, and expects that if two timer interrupts fire near
simultaneously, that only one will provide the current tick count to
the timing subsystem. The other may legally provide a tick count of
zero if no ticks have elapsed. It should not "skip" the announce
call because of timeslicing requirements (see below).
* Some SMP hardware uses a single, global timer device, others use a
per-CPU counter. The complexity here (for example: ensuring counter
synchronization between CPUs) is expected to be managed by the
driver, not the kernel.
* The next timeout value passed back to the driver via
:c:func:`sys_clock_set_timeout` is done identically for every CPU.
So by default, every CPU will see simultaneous timer interrupts for
every event, even though by definition only one of them should see a
non-zero ticks argument to :c:func:`sys_clock_announce`. This is probably
a correct default for timing sensitive applications (because it
minimizes the chance that an errant ISR or interrupt lock will delay
a timeout), but may be a performance problem in some cases. The
current design expects that any such optimization is the
responsibility of the timer driver.
Time Slicing
------------
An auxiliary job of the timing subsystem is to provide tick counters
to the scheduler that allow implementation of time slicing of threads.
A thread time-slice cannot be a timeout value, as it does not reflect
a global expiration but instead a per-CPU value that needs to be
tracked independently on each CPU in an SMP context.
Because there may be no other hardware available to drive timeslicing,
Zephyr multiplexes the existing timer driver. This means that the
value passed to :c:func:`sys_clock_set_timeout` may be clamped to a
smaller value than the current next timeout when a time sliced thread
is currently scheduled.
Subsystems that keep millisecond APIs
-------------------------------------
In general, code like this will port just like applications code will.
Millisecond values from the user may be treated any way the subsystem
likes, and then converted into kernel timeouts using
:c:macro:`K_MSEC()` at the point where they are presented to the
kernel.
Obviously this comes at the cost of not being able to use new
features, like the higher precision timeout constructors or absolute
timeouts. But for many subsystems with simple needs, this may be
acceptable.
One complexity is :c:macro:`K_FOREVER`. Subsystems that might have
been able to accept this value to their millisecond API in the past no
longer can, because it is no longer an integral type. Such code
will need to use a different, integer-valued token to represent
"forever". :c:macro:`K_NO_WAIT` has the same type-safety concern,
of course, but as it is (and has always been) simply a numerical zero,
it has a natural porting path.
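One way to keep an integer millisecond API is to reserve a sentinel value for
"forever" and translate it at the kernel boundary. A sketch using stand-in
definitions (the ``STUB_*`` names below merely mimic the kernel macros for
illustration; real code would include ``zephyr/kernel.h`` and use
:c:macro:`K_MSEC` and :c:macro:`K_FOREVER`, and the ``-1`` sentinel is a
hypothetical choice):

.. code-block:: c

   #include <stdint.h>

   /* Stand-ins for the kernel timeout type and constructors. */
   typedef struct { int64_t ticks; } k_timeout_stub_t;
   #define STUB_TICKS_PER_MS 10 /* hypothetical 10 kHz tick rate */
   #define STUB_MSEC(ms) \
           ((k_timeout_stub_t){ .ticks = (int64_t)(ms) * STUB_TICKS_PER_MS })
   #define STUB_FOREVER ((k_timeout_stub_t){ .ticks = -1 })

   /* Subsystem-chosen integer sentinel meaning "wait forever". */
   #define MY_WAIT_FOREVER (-1)

   /* Translate the legacy millisecond parameter at the kernel boundary. */
   k_timeout_stub_t my_ms_to_timeout(int32_t ms)
   {
           return (ms == MY_WAIT_FOREVER) ? STUB_FOREVER : STUB_MSEC(ms);
   }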
Subsystems using ``k_timeout_t``
--------------------------------
Ideally, code that takes a "timeout" parameter specifying a time to
wait should be using the kernel native abstraction where possible.
But :c:type:`k_timeout_t` is opaque, and needs to be converted before
it can be inspected by an application.
Some conversions are simple. Code that needs to test for
:c:macro:`K_FOREVER` can simply use the :c:macro:`K_TIMEOUT_EQ()`
macro to test the opaque struct for equality and take special action.
The more complicated case is when the subsystem needs to take a
timeout and loop, waiting for it to finish while doing some processing
that may require multiple blocking operations on underlying kernel
code. For example, consider this design:
.. code-block:: c
void my_wait_for_event(struct my_subsys *obj, int32_t timeout_in_ms)
{
while (true) {
uint32_t start = k_uptime_get_32();
if (is_event_complete(obj)) {
return;
}
/* Wait for notification of state change */
k_sem_take(obj->sem, timeout_in_ms);
/* Subtract elapsed time */
timeout_in_ms -= (k_uptime_get_32() - start);
}
}
This code requires that the timeout value be inspected, which is no
longer possible. For situations like this, the new API provides the
internal :c:func:`sys_timepoint_calc` and :c:func:`sys_timepoint_timeout` routines
that convert an arbitrary timeout to and from a timepoint value based on
an uptime tick at which it will expire. So such a loop might look like:
.. code-block:: c
void my_wait_for_event(struct my_subsys *obj, k_timeout_t timeout)
{
/* Compute the end time from the timeout */
k_timepoint_t end = sys_timepoint_calc(timeout);
do {
if (is_event_complete(obj)) {
return;
}
/* Update timeout with remaining time */
timeout = sys_timepoint_timeout(end);
/* Wait for notification of state change */
k_sem_take(obj->sem, timeout);
} while (!K_TIMEOUT_EQ(timeout, K_NO_WAIT));
}
Note that :c:func:`sys_timepoint_calc` accepts special values :c:macro:`K_FOREVER`
and :c:macro:`K_NO_WAIT`, and works identically for absolute timeouts as well
as conventional ones. Conversely, :c:func:`sys_timepoint_timeout` may return
:c:macro:`K_FOREVER` or :c:macro:`K_NO_WAIT` if those were used to create
the timepoint, the later also being returned if the timepoint is now in the
past. For simple cases, :c:func:`sys_timepoint_expired` can be used as well.
But some care is still required for subsystems that use those. Note that
delta timeouts need to be interpreted relative to a "current time",
and obviously that time is the time of the call to
:c:func:`sys_timepoint_calc`. But the user expects that the time is
the time they passed the timeout to you. Care must be taken to call
this function just once, as synchronously as possible to the timeout
creation in user code. It should not be used on a "stored" timeout
value, and should never be called iteratively in a loop.
API Reference
*************
.. doxygengroup:: clock_apis
``` | /content/code_sandbox/doc/kernel/services/timing/clocks.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 3,760 |
```restructuredtext
.. _message_queues_v2:
Message Queues
##############
A :dfn:`message queue` is a kernel object that implements a simple
message queue, allowing threads and ISRs to asynchronously send and receive
fixed-size data items.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of message queues can be defined (limited only by available RAM).
Each message queue is referenced by its memory address.
A message queue has the following key properties:
* A **ring buffer** of data items that have been sent but not yet received.
* A **data item size**, measured in bytes.
* A **maximum quantity** of data items that can be queued in the ring buffer.
A message queue must be initialized before it can be used.
This sets its ring buffer to empty.
A data item can be **sent** to a message queue by a thread or an ISR.
The data item pointed at by the sending thread is copied to a waiting thread,
if one exists; otherwise the item is copied to the message queue's ring buffer,
if space is available. In either case, the size of the data area being sent
*must* equal the message queue's data item size.
If a thread attempts to send a data item when the ring buffer is full,
the sending thread may choose to wait for space to become available.
Any number of sending threads may wait simultaneously when the ring buffer
is full; when space becomes available
it is given to the highest priority sending thread that has waited the longest.
A data item can be **received** from a message queue by a thread.
The data item is copied to the area specified by the receiving thread;
the size of the receiving area *must* equal the message queue's data item size.
If a thread attempts to receive a data item when the ring buffer is empty,
the receiving thread may choose to wait for a data item to be sent.
Any number of receiving threads may wait simultaneously when the ring buffer
is empty; when a data item becomes available it is given to
the highest priority receiving thread that has waited the longest.
A thread can also **peek** at the message on the head of a message queue without
removing it from the queue.
The data item is copied to the area specified by the receiving thread;
the size of the receiving area *must* equal the message queue's data item size.
.. note::
The kernel does allow an ISR to receive an item from a message queue,
however the ISR must not attempt to wait if the message queue is empty.
.. note::
Alignment of the message queue's ring buffer is not necessary.
The underlying implementation uses :c:func:`memcpy` (which is
alignment-agnostic) and does not expose any internal pointers.
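The copy-in/copy-out semantics described above can be sketched with a minimal
single-threaded ring buffer (an illustration only; the real
:c:struct:`k_msgq` additionally hands items directly to waiting threads and
synchronizes concurrent access):

.. code-block:: c

   #include <stdint.h>
   #include <string.h>

   #define ITEM_SIZE 12  /* fixed data item size, in bytes */
   #define MAX_ITEMS 10  /* ring buffer capacity, in items */

   uint8_t ring[MAX_ITEMS][ITEM_SIZE];
   int ring_head;  /* index of oldest item */
   int ring_count; /* number of queued items */

   /* Copy an item into the buffer; -1 if full (real code may block). */
   int msgq_put_sketch(const void *item)
   {
           if (ring_count == MAX_ITEMS) {
                   return -1;
           }
           memcpy(ring[(ring_head + ring_count) % MAX_ITEMS], item, ITEM_SIZE);
           ring_count++;
           return 0;
   }

   /* Copy the oldest item out; -1 if empty (real code may block). */
   int msgq_get_sketch(void *item)
   {
           if (ring_count == 0) {
                   return -1;
           }
           memcpy(item, ring[ring_head], ITEM_SIZE);
           ring_head = (ring_head + 1) % MAX_ITEMS;
           ring_count--;
           return 0;
   }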
Implementation
**************
Defining a Message Queue
========================
A message queue is defined using a variable of type :c:struct:`k_msgq`.
It must then be initialized by calling :c:func:`k_msgq_init`.
The following code defines and initializes an empty message queue
that is capable of holding 10 items, each of which is 12 bytes long.
.. code-block:: c
struct data_item_type {
uint32_t field1;
uint32_t field2;
uint32_t field3;
};
char my_msgq_buffer[10 * sizeof(struct data_item_type)];
struct k_msgq my_msgq;
k_msgq_init(&my_msgq, my_msgq_buffer, sizeof(struct data_item_type), 10);
Alternatively, a message queue can be defined and initialized at compile time
by calling :c:macro:`K_MSGQ_DEFINE`.
The following code has the same effect as the code segment above. Observe
that the macro defines both the message queue and its buffer.
.. code-block:: c
K_MSGQ_DEFINE(my_msgq, sizeof(struct data_item_type), 10, 1);
Writing to a Message Queue
==========================
A data item is added to a message queue by calling :c:func:`k_msgq_put`.
The following code builds on the example above, and uses the message queue
to pass data items from a producing thread to one or more consuming threads.
If the message queue fills up because the consumers can't keep up, the
producing thread throws away all existing data so the newer data can be saved.
.. code-block:: c
void producer_thread(void)
{
struct data_item_type data;
while (1) {
/* create data item to send (e.g. measurement, timestamp, ...) */
data = ...
/* send data to consumers */
while (k_msgq_put(&my_msgq, &data, K_NO_WAIT) != 0) {
/* message queue is full: purge old data & try again */
k_msgq_purge(&my_msgq);
}
/* data item was successfully added to message queue */
}
}
Reading from a Message Queue
============================
A data item is taken from a message queue by calling :c:func:`k_msgq_get`.
The following code builds on the example above, and uses the message queue
to process data items generated by one or more producing threads. Note that
the return value of :c:func:`k_msgq_get` should be tested as ``-ENOMSG``
can be returned due to :c:func:`k_msgq_purge`.
.. code-block:: c
void consumer_thread(void)
{
struct data_item_type data;
while (1) {
/* get a data item */
k_msgq_get(&my_msgq, &data, K_FOREVER);
/* process data item */
...
}
}
Peeking into a Message Queue
============================
A data item is read from a message queue by calling :c:func:`k_msgq_peek`.
The following code peeks into the message queue to read the data item at the
head of the queue that is generated by one or more producing threads.
.. code-block:: c
void consumer_thread(void)
{
struct data_item_type data;
while (1) {
/* read a data item by peeking into the queue */
k_msgq_peek(&my_msgq, &data);
/* process data item */
...
}
}
Suggested Uses
**************
Use a message queue to transfer small data items between threads
in an asynchronous manner.
.. note::
A message queue can be used to transfer large data items, if desired.
However, this can increase interrupt latency as interrupts are locked
while a data item is written or read. The time to write or read a data item
increases linearly with its size since the item is copied in its entirety
to or from the buffer in memory. For this reason, it is usually preferable
to transfer large data items by exchanging a pointer to the data item,
rather than the data item itself.
A synchronous transfer can be achieved by using the kernel's mailbox
object type.
Configuration Options
*********************
Related configuration options:
* None.
API Reference
*************
.. doxygengroup:: msgq_apis
``` | /content/code_sandbox/doc/kernel/services/data_passing/message_queues.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 1,496 |
```restructuredtext
.. _stacks_v2:
Stacks
######
A :dfn:`stack` is a kernel object that implements a traditional
last in, first out (LIFO) queue, allowing threads and ISRs
to add and remove a limited number of integer data values.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of stacks can be defined (limited only by available RAM). Each stack
is referenced by its memory address.
A stack has the following key properties:
* A **queue** of integer data values that have been added but not yet removed.
The queue is implemented using an array of ``stack_data_t`` values
and must be aligned on a native word boundary.
The ``stack_data_t`` type corresponds to the native word size, i.e. 32 or
64 bits depending on the CPU architecture and compilation mode.
* A **maximum quantity** of data values that can be queued in the array.
A stack must be initialized before it can be used. This sets its queue to empty.
A data value can be **added** to a stack by a thread or an ISR.
The value is given directly to a waiting thread, if one exists;
otherwise the value is added to the stack's queue.
.. note::
If :kconfig:option:`CONFIG_NO_RUNTIME_CHECKS` is enabled, the kernel will *not* detect
and prevent attempts to add a data value to a stack that has already reached
its maximum quantity of queued values. Adding a data value to a stack that is
already full will result in array overflow, and lead to unpredictable behavior.
A data value may be **removed** from a stack by a thread.
If the stack's queue is empty a thread may choose to wait for it to be given.
Any number of threads may wait on an empty stack simultaneously.
When a data item is added, it is given to the highest priority thread
that has waited longest.
.. note::
The kernel does allow an ISR to remove an item from a stack, however
the ISR must not attempt to wait if the stack is empty.
Implementation
**************
Defining a Stack
================
A stack is defined using a variable of type :c:struct:`k_stack`.
It must then be initialized by calling :c:func:`k_stack_init` or
:c:func:`k_stack_alloc_init`. In the latter case, a buffer is not
provided and it is instead allocated from the calling thread's resource
pool.
The following code defines and initializes an empty stack capable of holding
up to ten word-sized data values.
.. code-block:: c
#define MAX_ITEMS 10
stack_data_t my_stack_array[MAX_ITEMS];
struct k_stack my_stack;
k_stack_init(&my_stack, my_stack_array, MAX_ITEMS);
Alternatively, a stack can be defined and initialized at compile time
by calling :c:macro:`K_STACK_DEFINE`.
The following code has the same effect as the code segment above. Observe
that the macro defines both the stack and its array of data values.
.. code-block:: c
K_STACK_DEFINE(my_stack, MAX_ITEMS);
Pushing to a Stack
==================
A data item is added to a stack by calling :c:func:`k_stack_push`.
The following code builds on the example above, and shows how a thread
can create a pool of data structures by saving their memory addresses
in a stack.
.. code-block:: c
/* define array of data structures */
struct my_buffer_type {
int field1;
...
};
struct my_buffer_type my_buffers[MAX_ITEMS];
/* save address of each data structure in a stack */
for (int i = 0; i < MAX_ITEMS; i++) {
k_stack_push(&my_stack, (stack_data_t)&my_buffers[i]);
}
Popping from a Stack
====================
A data item is taken from a stack by calling :c:func:`k_stack_pop`.
The following code builds on the example above, and shows how a thread
can dynamically allocate an unused data structure.
When the data structure is no longer required, the thread must push
its address back on the stack to allow the data structure to be reused.
.. code-block:: c
struct my_buffer_type *new_buffer;
k_stack_pop(&my_stack, (stack_data_t *)&new_buffer, K_FOREVER);
new_buffer->field1 = ...
Suggested Uses
**************
Use a stack to store and retrieve integer data values in a "last in,
first out" manner, when the maximum number of stored items is known.
Configuration Options
*********************
Related configuration options:
* None.
API Reference
*************
.. doxygengroup:: stack_apis
``` | /content/code_sandbox/doc/kernel/services/data_passing/stacks.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 995 |
```restructuredtext
.. _lifos_v2:
LIFOs
#####
A :dfn:`LIFO` is a kernel object that implements a traditional
last in, first out (LIFO) queue, allowing threads and ISRs
to add and remove data items of any size.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of LIFOs can be defined (limited only by available RAM). Each LIFO is
referenced by its memory address.
A LIFO has the following key properties:
* A **queue** of data items that have been added but not yet removed.
The queue is implemented as a simple linked list.
A LIFO must be initialized before it can be used. This sets its queue to empty.
LIFO data items must be aligned on a word boundary, as the kernel reserves
the first word of an item for use as a pointer to the next data item in the
queue. Consequently, a data item that holds N bytes of application data
requires N+4 (on 32-bit systems) or N+8 (on 64-bit systems) bytes of memory.
There are no alignment or reserved
space requirements for data items if they are added with
:c:func:`k_lifo_alloc_put`, instead additional memory is temporarily
allocated from the calling thread's resource pool.
.. note::
LIFO data items are restricted to a single active instance across all LIFO
data queues. Any attempt to re-add a LIFO data item to a queue before
it has been removed from the queue to which it was previously added will
result in undefined behavior.
A data item may be **added** to a LIFO by a thread or an ISR.
The item is given directly to a waiting thread, if one exists;
otherwise the item is added to the LIFO's queue.
There is no limit to the number of items that may be queued.
A data item may be **removed** from a LIFO by a thread. If the LIFO's queue
is empty a thread may choose to wait for a data item to be given.
Any number of threads may wait on an empty LIFO simultaneously.
When a data item is added, it is given to the highest priority thread
that has waited longest.
.. note::
The kernel does allow an ISR to remove an item from a LIFO, however
the ISR must not attempt to wait if the LIFO is empty.
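The reserved first word described above turns every data item into a node of
an intrusive singly linked list, so no extra memory is needed per queued item.
A minimal single-threaded sketch of the idea (not the kernel's actual
implementation, which also handles waiting threads and synchronization):

.. code-block:: c

   #include <stddef.h>

   /* Head of the intrusive LIFO list. */
   void *lifo_head;

   /* Push: the item's first word is overwritten with the old head. */
   void lifo_put_sketch(void *item)
   {
           *(void **)item = lifo_head;
           lifo_head = item;
   }

   /* Pop: unlink and return the most recently added item, or NULL. */
   void *lifo_get_sketch(void)
   {
           void *item = lifo_head;

           if (item != NULL) {
                   lifo_head = *(void **)item;
           }
           return item;
   }

This is why each item's struct must start with a ``void *`` member: pushing an
item overwrites that reserved first word with the link to the next item.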
Implementation
**************
Defining a LIFO
===============
A LIFO is defined using a variable of type :c:struct:`k_lifo`.
It must then be initialized by calling :c:func:`k_lifo_init`.
The following defines and initializes an empty LIFO.
.. code-block:: c
struct k_lifo my_lifo;
k_lifo_init(&my_lifo);
Alternatively, an empty LIFO can be defined and initialized at compile time
by calling :c:macro:`K_LIFO_DEFINE`.
The following code has the same effect as the code segment above.
.. code-block:: c
K_LIFO_DEFINE(my_lifo);
Writing to a LIFO
=================
A data item is added to a LIFO by calling :c:func:`k_lifo_put`.
The following code builds on the example above, and uses the LIFO
to send data to one or more consumer threads.
.. code-block:: c
struct data_item_t {
void *lifo_reserved; /* 1st word reserved for use by LIFO */
...
};
struct data_item_t tx_data;
void producer_thread(int unused1, int unused2, int unused3)
{
while (1) {
/* create data item to send */
tx_data = ...
/* send data to consumers */
k_lifo_put(&my_lifo, &tx_data);
...
}
}
A data item can be added to a LIFO with :c:func:`k_lifo_alloc_put`.
With this API, there is no need to reserve space for the kernel's use in
the data item; instead, additional memory will be allocated from the calling
thread's resource pool until the item is read.
Reading from a LIFO
===================
A data item is removed from a LIFO by calling :c:func:`k_lifo_get`.
The following code builds on the example above, and uses the LIFO
to obtain data items from a producer thread,
which are then processed in some manner.
.. code-block:: c
void consumer_thread(int unused1, int unused2, int unused3)
{
struct data_item_t *rx_data;
while (1) {
rx_data = k_lifo_get(&my_lifo, K_FOREVER);
/* process LIFO data item */
...
}
}
Suggested Uses
**************
Use a LIFO to asynchronously transfer data items of arbitrary size
in a "last in, first out" manner.
Configuration Options
*********************
Related configuration options:
* None.
API Reference
*************
.. doxygengroup:: lifo_apis
``` | /content/code_sandbox/doc/kernel/services/data_passing/lifos.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 1,053 |
```restructuredtext
.. _smp_arch:
Symmetric Multiprocessing
#########################
On multiprocessor architectures, Zephyr supports the use of multiple
physical CPUs running Zephyr application code. This support is
"symmetric" in the sense that no specific CPU is treated specially by
default. Any processor is capable of running any Zephyr thread, with
access to all standard Zephyr APIs supported.
No special application code needs to be written to take advantage of
this feature. If there are two Zephyr application threads runnable on
a supported dual processor device, they will both run simultaneously.
SMP configuration is controlled under the :kconfig:option:`CONFIG_SMP` kconfig
variable. This must be set to "y" to enable SMP features, otherwise
a uniprocessor kernel will be built. In general the platform default
will have enabled this anywhere it's supported. When enabled, the
number of physical CPUs available is visible at build time as
:kconfig:option:`CONFIG_MP_MAX_NUM_CPUS`. Likewise, the default for this will be the
number of available CPUs on the platform and it is not expected that
typical apps will change it. But it is legal and supported to set
this to a smaller (but obviously not larger) number for special
purposes (e.g. for testing, or to reserve a physical CPU for running
non-Zephyr code).
Synchronization
***************
At the application level, core Zephyr IPC and synchronization
primitives all behave identically under an SMP kernel. For example
semaphores used to implement blocking mutual exclusion continue to be
a proper application choice.
At the lowest level, however, Zephyr code has often used the
:c:func:`irq_lock`/:c:func:`irq_unlock` primitives to implement fine grained
critical sections using interrupt masking. These APIs continue to
work via an emulation layer (see below), but the masking technique
does not: the fact that your CPU will not be interrupted while you are
in your critical section says nothing about whether a different CPU
will be running simultaneously and be inspecting or modifying the same
data!
Spinlocks
=========
SMP systems provide a more constrained :c:func:`k_spin_lock` primitive
that not only masks interrupts locally, as done by :c:func:`irq_lock`, but
also atomically validates that a shared lock variable has been
modified before returning to the caller, "spinning" on the check if
needed to wait for the other CPU to exit the lock. The default Zephyr
implementation of :c:func:`k_spin_lock` and :c:func:`k_spin_unlock` is built
on top of the pre-existing :c:struct:`atomic_` layer (itself usually
implemented using compiler intrinsics), though facilities exist for
architectures to define their own for performance reasons.
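The core spinning technique can be sketched in portable C11 atomics. This is
an illustrative model only: Zephyr's :c:func:`k_spin_lock` additionally masks
interrupts and may use architecture-specific primitives, and the function
names here are hypothetical:

```c
#include <pthread.h>
#include <stdatomic.h>

/* Minimal spinlock model built on an atomic test-and-set flag. */
static atomic_flag lock = ATOMIC_FLAG_INIT;
static long counter;

static void spin_lock(void)
{
    /* Atomically test-and-set; spin while another CPU holds the lock. */
    while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire)) {
        /* busy-wait */
    }
}

static void spin_unlock(void)
{
    atomic_flag_clear_explicit(&lock, memory_order_release);
}

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        spin_lock();
        counter++;           /* critical section: one writer at a time */
        spin_unlock();
    }
    return NULL;
}
```

The acquire/release memory ordering is what makes data written inside the
critical section visible to the next CPU that takes the lock.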
One important difference between IRQ locks and spinlocks is that the
earlier API was naturally recursive: the lock was global, so it was
legal to acquire a nested lock inside of a critical section.
Spinlocks are separable: you can have many locks for separate
subsystems or data structures, preventing CPUs from contending on a
single global resource. But that means that spinlocks must not be
used recursively. Code that holds a specific lock must not try to
re-acquire it or it will deadlock (it is perfectly legal to nest
**distinct** spinlocks, however). A validation layer is available to
detect and report bugs like this.
When used on a uniprocessor system, the data component of the spinlock
(the atomic lock variable) is unnecessary and elided. Except for the
recursive semantics above, spinlocks in single-CPU contexts produce
identical code to legacy IRQ locks. In fact the entirety of the
Zephyr core kernel has now been ported to use spinlocks exclusively.
Legacy irq_lock() emulation
===========================
For the benefit of applications written to the uniprocessor locking
API, :c:func:`irq_lock` and :c:func:`irq_unlock` continue to work compatibly on
SMP systems with identical semantics to their legacy versions. They
are implemented as a single global spinlock, with a nesting count and
the ability to be atomically reacquired on context switch into locked
threads. The kernel will ensure that only one thread across all CPUs
can hold the lock at any time, that it is released on context switch,
and that it is re-acquired when necessary to restore the lock state
when a thread is switched in. Other CPUs will spin waiting for the
release to happen.
The overhead involved in this process has measurable performance
impact, however. Unlike uniprocessor apps, SMP apps using
:c:func:`irq_lock` are not simply invoking a very short (often ~1
instruction) interrupt masking operation. That, and the fact that the
IRQ lock is global, means that code expecting to be run in an SMP
context should be using the spinlock API wherever possible.
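The nesting behavior of the emulated lock can be modeled as follows. This is
a simplified single-CPU sketch with hypothetical names; the real emulation
also acquires and releases the underlying global spinlock and restores the
lock state across context switches:

```c
/* Model of the legacy irq_lock() emulation: one global lock plus a
 * nesting count, so nested lock/unlock pairs remain legal. */
static int nest_count;

static int emu_irq_lock(void)
{
    /* In the real emulation the outermost acquisition would take the
     * global spinlock; nested calls just bump the count. */
    return nest_count++;   /* return the previous level as the key */
}

static void emu_irq_unlock(int key)
{
    nest_count = key;      /* restore the previous nesting level */
    /* The real emulation releases the global spinlock once the
     * count returns to zero. */
}
```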
CPU Mask
********
It is often desirable for real time applications to deliberately
partition work across physical CPUs instead of relying solely on the
kernel scheduler to decide on which threads to execute. Zephyr
provides an API, controlled by the :kconfig:option:`CONFIG_SCHED_CPU_MASK`
kconfig variable, which can associate a specific set of CPUs with each
thread, indicating on which CPUs it can run.
By default, new threads can run on any CPU. Calling
:c:func:`k_thread_cpu_mask_disable` with a particular CPU ID will prevent
that thread from running on that CPU in the future. Likewise
:c:func:`k_thread_cpu_mask_enable` will re-enable execution. There are also
:c:func:`k_thread_cpu_mask_clear` and :c:func:`k_thread_cpu_mask_enable_all` APIs
available for convenience. For obvious reasons, these APIs are
illegal if called on a runnable thread. The thread must be blocked or
suspended, otherwise an ``-EINVAL`` will be returned.
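Conceptually, the mask is a per-thread bit field with one bit per CPU.
A plain-C model of the operations follows; the names are illustrative and
not the kernel's internals:

```c
#include <stdint.h>

/* Model of a per-thread CPU affinity mask: bit n set means the thread
 * may run on CPU n.  New threads start with every bit set. */
#define NUM_CPUS 4

static uint32_t mask_enable_all(void)
{
    return (UINT32_C(1) << NUM_CPUS) - 1;
}

static uint32_t mask_disable(uint32_t mask, int cpu)
{
    return mask & ~(UINT32_C(1) << cpu);
}

static uint32_t mask_enable(uint32_t mask, int cpu)
{
    return mask | (UINT32_C(1) << cpu);
}

/* The scheduler's per-CPU eligibility test for a thread. */
static int runnable_on(uint32_t mask, int cpu)
{
    return (mask >> cpu) & 1;
}
```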
Note that when this feature is enabled, the scheduler algorithm
involved in doing the per-CPU mask test requires that the list be
traversed in full. The kernel does not keep a per-CPU run queue.
That means that the performance benefits from the
:kconfig:option:`CONFIG_SCHED_SCALABLE` and :kconfig:option:`CONFIG_SCHED_MULTIQ`
scheduler backends cannot be realized. CPU mask processing is
available only when :kconfig:option:`CONFIG_SCHED_DUMB` is the selected
backend. This requirement is enforced in the configuration layer.
SMP Boot Process
****************
A Zephyr SMP kernel begins boot identically to a uniprocessor kernel.
Auxiliary CPUs begin in a disabled state in the architecture layer.
All standard kernel initialization, including device initialization,
happens on a single CPU before other CPUs are brought online.
Just before entering the application :c:func:`main` function, the kernel
calls :c:func:`z_smp_init` to launch the SMP initialization process. This
enumerates over the configured CPUs, calling into the architecture
layer using :c:func:`arch_cpu_start` for each one. This function is
passed a memory region to use as a stack on the foreign CPU (in
practice it uses the area that will become that CPU's interrupt
stack), the address of a local :c:func:`smp_init_top` callback function to
run on that CPU, and a pointer to a "start flag" address which will be
used as an atomic signal.
The local SMP initialization (:c:func:`smp_init_top`) on each CPU is then
invoked by the architecture layer. Note that interrupts are still
masked at this point. This routine is responsible for calling
:c:func:`smp_timer_init` to set up any needed state in the timer driver. On
many architectures the timer is a per-CPU device and needs to be
configured specially on auxiliary CPUs. Then it waits (spinning) for
the atomic "start flag" to be released in the main thread, to
guarantee that all SMP initialization is complete before any Zephyr
application code runs, and finally calls :c:func:`z_swap` to transfer
control to the appropriate runnable thread via the standard scheduler
API.
.. figure:: smpinit.svg
:align: center
:alt: SMP Initialization
:figclass: align-center
Example SMP initialization process, showing a configuration with
two CPUs and two app threads which begin operating simultaneously.
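The "start flag" handshake described above can be modeled with an atomic
flag and ordinary threads standing in for auxiliary CPUs. This is an
illustrative sketch with hypothetical names, not kernel code:

```c
#include <pthread.h>
#include <stdatomic.h>

/* Model of the SMP start flag: auxiliary CPUs spin until the boot CPU
 * releases the flag, guaranteeing all initialization has completed
 * before any application code runs on them. */
static atomic_int start_flag;    /* 0 = held, 1 = released */
static atomic_int cpus_started;

static void *aux_cpu(void *arg)
{
    (void)arg;
    /* smp_init_top(): spin on the flag with interrupts still masked. */
    while (atomic_load(&start_flag) == 0) {
        /* busy-wait */
    }
    atomic_fetch_add(&cpus_started, 1);  /* stand-in for z_swap() */
    return NULL;
}
```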
Interprocessor Interrupts
*************************
When running in multiprocessor environments, it is occasionally the
case that state modified on the local CPU needs to be synchronously
handled on a different processor.
One example is the Zephyr :c:func:`k_thread_abort` API, which cannot return
until the thread that had been aborted is no longer runnable. If it
is currently running on another CPU, that becomes difficult to
implement.
Another is low power idle. It is a firm requirement on many devices
that system idle be implemented using a low-power mode with as many
interrupts (including periodic timer interrupts) disabled or deferred
as is possible. If a CPU is in such a state, and on another CPU a
thread becomes runnable, the idle CPU has no way to "wake up" to
handle the newly-runnable load.
So where possible, Zephyr SMP architectures should implement an
interprocessor interrupt. The current framework is very simple: the
architecture provides at least an :c:func:`arch_sched_broadcast_ipi` call,
which when invoked will flag an interrupt on all CPUs (except the current one,
though that is allowed behavior). If the architecture supports directed IPIs
(see :kconfig:option:`CONFIG_ARCH_HAS_DIRECTED_IPIS`), then the
architecture also provides a :c:func:`arch_sched_directed_ipi` call, which
when invoked will flag an interrupt on the specified CPUs. When an interrupt is
flagged on the CPUs, the :c:func:`z_sched_ipi` function implemented in the
scheduler will get invoked on those CPUs. The expectation is that these
APIs will evolve over time to encompass more functionality (e.g. cross-CPU
calls), and that the scheduler-specific calls here will be implemented in
terms of a more general framework.
Note that not all SMP architectures will have a usable IPI mechanism
(either missing, or just undocumented/unimplemented). In those cases
Zephyr provides fallback behavior that is correct, but perhaps
suboptimal.
Using this, :c:func:`k_thread_abort` becomes only slightly more
complicated in SMP: for the case where a thread is actually running on
another CPU (we can detect this atomically inside the scheduler), we
broadcast an IPI and spin, waiting for the thread to either become
"DEAD" or for it to re-enter the queue (in which case we terminate it
the same way we would have in uniprocessor mode). Note that the
"aborted" check happens on any interrupt exit, so there is no special
handling needed in the IPI per se. This allows us to implement a
reasonable fallback when IPI is not available: we can simply spin,
waiting until the foreign CPU receives any interrupt, though this may
be a much longer time!
Likewise idle wakeups are trivially implementable with an empty IPI
handler. If a thread is added to an empty run queue (i.e. there may
have been idle CPUs), we broadcast an IPI. A foreign CPU will then be
able to see the new thread when exiting from the interrupt and will
switch to it if available.
Without an IPI, however, a low power idle that requires an interrupt
will not work to synchronously run new threads. The workaround in
that case is more invasive: Zephyr will **not** enter the system idle
handler and will instead spin in its idle loop, testing the scheduler
state at high frequency (not spinning on it though, as that would
involve severe lock contention) for new threads. The expectation is
that power constrained SMP applications are always going to provide an
IPI, and this code will only be used for testing purposes or on
systems without power consumption requirements.
IPI Cascades
============
The kernel can not control the order in which IPIs are processed by the CPUs
in the system. In general, this is not an issue and a single set of IPIs is
sufficient to trigger a reschedule on the N CPUs that results in them
scheduling the highest N priority ready threads to execute. When CPU masking
is used, there may be more than one valid set of threads (not to be confused
with an optimal set of threads) that can be scheduled on the N CPUs and a
single set of IPIs may be insufficient to result in any of these valid sets.
.. note::
When CPU masking is not in play, the optimal set of threads is the same
as the valid set of threads. However when CPU masking is in play, there
may be more than one valid set--one of which may be optimal.
To better illustrate the distinction, consider a 2-CPU system with ready
threads T1 and T2 at priorities 1 and 2 respectively. Let T2 be pinned to
CPU0 and T1 not be pinned. If CPU0 is executing T2 and CPU1 executing T1,
then this set is both valid and optimal. However, if CPU0 is executing
T1 and CPU1 is idling, then this too would be valid though not optimal.
In those cases where a single set of IPIs is not sufficient to generate a valid
set, the resulting set of executing threads are expected to be close to a valid
set, and subsequent IPIs can generally be expected to correct the situation
soon. However, for cases where neither the approximation nor the delay are
acceptable, enabling :kconfig:option:`CONFIG_SCHED_IPI_CASCADE` will allow the
kernel to generate cascading IPIs until the kernel has selected a valid set of
ready threads for the CPUs.
There are three types of costs/penalties associated with the IPI cascades--and
for these reasons they are disabled by default. The first is a cost incurred
by the CPU producing the IPI when a new thread preempts the old thread as checks
must be done to compare the old thread against the threads executing on the
other CPUs. The second is a cost incurred by the CPUs receiving the IPIs as
they must be processed. The third is the apparent sputtering of a thread as it
"winks in" and then "winks out" due to cascades stemming from the
aforementioned first cost.
SMP Kernel Internals
********************
In general, Zephyr kernel code is SMP-agnostic and, like application
code, will work correctly regardless of the number of CPUs available.
But in a few areas there are notable changes in structure or behavior.
Per-CPU data
============
Many elements of the core kernel data need to be implemented for each
CPU in SMP mode. For example, the ``_current`` thread pointer obviously
needs to reflect what is running locally, since there are many threads
running concurrently. Likewise a kernel-provided interrupt stack
needs to be created and assigned for each physical CPU, as does the
interrupt nesting count used to detect ISR state.
These fields are now moved into a separate struct :c:struct:`_cpu` instance
within the :c:struct:`_kernel` struct, which has a ``cpus[]`` array indexed by ID.
Compatibility fields are provided for legacy uniprocessor code trying
to access the fields of ``cpus[0]`` using the older syntax and assembly
offsets.
Note that an important requirement on the architecture layer is that
the pointer to this CPU struct be available rapidly when in kernel
context. The expectation is that :c:func:`arch_curr_cpu` will be
implemented using a CPU-provided register or addressing mode that can
store this value across arbitrary context switches or interrupts and
make it available to any kernel-mode code.
Similarly, where on a uniprocessor system Zephyr could simply create a
global "idle thread" at the lowest priority, in SMP we may need one
for each CPU. This makes the internal "_is_idle()" predicate test
in the scheduler, which is a performance-critical hot path, more
complicated than simply testing the thread pointer for equality with a
known static variable. In SMP mode, idle threads are distinguished by
a separate field in the thread struct.
Switch-based context switching
==============================
The traditional Zephyr context switch primitive has been :c:func:`z_swap`.
Unfortunately, this function takes no argument specifying a thread to
switch to. The expectation has always been that the scheduler has
already made its preemption decision when its state was last modified
and cached the resulting "next thread" pointer in a location where
architecture context switch primitives can find it via a simple struct
offset. That technique will not work in SMP, because the other CPU
may have modified scheduler state since the current CPU last exited
the scheduler (for example: it might already be running that cached
thread!).
Instead, the SMP "switch to" decision needs to be made synchronously
with the swap call, and as we don't want per-architecture assembly
code to be handling scheduler internal state, Zephyr requires a
somewhat lower-level context switch primitive for SMP systems:
:c:func:`arch_switch` is always called with interrupts masked, and takes
exactly two arguments. The first is an opaque (architecture defined)
handle to the context to which it should switch, and the second is a
pointer to such a handle into which it should store the handle
resulting from the thread that is being switched out.
The kernel then implements a portable :c:func:`z_swap` implementation on top
of this primitive which includes the relevant scheduler logic in a
location where the architecture doesn't need to understand it.
Similarly, on interrupt exit, switch-based architectures are expected
to call :c:func:`z_get_next_switch_handle` to retrieve the next thread to
run from the scheduler. The argument to :c:func:`z_get_next_switch_handle`
is either the interrupted thread's "handle" reflecting the same opaque type
used by :c:func:`arch_switch`, or NULL if that thread cannot be released
to the scheduler just yet. The choice between a handle value or NULL
depends on the way CPU interrupt mode is implemented.
Architectures with a large CPU register file would typically preserve only
the caller-saved registers on the current thread's stack when interrupted
in order to minimize interrupt latency, and preserve the callee-saved
registers only when :c:func:`arch_switch` is called to minimize context
switching latency. Such architectures must use NULL as the argument to
:c:func:`z_get_next_switch_handle` to determine if there is a new thread
to schedule, and follow through with their own :c:func:`arch_switch` or
derivative if so, or directly leave interrupt mode otherwise.
In the former case it is up to that switch code to store the handle
resulting from the thread that is being switched out in that thread's
"switch_handle" field after its context has fully been saved.
Architectures whose entry in interrupt mode already preserves the entire
thread state may pass that thread's handle directly to
:c:func:`z_get_next_switch_handle` and be done in one step.
Note that while SMP requires :kconfig:option:`CONFIG_USE_SWITCH`, the reverse is not
true. A uniprocessor architecture built with :kconfig:option:`CONFIG_SMP` set to No might
still decide to implement its context switching using
:c:func:`arch_switch`.
API Reference
**************
.. doxygengroup:: spinlock_apis
``` | /content/code_sandbox/doc/kernel/services/smp/smp.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 4,275 |
```restructuredtext
.. _queues:
Queues
######
A Queue in Zephyr is a kernel object that implements a traditional queue, allowing
threads and ISRs to add and remove data items of any size. The queue is similar
to a FIFO and serves as the underlying implementation for both :ref:`k_fifo
<fifos_v2>` and :ref:`k_lifo <lifos_v2>`. For more information on usage see
:ref:`k_fifo <fifos_v2>`.
Configuration Options
*********************
Related configuration options:
* None
API Reference
*************
.. doxygengroup:: queue_apis
``` | /content/code_sandbox/doc/kernel/services/data_passing/queues.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 132 |
```restructuredtext
.. _fifos_v2:
FIFOs
#####
A :dfn:`FIFO` is a kernel object that implements a traditional
first in, first out (FIFO) queue, allowing threads and ISRs
to add and remove data items of any size.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of FIFOs can be defined (limited only by available RAM). Each FIFO is
referenced by its memory address.
A FIFO has the following key properties:
* A **queue** of data items that have been added but not yet removed.
The queue is implemented as a simple linked list.
A FIFO must be initialized before it can be used. This sets its queue to empty.
FIFO data items must be aligned on a word boundary, as the kernel reserves
the first word of an item for use as a pointer to the next data item in
the queue. Consequently, a data item that holds N bytes of application
data requires N+4 (or N+8) bytes of memory. There are no alignment or
reserved space requirements for data items if they are added with
:c:func:`k_fifo_alloc_put`; instead, additional memory is temporarily
allocated from the calling thread's resource pool.
.. note::
FIFO data items are restricted to a single active instance across all FIFO
data queues. Any attempt to re-add a FIFO data item to a queue before
it has been removed from the queue to which it was previously added will
result in undefined behavior.
A data item may be **added** to a FIFO by a thread or an ISR.
The item is given directly to a waiting thread, if one exists;
otherwise the item is added to the FIFO's queue.
There is no limit to the number of items that may be queued.
A data item may be **removed** from a FIFO by a thread. If the FIFO's queue
is empty a thread may choose to wait for a data item to be given.
Any number of threads may wait on an empty FIFO simultaneously.
When a data item is added, it is given to the highest priority thread
that has waited longest.
.. note::
The kernel does allow an ISR to remove an item from a FIFO; however,
the ISR must not attempt to wait if the FIFO is empty.
If desired, **multiple data items** can be added to a FIFO in a single operation
if they are chained together into a singly-linked list. This capability can be
useful if multiple writers are adding sets of related data items to the FIFO,
as it ensures the data items in each set are not interleaved with other data
items. Adding multiple data items to a FIFO is also more efficient than adding
them one at a time, and can be used to guarantee that anyone who removes
the first data item in a set will be able to remove the remaining data items
without waiting.
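The chained add can be modeled in plain C: the pre-linked set is spliced
onto the queue's tail in a single step, so it can never be interleaved with
another writer's items. This is an illustrative sketch with hypothetical
names, not the Zephyr implementation:

```c
#include <stddef.h>

/* Model of adding a pre-chained list of items to a FIFO in one
 * operation (compare k_fifo_put_list). */
struct item {
    struct item *next;   /* 1st word reserved for the queue link */
    int payload;
};

struct fifo {
    struct item *head;
    struct item *tail;
};

static void fifo_put_list(struct fifo *f, struct item *first,
                          struct item *last)
{
    last->next = NULL;
    if (f->tail != NULL) {
        f->tail->next = first;   /* splice the whole set at the tail */
    } else {
        f->head = first;
    }
    f->tail = last;
}

static struct item *fifo_get(struct fifo *f)
{
    struct item *it = f->head;

    if (it != NULL) {
        f->head = it->next;
        if (f->head == NULL) {
            f->tail = NULL;
        }
    }
    return it;
}
```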
Implementation
**************
Defining a FIFO
===============
A FIFO is defined using a variable of type :c:struct:`k_fifo`.
It must then be initialized by calling :c:func:`k_fifo_init`.
The following code defines and initializes an empty FIFO.
.. code-block:: c
struct k_fifo my_fifo;
k_fifo_init(&my_fifo);
Alternatively, an empty FIFO can be defined and initialized at compile time
by calling :c:macro:`K_FIFO_DEFINE`.
The following code has the same effect as the code segment above.
.. code-block:: c
K_FIFO_DEFINE(my_fifo);
Writing to a FIFO
=================
A data item is added to a FIFO by calling :c:func:`k_fifo_put`.
The following code builds on the example above, and uses the FIFO
to send data to one or more consumer threads.
.. code-block:: c
struct data_item_t {
void *fifo_reserved; /* 1st word reserved for use by FIFO */
...
};
struct data_item_t tx_data;
void producer_thread(int unused1, int unused2, int unused3)
{
while (1) {
/* create data item to send */
tx_data = ...
/* send data to consumers */
k_fifo_put(&my_fifo, &tx_data);
...
}
}
Additionally, a singly-linked list of data items can be added to a FIFO
by calling :c:func:`k_fifo_put_list` or :c:func:`k_fifo_put_slist`.
Finally, a data item can be added to a FIFO with :c:func:`k_fifo_alloc_put`.
With this API, there is no need to reserve space for the kernel's use in
the data item; instead, additional memory will be allocated from the calling
thread's resource pool until the item is read.
Reading from a FIFO
===================
A data item is removed from a FIFO by calling :c:func:`k_fifo_get`.
The following code builds on the example above, and uses the FIFO
to obtain data items from a producer thread,
which are then processed in some manner.
.. code-block:: c
void consumer_thread(int unused1, int unused2, int unused3)
{
struct data_item_t *rx_data;
while (1) {
rx_data = k_fifo_get(&my_fifo, K_FOREVER);
/* process FIFO data item */
...
}
}
Suggested Uses
**************
Use a FIFO to asynchronously transfer data items of arbitrary size
in a "first in, first out" manner.
Configuration Options
*********************
Related configuration options:
* None
API Reference
*************
.. doxygengroup:: fifo_apis
``` | /content/code_sandbox/doc/kernel/services/data_passing/fifos.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 1,171 |
```restructuredtext
.. _pipes_v2:
Pipes
#####
A :dfn:`pipe` is a kernel object that allows a thread to send a byte stream
to another thread. Pipes can be used to synchronously transfer chunks of data
in whole or in part.
.. contents::
:local:
:depth: 2
Concepts
********
The pipe can be configured with a ring buffer which holds data that has been
sent but not yet received; alternatively, the pipe may have no ring buffer.
Any number of pipes can be defined (limited only by available RAM). Each pipe is
referenced by its memory address.
A pipe has the following key property:
* A **size** that indicates the size of the pipe's ring buffer. Note that a
size of zero defines a pipe with no ring buffer.
A pipe must be initialized before it can be used. The pipe is initially empty.
Data is synchronously **sent** either in whole or in part to a pipe by a
thread. If the specified minimum number of bytes can not be immediately
satisfied, then the operation will either fail immediately or attempt to send
as many bytes as possible and then pend in the hope that the send can be
completed later. Accepted data is either copied to the pipe's ring buffer
or directly to the waiting reader(s).
Data is synchronously **received** from a pipe by a thread. If the specified
minimum number of bytes can not be immediately satisfied, then the operation
will either fail immediately or attempt to receive as many bytes as possible
and then pend in the hope that the receive can be completed later. Accepted
data is either copied from the pipe's ring buffer or directly from the
waiting sender(s).
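The "accept at least the minimum or fail" behavior can be modeled with a
simple ring buffer. This plain-C sketch is illustrative only; the real
:c:func:`k_pipe_put` can also block and hand bytes directly to waiting
readers, and the names here are hypothetical:

```c
#include <stddef.h>
#include <string.h>

/* Model of a pipe write: if at least min_xfer bytes fit in the ring
 * buffer the call succeeds and reports how many bytes were accepted;
 * otherwise it fails without copying anything. */
#define RING_SIZE 8

static unsigned char ring[RING_SIZE];
static size_t used;   /* bytes currently buffered */

static int pipe_put(const unsigned char *data, size_t len,
                    size_t min_xfer, size_t *written)
{
    size_t space = RING_SIZE - used;

    if (space < min_xfer) {
        *written = 0;
        return -1;            /* minimum could not be satisfied */
    }
    *written = (len < space) ? len : space;  /* accept what fits */
    memcpy(&ring[used], data, *written);
    used += *written;
    return 0;
}
```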
Data may also be **flushed** from a pipe by a thread. Flushing can be performed
either on the entire pipe or on only its ring buffer. Flushing the entire pipe
is equivalent to reading both the data in the ring buffer **and** the data
waiting to be written into a giant temporary buffer which is then discarded. Flushing
the ring buffer is equivalent to reading **only** the data in the ring buffer
into a temporary buffer which is then discarded. Flushing the ring buffer does
not guarantee that the ring buffer will stay empty; flushing it may allow a
pended writer to fill the ring buffer.
.. note::
Flushing does not in practice allocate or use additional buffers.
.. note::
The kernel does allow an ISR to flush a pipe. It also allows an ISR
to send/receive data to/from one, provided it does not attempt
to wait for space/data.
Implementation
**************
A pipe is defined using a variable of type :c:struct:`k_pipe` and an
optional character buffer of type ``unsigned char``. It must then be
initialized by calling :c:func:`k_pipe_init`.
The following code defines and initializes an empty pipe that has a ring
buffer capable of holding 100 bytes and is aligned to a 4-byte boundary.
.. code-block:: c
unsigned char __aligned(4) my_ring_buffer[100];
struct k_pipe my_pipe;
k_pipe_init(&my_pipe, my_ring_buffer, sizeof(my_ring_buffer));
Alternatively, a pipe can be defined and initialized at compile time by
calling :c:macro:`K_PIPE_DEFINE`.
The following code has the same effect as the code segment above. Observe
that the macro defines both the pipe and its ring buffer.
.. code-block:: c
K_PIPE_DEFINE(my_pipe, 100, 4);
Writing to a Pipe
=================
Data is added to a pipe by calling :c:func:`k_pipe_put`.
The following code builds on the example above, and uses the pipe to pass
data from a producing thread to one or more consuming threads. If the pipe's
ring buffer fills up because the consumers can't keep up, the producing thread
waits for a specified amount of time.
.. code-block:: c
struct message_header {
...
};
void producer_thread(void)
{
unsigned char *data;
size_t total_size;
size_t bytes_written;
int rc;
...
while (1) {
/* Craft message to send in the pipe */
data = ...;
total_size = ...;
/* send data to the consumers */
rc = k_pipe_put(&my_pipe, data, total_size, &bytes_written,
sizeof(struct message_header), K_NO_WAIT);
if (rc < 0) {
/* Incomplete message header sent */
...
} else if (bytes_written < total_size) {
/* Some of the data was sent */
...
} else {
/* All data sent */
...
}
}
}
Reading from a Pipe
===================
Data is read from the pipe by calling :c:func:`k_pipe_get`.
The following code builds on the example above, and uses the pipe to
process data items generated by one or more producing threads.
.. code-block:: c
void consumer_thread(void)
{
unsigned char buffer[120];
size_t bytes_read;
struct message_header *header = (struct message_header *)buffer;
while (1) {
rc = k_pipe_get(&my_pipe, buffer, sizeof(buffer), &bytes_read,
sizeof(*header), K_MSEC(100));
if ((rc < 0) || (bytes_read < sizeof(*header))) {
/* Incomplete message header received */
...
} else if (header->num_data_bytes + sizeof(*header) > bytes_read) {
/* Only some data was received */
...
} else {
/* All data was received */
...
}
}
}
Flushing a Pipe's Buffer
========================
Data is flushed from the pipe's ring buffer by calling
:c:func:`k_pipe_buffer_flush`.
The following code builds on the examples above, and flushes the pipe's
buffer.
.. code-block:: c
void monitor_thread(void)
{
while (1) {
...
/* Pipe buffer contains stale data. Flush it. */
k_pipe_buffer_flush(&my_pipe);
...
}
}
Flushing a Pipe
===============
All data in the pipe is flushed by calling :c:func:`k_pipe_flush`.
The following code builds on the examples above, and flushes all the
data in the pipe.
.. code-block:: c
void monitor_thread(void)
{
while (1) {
...
/* Critical error detected. Flush the entire pipe to reset it. */
k_pipe_flush(&my_pipe);
...
}
}
Suggested uses
**************
Use a pipe to send streams of data between threads.
.. note::
A pipe can be used to transfer long streams of data if desired. However it
is often preferable to send pointers to large data items to avoid copying
the data. Copying large data items will negatively impact interrupt latency
as a spinlock is held while copying that data.
Configuration Options
*********************
Related configuration options:
* :kconfig:option:`CONFIG_PIPES`
API Reference
*************
.. doxygengroup:: pipe_apis
``` | /content/code_sandbox/doc/kernel/services/data_passing/pipes.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 1,568 |
```restructuredtext
.. _mailboxes_v2:
Mailboxes
#########
A :dfn:`mailbox` is a kernel object that provides enhanced message queue
capabilities that go beyond those of a message queue object.
A mailbox allows threads to send and receive messages of any size
synchronously or asynchronously.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of mailboxes can be defined (limited only by available RAM). Each
mailbox is referenced by its memory address.
A mailbox has the following key properties:
* A **send queue** of messages that have been sent but not yet received.
* A **receive queue** of threads that are waiting to receive a message.
A mailbox must be initialized before it can be used. This sets both of its
queues to empty.
A mailbox allows threads, but not ISRs, to exchange messages.
A thread that sends a message is known as the **sending thread**,
while a thread that receives the message is known as the **receiving thread**.
Each message may be received by only one thread (i.e. point-to-multipoint and
broadcast messaging are not supported).
Messages exchanged using a mailbox are handled non-anonymously,
allowing both threads participating in an exchange to know
(and even specify) the identity of the other thread.
Message Format
==============
A **message descriptor** is a data structure that specifies where a message's
data is located, and how the message is to be handled by the mailbox.
Both the sending thread and the receiving thread supply a message descriptor
when accessing a mailbox. The mailbox uses the message descriptors to perform
a message exchange between compatible sending and receiving threads.
The mailbox also updates certain message descriptor fields during the exchange,
allowing both threads to know what has occurred.
A mailbox message contains zero or more bytes of **message data**.
The size and format of the message data is application-defined, and can vary
from one message to the next.
A **message buffer** is an area of memory provided by the thread that sends or
receives the message data. An array or structure variable can often be used for
this purpose.
A message that has no message data is called an **empty message**.
.. note::
A message whose message buffer exists, but contains zero bytes of actual
data, is *not* an empty message.
Message Lifecycle
=================
The life cycle of a message is straightforward. A message is created when
it is given to a mailbox by the sending thread. The message is then owned
by the mailbox until it is given to a receiving thread. The receiving thread
may retrieve the message data when it receives the message from the mailbox,
or it may perform data retrieval during a second, subsequent mailbox operation.
Only when data retrieval has occurred is the message deleted by the mailbox.
Thread Compatibility
====================
A sending thread can specify the address of the thread to which the message
is sent, or send it to any thread by specifying :c:macro:`K_ANY`.
Likewise, a receiving thread can specify the address of the thread from which
it wishes to receive a message, or it can receive a message from any thread
by specifying :c:macro:`K_ANY`.
A message is exchanged only when the requirements of both the sending thread
and receiving thread are satisfied; such threads are said to be **compatible**.
For example, if thread A sends a message to thread B (and only thread B)
it will be received by thread B if thread B tries to receive a message
from thread A or if thread B tries to receive from any thread.
The exchange will not occur if thread B tries to receive a message
from thread C. The message can never be received by thread C,
even if it tries to receive a message from thread A (or from any thread).
Message Flow Control
====================
Mailbox messages can be exchanged **synchronously** or **asynchronously**.
In a synchronous exchange, the sending thread blocks until the message
has been fully processed by the receiving thread. In an asynchronous exchange,
the sending thread does not wait until the message has been received
by another thread before continuing; this allows the sending thread to do
other work (such as gather data that will be used in the next message)
*before* the message is given to a receiving thread and fully processed.
The technique used for a given message exchange is determined
by the sending thread.
The synchronous exchange technique provides an implicit form of flow control,
preventing a sending thread from generating messages faster than they can be
consumed by receiving threads. The asynchronous exchange technique provides an
explicit form of flow control, which allows a sending thread to determine
if a previously sent message still exists before sending a subsequent message.
Implementation
**************
Defining a Mailbox
==================
A mailbox is defined using a variable of type :c:struct:`k_mbox`.
It must then be initialized by calling :c:func:`k_mbox_init`.
The following code defines and initializes an empty mailbox.
.. code-block:: c
struct k_mbox my_mailbox;
k_mbox_init(&my_mailbox);
Alternatively, a mailbox can be defined and initialized at compile time
by calling :c:macro:`K_MBOX_DEFINE`.
The following code has the same effect as the code segment above.
.. code-block:: c
K_MBOX_DEFINE(my_mailbox);
Message Descriptors
===================
A message descriptor is a structure of type :c:struct:`k_mbox_msg`.
Only the fields listed below should be used; any other fields are for
internal mailbox use only.
*info*
A 32-bit value that is exchanged by the message sender and receiver,
and whose meaning is defined by the application. This exchange is
bi-directional, allowing the sender to pass a value to the receiver
during any message exchange, and allowing the receiver to pass a value
to the sender during a synchronous message exchange.
*size*
The message data size, in bytes. Set it to zero when sending an empty
message, or when sending a message buffer with no actual data. When
receiving a message, set it to the maximum amount of data desired, or to
zero if the message data is not wanted. The mailbox updates this field with
the actual number of data bytes exchanged once the message is received.
*tx_data*
A pointer to the sending thread's message buffer. Set it to ``NULL``
when sending an empty message. Leave this field uninitialized when
receiving a message.
*tx_target_thread*
The address of the desired receiving thread. Set it to :c:macro:`K_ANY`
to allow any thread to receive the message. Leave this field uninitialized
when receiving a message. The mailbox updates this field with
the actual receiver's address once the message is received.
*rx_source_thread*
The address of the desired sending thread. Set it to :c:macro:`K_ANY`
to receive a message sent by any thread. Leave this field uninitialized
when sending a message. The mailbox updates this field
with the actual sender's address when the message is put into
the mailbox.
Sending a Message
=================
A thread sends a message by first creating its message data, if any.
Next, the sending thread creates a message descriptor that characterizes
the message to be sent, as described in the previous section.
Finally, the sending thread calls a mailbox send API to initiate the
message exchange. The message is immediately given to a compatible receiving
thread, if one is currently waiting. Otherwise, the message is added
to the mailbox's send queue.
Any number of messages may exist simultaneously on a send queue.
The messages in the send queue are sorted according to the priority
of the sending thread. Messages of equal priority are sorted so that
the oldest message can be received first.
For a synchronous send operation, the operation normally completes when a
receiving thread has both received the message and retrieved the message data.
If the message is not received before the waiting period specified by the
sending thread is reached, the message is removed from the mailbox's send queue
and the send operation fails. When a send operation completes successfully
the sending thread can examine the message descriptor to determine
which thread received the message, how much data was exchanged,
and the application-defined info value supplied by the receiving thread.
.. note::
A synchronous send operation may block the sending thread indefinitely,
even when the thread specifies a maximum waiting period.
The waiting period only limits how long the mailbox waits
before the message is received by another thread. Once a message is received
there is *no* limit to the time the receiving thread may take to retrieve
the message data and unblock the sending thread.
For an asynchronous send operation, the operation always completes immediately.
This allows the sending thread to continue processing regardless of whether the
message is given to a receiving thread immediately or added to the send queue.
The sending thread may optionally specify a semaphore that the mailbox gives
when the message is deleted by the mailbox, for example, when the message
has been received and its data retrieved by a receiving thread.
The use of a semaphore allows the sending thread to easily implement
a flow control mechanism that ensures that the mailbox holds no more than
an application-specified number of messages from a sending thread
(or set of sending threads) at any point in time.
.. note::
A thread that sends a message asynchronously has no way to determine
which thread received the message, how much data was exchanged, or the
application-defined info value supplied by the receiving thread.
Sending an Empty Message
------------------------
This code uses a mailbox to synchronously pass 4-byte random values
to any consuming thread that wants one. The message "info" field is
large enough to carry the information being exchanged, so the data
portion of the message isn't used.
.. code-block:: c
void producer_thread(void)
{
struct k_mbox_msg send_msg;
while (1) {
/* generate random value to send */
uint32_t random_value = sys_rand32_get();
/* prepare to send empty message */
send_msg.info = random_value;
send_msg.size = 0;
send_msg.tx_data = NULL;
send_msg.tx_target_thread = K_ANY;
/* send message and wait until a consumer receives it */
k_mbox_put(&my_mailbox, &send_msg, K_FOREVER);
}
}
Sending Data Using a Message Buffer
-----------------------------------
This code uses a mailbox to synchronously pass variable-sized requests
from a producing thread to any consuming thread that wants it.
The message "info" field is used to exchange information about
the maximum size message buffer that each thread can handle.
.. code-block:: c
void producer_thread(void)
{
char buffer[100];
int buffer_bytes_used;
struct k_mbox_msg send_msg;
while (1) {
/* generate data to send */
...
buffer_bytes_used = ... ;
memcpy(buffer, source, buffer_bytes_used);
/* prepare to send message */
send_msg.info = buffer_bytes_used;
send_msg.size = buffer_bytes_used;
send_msg.tx_data = buffer;
send_msg.tx_target_thread = K_ANY;
/* send message and wait until a consumer receives it */
k_mbox_put(&my_mailbox, &send_msg, K_FOREVER);
/* info, size, and tx_target_thread fields have been updated */
/* verify that message data was fully received */
if (send_msg.size < buffer_bytes_used) {
printf("some message data dropped during transfer!");
printf("receiver only had room for %d bytes", send_msg.info);
}
}
}
Receiving a Message
===================
A thread receives a message by first creating a message descriptor that
characterizes the message it wants to receive. It then calls one of the
mailbox receive APIs. The mailbox searches its send queue and takes the message
from the first compatible thread it finds. If no compatible thread exists,
the receiving thread may choose to wait for one. If no compatible thread
appears before the waiting period specified by the receiving thread is reached,
the receive operation fails.
Once a receive operation completes successfully the receiving thread
can examine the message descriptor to determine which thread sent the message,
how much data was exchanged,
and the application-defined info value supplied by the sending thread.
Any number of receiving threads may wait simultaneously on a mailbox's
receive queue. The threads are sorted according to their priority;
threads of equal priority are sorted so that the one that started waiting
first can receive a message first.
.. note::
Receiving threads do not always receive messages in a first in, first out
(FIFO) order, due to the thread compatibility constraints specified by the
message descriptors. For example, if thread A waits to receive a message
only from thread X and then thread B waits to receive a message from
thread Y, an incoming message from thread Y to any thread will be given
to thread B and thread A will continue to wait.
The receiving thread controls both the quantity of data it retrieves from an
incoming message and where the data ends up. The thread may choose to take
all of the data in the message, to take only the initial part of the data,
or to take no data at all. Similarly, the thread may choose to have the data
copied into a message buffer of its choice.
The following sections outline various approaches a receiving thread may use
when retrieving message data.
Retrieving Data at Receive Time
-------------------------------
The most straightforward way for a thread to retrieve message data is to
specify a message buffer when the message is received. The thread indicates
both the location of the message buffer (which must not be ``NULL``)
and its size.
The mailbox copies the message's data to the message buffer as part of the
receive operation. If the message buffer is not big enough to contain all of the
message's data, any uncopied data is lost. If the message is not big enough
to fill all of the buffer with data, the unused portion of the message buffer is
left unchanged. In all cases the mailbox updates the receiving thread's
message descriptor to indicate how many data bytes were copied (if any).
The immediate data retrieval technique is best suited for small messages
where the maximum size of a message is known in advance.
The following code uses a mailbox to process variable-sized requests from any
producing thread, using the immediate data retrieval technique. The message
"info" field is used to exchange information about the maximum size
message buffer that each thread can handle.
.. code-block:: c
void consumer_thread(void)
{
struct k_mbox_msg recv_msg;
char buffer[100];
int i;
int total;
while (1) {
/* prepare to receive message */
recv_msg.info = 100;
recv_msg.size = 100;
recv_msg.rx_source_thread = K_ANY;
/* get a data item, waiting as long as needed */
k_mbox_get(&my_mailbox, &recv_msg, buffer, K_FOREVER);
/* info, size, and rx_source_thread fields have been updated */
/* verify that message data was fully received */
if (recv_msg.info != recv_msg.size) {
printf("some message data dropped during transfer!");
printf("sender tried to send %d bytes", recv_msg.info);
}
/* compute sum of all message bytes (from 0 to 100 of them) */
total = 0;
for (i = 0; i < recv_msg.size; i++) {
total += buffer[i];
}
}
}
Retrieving Data Later Using a Message Buffer
--------------------------------------------
A receiving thread may choose to defer message data retrieval at the time
the message is received, so that it can retrieve the data into a message buffer
at a later time.
The thread does this by specifying a message buffer location of ``NULL``
and a size indicating the maximum amount of data it is willing to retrieve
later.
The mailbox does not copy any message data as part of the receive operation.
However, the mailbox still updates the receiving thread's message descriptor
to indicate how many data bytes are available for retrieval.
The receiving thread must then respond as follows:
* If the message descriptor size is zero, then either the sender's message
contained no data or the receiving thread did not want to receive any data.
The receiving thread does not need to take any further action, since
the mailbox has already completed data retrieval and deleted the message.
* If the message descriptor size is non-zero and the receiving thread still
wants to retrieve the data, the thread must call :c:func:`k_mbox_data_get`
and supply a message buffer large enough to hold the data. The mailbox copies
the data into the message buffer and deletes the message.
* If the message descriptor size is non-zero and the receiving thread does *not*
want to retrieve the data, the thread must call :c:func:`k_mbox_data_get`
and specify a message buffer of ``NULL``. The mailbox deletes
the message without copying the data.
The subsequent data retrieval technique is suitable for applications where
immediate retrieval of message data is undesirable. For example, it can be
used when memory limitations make it impractical for the receiving thread to
always supply a message buffer capable of holding the largest possible
incoming message.
The following code uses a mailbox's deferred data retrieval mechanism
to get message data from a producing thread only if the message meets
certain criteria, thereby eliminating unneeded data copying. The message
"info" field supplied by the sender is used to classify the message.
.. code-block:: c
void consumer_thread(void)
{
struct k_mbox_msg recv_msg;
char buffer[10000];
while (1) {
/* prepare to receive message */
recv_msg.size = 10000;
recv_msg.rx_source_thread = K_ANY;
/* get message, but not its data */
k_mbox_get(&my_mailbox, &recv_msg, NULL, K_FOREVER);
/* get message data for only certain types of messages */
if (is_message_type_ok(recv_msg.info)) {
/* retrieve message data and delete the message */
k_mbox_data_get(&recv_msg, buffer);
/* process data in "buffer" */
...
} else {
/* ignore message data and delete the message */
k_mbox_data_get(&recv_msg, NULL);
}
}
}
Suggested Uses
**************
Use a mailbox to transfer data items between threads whenever the capabilities
of a message queue are insufficient.
Configuration Options
*********************
Related configuration options:
* :kconfig:option:`CONFIG_NUM_MBOX_ASYNC_MSGS`
API Reference
*************
.. doxygengroup:: mailbox_apis
``` | /content/code_sandbox/doc/kernel/services/data_passing/mailboxes.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 3,962 |
```restructuredtext
.. _ring_buffers_v2:
Ring Buffers
############
A :dfn:`ring buffer` is a circular buffer, whose contents are stored in
first-in-first-out order.
For circumstances where an application needs to implement asynchronous
"streaming" copying of data, Zephyr provides a ``struct ring_buf``
abstraction to manage copies of such data in and out of a shared
buffer of memory.
Two content data modes are supported:
* **Byte mode**: raw bytes can be enqueued and dequeued.
* **Data item mode**: Multiple 32-bit word data items with metadata
can be enqueued and dequeued from the ring buffer in
chunks of up to 1020 bytes. Each data item also has two
associated metadata values: a type identifier and a 16-bit
integer value, both of which are application-specific.
While the underlying data structure is the same, it is not
legal to mix these two modes on a single ring buffer instance. A ring
buffer initialized with a byte count must be used only with the
"bytes" API, one initialized with a word count must use the "items"
calls.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of ring buffers can be defined (limited only by available RAM). Each
ring buffer is referenced by its memory address.
A ring buffer has the following key properties:
* A **data buffer** of bytes or 32-bit words. The data buffer contains the raw
bytes or 32-bit words that have been added to the ring buffer but not yet
removed.
* A **data buffer size**, measured in bytes or 32-bit words. This governs
the maximum amount of data (including possible metadata values) the ring
buffer can hold.
A ring buffer must be initialized before it can be used. This sets its
data buffer to empty.
A ``struct ring_buf`` may be placed anywhere in user-accessible
memory, and must be initialized with :c:func:`ring_buf_init` or
:c:func:`ring_buf_item_init` before use. The initializer must be given a
region of user-controlled memory to use as the buffer itself. Note
carefully that the units of the buffer size parameter differ (bytes or
32-bit words) depending on how the ring buffer will be used later.
Macros that combine these steps in a single static declaration exist
for convenience.
:c:macro:`RING_BUF_DECLARE` will declare and statically initialize a ring
buffer with a specified byte count, while
:c:macro:`RING_BUF_ITEM_DECLARE` will declare and statically
initialize a buffer with a given count of 32-bit words.
:c:macro:`RING_BUF_ITEM_SIZEOF` will compute the size in 32-bit words
corresponding to a type or an expression; note that the result is
rounded up if the size is not a multiple of 32 bits.
"Bytes" data may be copied into the ring buffer using
:c:func:`ring_buf_put`, passing a data pointer and byte count. These
bytes will be copied into the buffer in order, as many as will fit in
the allocated buffer. The total number of bytes copied (which may be
fewer than provided) will be returned. Likewise :c:func:`ring_buf_get`
will copy bytes out of the ring buffer in the order that they were
written, into a user-provided buffer, returning the number of bytes
that were transferred.
To avoid multiply-copied-data situations, a "claim" API exists for
byte mode. :c:func:`ring_buf_put_claim` takes a byte size value from the
user and returns a pointer to memory internal to the ring buffer that
can be used to receive those bytes, along with a size of the
contiguous internal region (which may be smaller than requested). The
user can then copy data into that region at a later time without
assembling all the bytes in a single region first. When complete,
:c:func:`ring_buf_put_finish` can be used to signal the buffer that the
transfer is complete, passing the number of bytes actually
transferred. At this point a new transfer can be initiated.
Similarly, :c:func:`ring_buf_get_claim` returns a pointer to internal ring
buffer data from which the user can read without making a verbatim
copy, and :c:func:`ring_buf_get_finish` signals the buffer with how many
bytes have been consumed and allows for a new transfer to begin.
"Items" mode works similarly to bytes mode, except that all transfers
are in units of 32-bit words and all memory is assumed to be aligned
on 32-bit boundaries. The write and read operations are
:c:func:`ring_buf_item_put` and :c:func:`ring_buf_item_get`, and work
otherwise identically to the bytes mode APIs. There is no "claim" API
provided for items mode. One important difference is that unlike
:c:func:`ring_buf_put`, :c:func:`ring_buf_item_put` will not do a partial
transfer; it will return an error in the case where the provided data
does not fit in its entirety.
The user can manage the capacity of a ring buffer without modifying it
using either :c:func:`ring_buf_space_get` or :c:func:`ring_buf_item_space_get`
which returns the number of free bytes or free 32-bit item words respectively,
or by testing the :c:func:`ring_buf_is_empty` predicate.
Finally, a :c:func:`ring_buf_reset` call exists to immediately empty a
ring buffer, discarding the tracking of any bytes or items already
written to the buffer. It does not modify the memory contents of the
buffer itself, however.
Byte mode
=========
A **byte mode** ring buffer instance is declared using
:c:macro:`RING_BUF_DECLARE()` and accessed using:
:c:func:`ring_buf_put_claim`, :c:func:`ring_buf_put_finish`,
:c:func:`ring_buf_get_claim`, :c:func:`ring_buf_get_finish`,
:c:func:`ring_buf_put` and :c:func:`ring_buf_get`.
Data can be copied into the ring buffer (see
:c:func:`ring_buf_put`) or ring buffer memory can be used
directly by the user. In the latter case, the operation is split into three stages:
1. allocating the buffer (:c:func:`ring_buf_put_claim`) when the
user requests the destination location where data can be written.
#. writing the data by the user (e.g. buffer written by DMA).
#. indicating the amount of data written to the provided buffer
(:c:func:`ring_buf_put_finish`). The amount
can be less than or equal to the allocated amount.
Data can be retrieved from a ring buffer through copying
(see :c:func:`ring_buf_get`) or accessed directly by address. In the latter
case, the operation is split into three stages:
1. retrieving source location with valid data written to a ring buffer
(see :c:func:`ring_buf_get_claim`).
#. processing data
#. freeing processed data (see :c:func:`ring_buf_get_finish`).
The amount freed can be less than or equal to the retrieved amount.
Data item mode
==============
A **data item mode** ring buffer instance is declared using
:c:macro:`RING_BUF_ITEM_DECLARE()` and accessed using
:c:func:`ring_buf_item_put` and :c:func:`ring_buf_item_get`.
A ring buffer **data item** is an array of 32-bit words from 0 to 1020 bytes
in length. When a data item is **enqueued** (:c:func:`ring_buf_item_put`)
its contents are copied to the data buffer, along with its associated metadata
values (which occupy one additional 32-bit word). If the ring buffer has
insufficient space to hold the new data item the enqueue operation fails.
A data item is **dequeued** (:c:func:`ring_buf_item_get`) from a ring
buffer by removing the oldest enqueued item. The contents of the dequeued data
item, as well as its two metadata values, are copied to areas supplied by the
retriever. If the ring buffer is empty, or if the data array supplied by the
retriever is not large enough to hold the data item's data, the dequeue
operation fails.
Concurrency
===========
The ring buffer APIs do not provide any concurrency control.
Depending on usage (particularly with respect to number of concurrent
readers/writers) applications may need to protect the ring buffer with
mutexes and/or use semaphores to notify consumers that there is data to
read.
For the trivial case of one producer and one consumer, concurrency
control shouldn't be needed.
Internal Operation
==================
Data streamed through a ring buffer is always written to the next byte
within the buffer, wrapping around to the first element after reaching
the end, thus the "ring" structure. Internally, the ``struct
ring_buf`` contains its own buffer pointer and its size, and also a
set of "head" and "tail" indices representing where the next read and write
operations may occur.
This boundary is invisible to the user using the normal put/get APIs,
but becomes a barrier to the "claim" API, because obviously no
contiguous region can be returned that crosses the end of the buffer.
This can be surprising to application code, and produce performance
artifacts when transfers need to happen close to the end of the
buffer, as the number of calls to claim/finish needs to double for such
transfers.
Implementation
**************
Defining a Ring Buffer
======================
A ring buffer is defined using a variable of type :c:struct:`ring_buf`.
It must then be initialized by calling :c:func:`ring_buf_init` or
:c:func:`ring_buf_item_init`.
The following code defines and initializes an empty **data item mode** ring
buffer (which is part of a larger data structure). The ring buffer's data buffer
is capable of holding 64 words of data and metadata information.
.. code-block:: c
#define MY_RING_BUF_WORDS 64
struct my_struct {
struct ring_buf rb;
uint32_t buffer[MY_RING_BUF_WORDS];
...
};
struct my_struct ms;
void init_my_struct(void)
{
ring_buf_item_init(&ms.rb, MY_RING_BUF_WORDS, ms.buffer);
...
}
Alternatively, a ring buffer can be defined and initialized at compile time
using one of two macros at file scope. Each macro defines both the ring
buffer itself and its data buffer.
The following code defines a **data item mode** ring buffer:
.. code-block:: c
#define MY_RING_BUF_WORDS 93
RING_BUF_ITEM_DECLARE(my_ring_buf, MY_RING_BUF_WORDS);
The following code defines a ring buffer intended to be used for raw bytes:
.. code-block:: c
#define MY_RING_BUF_BYTES 93
RING_BUF_DECLARE(my_ring_buf, MY_RING_BUF_BYTES);
Enqueuing Data
==============
Bytes are copied to a **byte mode** ring buffer by calling
:c:func:`ring_buf_put`.
.. code-block:: c
uint8_t my_data[MY_RING_BUF_BYTES];
uint32_t ret;
ret = ring_buf_put(&ring_buf, my_data, MY_RING_BUF_BYTES);
if (ret != MY_RING_BUF_BYTES) {
/* not enough room, partial copy. */
...
}
Data can be added to a **byte mode** ring buffer by directly accessing the
ring buffer's memory. For example:
.. code-block:: c
uint32_t size;
uint32_t rx_size;
uint8_t *data;
int err;
/* Allocate buffer within a ring buffer memory. */
size = ring_buf_put_claim(&ring_buf, &data, MY_RING_BUF_BYTES);
/* Work directly on a ring buffer memory. */
rx_size = uart_rx(data, size);
/* Indicate amount of valid data. rx_size can be equal or less than size. */
err = ring_buf_put_finish(&ring_buf, rx_size);
if (err != 0) {
/* This shouldn't happen unless rx_size > size */
...
}
A data item is added to a ring buffer by calling
:c:func:`ring_buf_item_put`.
.. code-block:: c
uint32_t data[MY_DATA_WORDS];
int ret;
ret = ring_buf_item_put(&ring_buf, TYPE_FOO, 0, data, MY_DATA_WORDS);
if (ret == -EMSGSIZE) {
/* not enough room for the data item */
...
}
If the data item requires only the type or application-specific integer value
(i.e. it has no data array), a size of 0 and data pointer of ``NULL``
can be specified.
.. code-block:: c
int ret;
ret = ring_buf_item_put(&ring_buf, TYPE_BAR, 17, NULL, 0);
if (ret == -EMSGSIZE) {
/* not enough room for the data item */
...
}
Retrieving Data
===============
Data bytes are copied out from a **byte mode** ring buffer by calling
:c:func:`ring_buf_get`. For example:
.. code-block:: c
uint8_t my_data[MY_DATA_BYTES];
size_t ret;
ret = ring_buf_get(&ring_buf, my_data, sizeof(my_data));
if (ret != sizeof(my_data)) {
/* Fewer bytes copied. */
} else {
/* Requested amount of bytes retrieved. */
...
}
Data can be retrieved from a **byte mode** ring buffer by direct
operations on the ring buffer's memory. For example:
.. code-block:: c
uint32_t size;
uint32_t proc_size;
uint8_t *data;
int err;
/* Get buffer within a ring buffer memory. */
size = ring_buf_get_claim(&ring_buf, &data, MY_RING_BUF_BYTES);
/* Work directly on a ring buffer memory. */
proc_size = process(data, size);
/* Indicate amount of data that can be freed. proc_size can be equal or less
* than size.
*/
err = ring_buf_get_finish(&ring_buf, proc_size);
if (err != 0) {
/* proc_size exceeds amount of valid data in a ring buffer. */
...
}
A data item is removed from a ring buffer by calling
:c:func:`ring_buf_item_get`.
.. code-block:: c
uint32_t my_data[MY_DATA_WORDS];
uint16_t my_type;
uint8_t my_value;
uint8_t my_size;
int ret;
my_size = MY_DATA_WORDS;
ret = ring_buf_item_get(&ring_buf, &my_type, &my_value, my_data, &my_size);
if (ret == -EMSGSIZE) {
printk("Buffer is too small, need %d uint32_t\n", my_size);
} else if (ret == -EAGAIN) {
printk("Ring buffer is empty\n");
} else {
printk("Got item of type %u value %u of size %u dwords\n",
my_type, my_value, my_size);
...
}
Configuration Options
*********************
Related configuration options:
* :kconfig:option:`CONFIG_RING_BUFFER`: Enable ring buffer.
API Reference
*************
The following ring buffer APIs are provided by :zephyr_file:`include/zephyr/sys/ring_buffer.h`:
.. doxygengroup:: ring_buffer_apis
``` | /content/code_sandbox/doc/kernel/data_structures/ring_buffers.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 3,340 |
```restructuredtext
.. _mpsc_pbuf:
Multi Producer Single Consumer Packet Buffer
============================================
A :dfn:`Multi Producer Single Consumer Packet Buffer (MPSC_PBUF)` is a circular
buffer, whose contents are stored in first-in-first-out order. Variable size
packets are stored in the buffer. The packet buffer works under the assumption
that there is a single context that consumes the data. However, it is possible
that another context takes over to flush the data and never comes back (the
panic case).
A packet is produced in two steps: first the requested amount of data is
allocated and the producer fills it, then the packet is committed. Consuming a
packet is also performed in two steps: the consumer claims the packet, getting
a pointer to it and its length, and later on the packet is freed. This approach
reduces memory copying.
A :dfn:`MPSC Packet Buffer` has the following key properties:
* Allocate, commit scheme used for packet producing.
* Claim, free scheme used for packet consuming.
* Allocator ensures that contiguous memory of requested length is allocated.
* Following policies can be applied when requested space cannot be allocated:
* **Overwrite** - oldest entries are dropped until requested amount of memory can
be allocated. For each dropped packet user callback is called.
* **No overwrite** - When requested amount of space cannot be allocated,
allocation fails.
* Dedicated, optimized API for storing short packets.
* Allocation with timeout.
Internals
---------
Each packet in the buffer contains an ``MPSC_PBUF``-specific header which is used
for internal management. The header consists of two one-bit flags. In order to
optimize memory usage, the header can be added on top of the user header using
:c:macro:`MPSC_PBUF_HDR`, and the remaining bits in the first word can be
application specific. The header consists of the following flags:
* valid - bit set to one when packet contains valid user packet
* busy - bit set when packet is being consumed (claimed but not free)
Header state:
+-------+------+----------------------+
| valid | busy | description |
+-------+------+----------------------+
| 0 | 0 | space is free |
+-------+------+----------------------+
| 1 | 0 | valid packet |
+-------+------+----------------------+
| 1 | 1 | claimed valid packet |
+-------+------+----------------------+
| 0 | 1 | internal skip packet |
+-------+------+----------------------+
Packet buffer space contains free space, valid user packets and internal skip
packets. Internal skip packets indicate padding, e.g. at the end of the buffer.
Allocation
^^^^^^^^^^
Using a pair of read and write indexes, the available space is determined. If
space can be allocated, a temporary write index is moved and a pointer to a space
within the buffer is returned. The packet header is reset. If the allocation
required wrapping of the write index, a skip packet is added to the end of the
buffer. If space cannot be allocated and overwrite is disabled, then a ``NULL``
pointer is returned, or the context blocks if the allocation was made with a
timeout.
Allocation with overwrite
^^^^^^^^^^^^^^^^^^^^^^^^^
If overwrite is enabled, the oldest packets are dropped until the requested
amount of space can be allocated. When packets are dropped, the ``busy`` flag is
checked in the header to ensure that the currently consumed packet is not
overwritten. In that case, a skip packet is added before the busy packet and the
packets following the busy packet are dropped. When the busy packet is later
freed, this situation is detected and the packet is converted to a skip packet
to avoid double processing.
Usage
-----
Packet header definition
^^^^^^^^^^^^^^^^^^^^^^^^
Packet header details can be found in :zephyr_file:`include/zephyr/sys/mpsc_packet.h`.
API functions can be found in :zephyr_file:`include/zephyr/sys/mpsc_pbuf.h`. Headers
are split to avoid include spam when declaring the packet.
User header structure must start with internal header:
.. code-block:: c
#include <zephyr/sys/mpsc_packet.h>
struct foo_header {
MPSC_PBUF_HDR;
uint32_t length: 32 - MPSC_PBUF_HDR_BITS;
};
Packet buffer configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^
The configuration structure contains buffer details, configuration flags and
callbacks. The following callbacks are used by the packet buffer:
* Drop notification - callback called whenever a packet is dropped due to
overwrite.
* Get packet length - callback used to determine the packet length
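A configuration and initialization sketch is shown below. The field names
(``buf``, ``size``, ``notify_drop``, ``get_wlen``, ``flags``) and the
``MPSC_PBUF_MODE_OVERWRITE`` flag are assumptions here; verify them against
:zephyr_file:`include/zephyr/sys/mpsc_pbuf.h` before use.

```c
#include <zephyr/sys/mpsc_pbuf.h>
#include <zephyr/sys/mpsc_packet.h>
#include <zephyr/sys/util.h>

/* User packet header, starting with the internal MPSC_PBUF header. */
struct foo_header {
	MPSC_PBUF_HDR;
	uint32_t length: 32 - MPSC_PBUF_HDR_BITS;
};

static uint32_t pbuf_storage[128];
static struct mpsc_pbuf_buffer pbuf;

static void on_drop(const struct mpsc_pbuf_buffer *buffer,
		    const union mpsc_pbuf_generic *packet)
{
	/* Called for each packet dropped in overwrite mode. */
}

static uint32_t get_wlen(const union mpsc_pbuf_generic *packet)
{
	/* Return the packet length in 32-bit words from the user header. */
	return ((const struct foo_header *)packet)->length;
}

void init_pbuf(void)
{
	const struct mpsc_pbuf_buffer_config cfg = {
		.buf = pbuf_storage,
		.size = ARRAY_SIZE(pbuf_storage),
		.notify_drop = on_drop,
		.get_wlen = get_wlen,
		.flags = MPSC_PBUF_MODE_OVERWRITE,
	};

	mpsc_pbuf_init(&pbuf, &cfg);
}
```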
Packet producing
^^^^^^^^^^^^^^^^
Standard, two step method:
.. code-block:: c
foo_packet *packet = mpsc_pbuf_alloc(buffer, len, K_NO_WAIT);
fill_data(packet);
mpsc_pbuf_commit(buffer, packet);
Performance optimized storing of small packets:
* 32 bit word packet
* 32 bit word with pointer packet
Note that since packets are written by value, they should already have the
``valid`` bit set in the header.
.. code-block:: c
mpsc_pbuf_put_word(buffer, data);
mpsc_pbuf_put_word_ext(buffer, data, ptr);
Packet consuming
^^^^^^^^^^^^^^^^
Two step method:
.. code-block:: c
foo_packet *packet = mpsc_pbuf_claim(buffer);
process(packet);
mpsc_pbuf_free(buffer, packet);
``` | /content/code_sandbox/doc/kernel/data_structures/mpsc_pbuf.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 1,107 |
```restructuredtext
.. _data_structures:
Data Structures
###############
Zephyr provides a library of common general purpose data structures
used within the kernel, but useful by application code in general.
These include list and balanced tree structures for storing ordered
data, and a ring buffer for managing "byte stream" data in a clean
way.
Note that in general, the collections are implemented as "intrusive"
data structures. The "node" data is the only struct used by the
library code, and it does not store a pointer or other metadata to
indicate what user data is "owned" by that node. Instead, the
expectation is that the node will be itself embedded within a
user-defined struct. Macros are provided to retrieve a user struct
address from the embedded node pointer in a clean way. The purpose
behind this design is to allow the collections to be used in contexts
where dynamic allocation is disallowed (i.e. there is no need to
allocate node objects because the memory is provided by the user).
Note also that these libraries are uniformly unsynchronized; access to
them is not threadsafe by default. These are data structures, not
synchronization primitives. The expectation is that any locking
needed will be provided by the user.
.. toctree::
:maxdepth: 1
slist.rst
dlist.rst
mpsc_pbuf.rst
spsc_pbuf.rst
rbtree.rst
ring_buffers.rst
mpsc_lockfree.rst
spsc_lockfree.rst
``` | /content/code_sandbox/doc/kernel/data_structures/index.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 335 |
```restructuredtext
.. _spsc_lockfree:
Single Producer Single Consumer Lock Free Queue
===============================================
A :dfn:`Single Producer Single Consumer Lock Free Queue (SPSC)` is a lock free
atomic ring buffer based queue.
API Reference
*************
.. doxygengroup:: spsc_lockfree
``` | /content/code_sandbox/doc/kernel/data_structures/spsc_lockfree.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 63 |
```restructuredtext
.. _mpsc_lockfree:
Multi Producer Single Consumer Lock Free Queue
==============================================
A :dfn:`Multi Producer Single Consumer Lock Free Queue (MPSC)` is a lock free
intrusive queue based on atomic pointer swaps as described by Dmitry Vyukov
at `1024cores <path_to_url>`_.
API Reference
*************
.. doxygengroup:: mpsc_lockfree
``` | /content/code_sandbox/doc/kernel/data_structures/mpsc_lockfree.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 85 |
```restructuredtext
.. _dlist_api:
Double-linked List
==================
Similar to the single-linked list in many respects, Zephyr includes a
double-linked implementation. This provides the same algorithmic
behavior for all the existing slist operations, but also allows for
constant-time removal and insertion (at all points: before or after
the head, tail or any internal node). To do this, the list stores two
pointers per node, and thus has somewhat higher runtime code and
memory space needs.
A :c:type:`sys_dlist_t` struct may be instantiated by the user in any
accessible memory. It must be initialized with :c:func:`sys_dlist_init`
or :c:macro:`SYS_DLIST_STATIC_INIT` before use. The :c:type:`sys_dnode_t` struct
is expected to be provided by the user for any nodes added to the
list (typically embedded within the struct to be tracked, as described
above). It must be initialized in zeroed/bss memory or with
:c:func:`sys_dnode_init` before use.
Primitive operations may retrieve the head/tail of a list and the
next/prev pointers of a node with :c:func:`sys_dlist_peek_head`,
:c:func:`sys_dlist_peek_tail`, :c:func:`sys_dlist_peek_next` and
:c:func:`sys_dlist_peek_prev`. These can all return NULL where
appropriate (i.e. for empty lists, or nodes at the endpoints of the
list).
A dlist can be modified in constant time by removing a node with
:c:func:`sys_dlist_remove`, by adding a node to the head or tail of a list
with :c:func:`sys_dlist_prepend` and :c:func:`sys_dlist_append`, or by
inserting a node before an existing node with :c:func:`sys_dlist_insert`.
As for slist, each node in a dlist can be processed in a natural code
block style using :c:macro:`SYS_DLIST_FOR_EACH_NODE`. This macro also
exists in a "FROM_NODE" form which allows for iterating from a known
starting point, a "SAFE" variant that allows for removing the node
being inspected within the code block, a "CONTAINER" style that
provides the pointer to a containing struct instead of the raw node,
and a "CONTAINER_SAFE" variant that provides both properties.
Convenience utilities provided by dlist include
:c:func:`sys_dlist_insert_at`, which inserts a node that linearly searches
through a list to find the right insertion point, which is provided by
the user as a C callback function pointer, and
:c:func:`sys_dnode_is_linked`, which will affirmatively return whether or
not a node is currently linked into a dlist or not (via an
implementation that has zero overhead vs. the normal list processing).
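As a sketch, the operations above might be combined as follows; ``struct item``
and its fields are illustrative, not part of the dlist API.

```c
#include <zephyr/sys/dlist.h>
#include <zephyr/sys/printk.h>

struct item {
	sys_dnode_t node;
	int value;
};

static sys_dlist_t my_list = SYS_DLIST_STATIC_INIT(&my_list);

void dlist_demo(void)
{
	static struct item a = { .value = 1 };
	static struct item b = { .value = 2 };
	struct item *it;

	sys_dnode_init(&a.node);
	sys_dnode_init(&b.node);

	sys_dlist_append(&my_list, &a.node);
	sys_dlist_prepend(&my_list, &b.node);

	/* Iterate over containing structs; b (value 2) comes first. */
	SYS_DLIST_FOR_EACH_CONTAINER(&my_list, it, node) {
		printk("%d\n", it->value);
	}

	/* Constant-time removal; sys_dnode_is_linked(&a.node) is now false. */
	sys_dlist_remove(&a.node);
}
```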
Double-linked List Internals
----------------------------
Internally, the dlist implementation is minimal: the :c:type:`sys_dlist_t`
struct contains "head" and "tail" pointer fields, the :c:type:`sys_dnode_t`
contains "prev" and "next" pointers, and no other data is stored. But
in practice the two structs are internally identical, and the list
struct is inserted as a node into the list itself. This allows for a
very clean symmetry of operations:
* An empty list has backpointers to itself in the list struct, which
can be trivially detected.
* The head and tail of the list can be detected by comparing the
prev/next pointers of a node vs. the list struct address.
* An insertion or deletion never needs to check for the special case
of inserting at the head or tail. There are never any NULL pointers
within the list to be avoided. Exactly the same operations are run,
without tests or branches, for all list modification primitives.
Effectively, a dlist of N nodes can be thought of as a "ring" of "N+1"
nodes, where one node represents the list tracking struct.
.. figure:: dlist.png
:align: center
:alt: dlist example
:figclass: align-center
A dlist containing three elements. Note that the list struct
appears as a fourth "element" in the list.
.. figure:: dlist-single.png
:align: center
:alt: single-element dlist example
:figclass: align-center
A dlist containing just one element.
.. figure:: dlist-empty.png
:align: center
:alt: dlist example
:figclass: align-center
An empty dlist.
Doubly-linked List API Reference
--------------------------------
.. doxygengroup:: doubly-linked-list_apis
``` | /content/code_sandbox/doc/kernel/data_structures/dlist.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 1,033 |
```restructuredtext
.. _spsc_pbuf:
Single Producer Single Consumer Packet Buffer
=============================================
A :dfn:`Single Producer Single Consumer Packet Buffer (SPSC_PBUF)` is a circular
buffer, whose contents are stored in first-in-first-out order. Variable size
packets are stored in the buffer. Packet buffer works under assumption that there
is a single context that produces packets and a single context that consumes the
data.
Implementation is focused on performance and memory footprint.
Packets are added to the buffer using :c:func:`spsc_pbuf_write`, which copies
data into the buffer. If the buffer is full, an error is returned.
Packets are copied out of the buffer using :c:func:`spsc_pbuf_read`.
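A minimal usage sketch follows, assuming the init/write/read signatures in
:zephyr_file:`include/zephyr/sys/spsc_pbuf.h`; the exact prototypes and return
value conventions should be confirmed against that header.

```c
#include <zephyr/sys/spsc_pbuf.h>

static uint8_t storage[256] __aligned(4);

void spsc_demo(void)
{
	struct spsc_pbuf *pb;
	char rx[16];
	int len;

	/* The packet buffer metadata lives at the start of the memory. */
	pb = spsc_pbuf_init(storage, sizeof(storage), 0);

	/* Copy a packet in; a negative return indicates lack of space. */
	(void)spsc_pbuf_write(pb, "hello", 5);

	/* Copy the oldest packet out into rx. */
	len = spsc_pbuf_read(pb, rx, sizeof(rx));
	if (len > 0) {
		/* rx holds len bytes of packet data. */
	}
}
```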
``` | /content/code_sandbox/doc/kernel/data_structures/spsc_pbuf.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 154 |
```restructuredtext
.. _slist_api:
Single-linked List
==================
Zephyr provides a :c:type:`sys_slist_t` type for storing simple
singly-linked list data (i.e. data where each list element stores a
pointer to the next element, but not the previous one). This supports
constant-time access to the first (head) and last (tail) elements of
the list, insertion before the head and after the tail of the list and
constant time removal of the head. Removal of subsequent nodes
requires access to the "previous" pointer and thus can only be
performed in linear time by searching the list.
The :c:type:`sys_slist_t` struct may be instantiated by the user in any
accessible memory. It should be initialized with either
:c:func:`sys_slist_init` or by static assignment from
:c:macro:`SYS_SLIST_STATIC_INIT` before use. Its interior fields are opaque and
should not be accessed
by user code.
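Both initialization styles can be sketched as follows:

```c
#include <zephyr/sys/slist.h>

/* Static initialization at definition time... */
static sys_slist_t my_list = SYS_SLIST_STATIC_INIT(&my_list);

/* ...or runtime initialization of a list in arbitrary memory. */
void list_setup(sys_slist_t *list)
{
	sys_slist_init(list);
}
```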
The end nodes of a list may be retrieved with
:c:func:`sys_slist_peek_head` and :c:func:`sys_slist_peek_tail`, which will
return NULL if the list is empty, otherwise a pointer to a
:c:type:`sys_snode_t` struct.
The :c:type:`sys_snode_t` struct represents the data to be inserted. In
general, it is expected to be allocated/controlled by the user,
usually embedded within a struct which is to be added to the list.
The container struct pointer may be retrieved from a list node using
:c:macro:`SYS_SLIST_CONTAINER`, passing it the struct name of the
containing struct and the field name of the node. Internally, the
:c:type:`sys_snode_t` struct contains only a next pointer, which may be
accessed with :c:func:`sys_slist_peek_next`.
Lists may be modified by adding a single node at the head or tail with
:c:func:`sys_slist_prepend` and :c:func:`sys_slist_append`. They may also
have a node added to an interior point with :c:func:`sys_slist_insert`,
which inserts a new node after an existing one. Similarly
:c:func:`sys_slist_remove` will remove a node given a pointer to its
predecessor. These operations are all constant time.
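For example, with an illustrative ``struct event`` container (the struct and
function names here are examples, not part of the API; ``CONTAINER_OF`` is the
generic helper from :zephyr_file:`include/zephyr/sys/util.h`):

```c
#include <zephyr/sys/slist.h>
#include <zephyr/sys/util.h>

struct event {
	sys_snode_t node;
	uint32_t id;
};

static sys_slist_t events = SYS_SLIST_STATIC_INIT(&events);

/* Constant-time append to the tail of the list. */
void queue_event(struct event *evt)
{
	sys_slist_append(&events, &evt->node);
}

/* Peek the head node and recover the containing struct. */
struct event *peek_first(void)
{
	sys_snode_t *n = sys_slist_peek_head(&events);

	return (n == NULL) ? NULL : CONTAINER_OF(n, struct event, node);
}
```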
Convenience routines exist for more complicated modifications to a
list. :c:func:`sys_slist_merge_slist` will append an entire list to an
existing one. :c:func:`sys_slist_append_list` will append a bounded
subset of an existing list in constant time. And
:c:func:`sys_slist_find_and_remove` will search a list (in linear time)
for a given node and remove it if present.
Finally the slist implementation provides a set of "for each" macros
that allows for iterating over a list in a natural way without needing
to manually traverse the next pointers. :c:macro:`SYS_SLIST_FOR_EACH_NODE`
will enumerate every node in a list given a local variable to store
the node pointer. :c:macro:`SYS_SLIST_FOR_EACH_NODE_SAFE` behaves
similarly, but has a more complicated implementation that requires an
extra scratch variable for storage and allows the user to delete the
iterated node during the iteration. Each of those macros also exists
in a "container" variant (:c:macro:`SYS_SLIST_FOR_EACH_CONTAINER` and
:c:macro:`SYS_SLIST_FOR_EACH_CONTAINER_SAFE`) which assigns a local
variable of a type that matches the user's container struct and not
the node struct, performing the required offsets internally. And
:c:macro:`SYS_SLIST_ITERATE_FROM_NODE` exists to allow for enumerating a
node and all its successors only, without inspecting the earlier part
of the list.
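The "SAFE" container iteration described above might look like this sketch
(``struct event`` is illustrative):

```c
#include <zephyr/sys/slist.h>
#include <zephyr/sys/printk.h>

struct event {
	sys_snode_t node;
	uint32_t id;
};

void print_and_drain(sys_slist_t *list)
{
	struct event *evt, *tmp;

	/* The SAFE variant permits removing the current node mid-iteration. */
	SYS_SLIST_FOR_EACH_CONTAINER_SAFE(list, evt, tmp, node) {
		printk("event %u\n", evt->id);
		sys_slist_find_and_remove(list, &evt->node);
	}
}
```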
Single-linked List Internals
----------------------------
The slist code is designed to be minimal and conventional.
Internally, a :c:type:`sys_slist_t` struct is nothing more than a pair of
"head" and "tail" pointer fields. And a :c:type:`sys_snode_t` stores only a
single "next" pointer.
.. figure:: slist.png
:align: center
:alt: slist example
:figclass: align-center
An slist containing three elements.
.. figure:: slist-empty.png
:align: center
:alt: empty slist example
:figclass: align-center
An empty slist
The specific implementation of the list code, however, is done with an
internal "Z_GENLIST" template API which allows for extracting those
fields from arbitrary structures and emits an arbitrarily named set of
functions. This allows for implementing more complicated
single-linked list variants using the same basic primitives. The
genlist implementor is responsible for a custom implementation of the
primitive operations only: an "init" step for each struct, and a "get"
and "set" primitives for each of head, tail and next pointers on their
relevant structs. These inline functions are passed as parameters to
the genlist macro expansion.
Only one such variant, sflist, exists in Zephyr at the moment.
Flagged List
------------
The :c:type:`sys_sflist_t` is implemented using the described genlist
template API. With the exception of symbol naming ("sflist" instead
of "slist") and the additional API described next, it operates in all
ways identically to the slist API.
It adds the ability to associate exactly two bits of user defined
"flags" with each list node. These can be accessed and modified with
:c:func:`sys_sfnode_flags_get` and :c:func:`sys_sfnode_flags_set`.
Internally, the flags are stored unioned with the bottom bits of the
next pointer and incur no SRAM storage overhead when compared with the
simpler slist code.
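A sketch of flag usage follows; the exact prototypes of
:c:func:`sys_sfnode_flags_set` and :c:func:`sys_sfnode_flags_get` should be
confirmed against the sflist header, and ``EVT_URGENT`` is an invented example
flag.

```c
#include <zephyr/sys/sflist.h>

#define EVT_URGENT 0x1  /* only values 0-3 fit in the two flag bits */

struct event {
	sys_sfnode_t node;
	uint32_t id;
};

void tag_urgent(struct event *evt)
{
	sys_sfnode_flags_set(&evt->node, EVT_URGENT);
}

bool is_urgent(struct event *evt)
{
	return (sys_sfnode_flags_get(&evt->node) & EVT_URGENT) != 0;
}
```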
Single-linked List API Reference
--------------------------------
.. doxygengroup:: single-linked-list_apis
Flagged List API Reference
--------------------------------
.. doxygengroup:: flagged-single-linked-list_apis
``` | /content/code_sandbox/doc/kernel/data_structures/slist.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 1,325 |
```restructuredtext
.. _rbtree_api:
Balanced Red/Black Tree
=======================
For circumstances where sorted containers may become large at runtime,
a list becomes problematic due to algorithmic costs of searching it.
For these situations, Zephyr provides a balanced tree implementation
which has runtimes on search and removal operations bounded at
O(log2(N)) for a tree of size N. This is implemented using a
conventional red/black tree as described by multiple academic sources.
The :c:struct:`rbtree` tracking struct for a rbtree may be initialized
anywhere in user accessible memory. It should contain only zero bits
before first use. No specific initialization API is needed or
required.
Unlike a list, where position is explicit, the ordering of nodes
within an rbtree must be provided as a predicate function by the user.
A function of type :c:func:`rb_lessthan_t` should be assigned to the
``lessthan_fn`` field of the :c:struct:`rbtree` struct before any tree
operations are attempted. This function should, as its name suggests,
return a boolean True value if the first node argument is "less than"
the second in the ordering desired by the tree. Note that "equal" is
not allowed, nodes within a tree must have a single fixed order for
the algorithm to work correctly.
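A comparator sketch is shown below; ``struct task`` is illustrative. Note the
address-based tie-break, which keeps the order strict when two nodes compare
equal on the user key.

```c
#include <zephyr/sys/rb.h>
#include <zephyr/sys/util.h>

struct task {
	struct rbnode node;
	uint32_t deadline;
};

static bool task_lessthan(struct rbnode *a, struct rbnode *b)
{
	struct task *ta = CONTAINER_OF(a, struct task, node);
	struct task *tb = CONTAINER_OF(b, struct task, node);

	/* Impose a strict total order: break deadline ties by address. */
	if (ta->deadline != tb->deadline) {
		return ta->deadline < tb->deadline;
	}
	return ta < tb;
}

/* Zero-filled except for the mandatory comparison predicate. */
static struct rbtree tasks = { .lessthan_fn = task_lessthan };
```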
As with the slist and dlist containers, nodes within an rbtree are
represented as a :c:struct:`rbnode` structure which exists in
user-managed memory, typically embedded within the data structure
being tracked in the tree. Unlike the list code, the data within an
rbnode is entirely opaque. It is not possible for the user to extract
the binary tree topology and "manually" traverse the tree as it is for
a list.
Nodes can be inserted into a tree with :c:func:`rb_insert` and removed
with :c:func:`rb_remove`. Access to the "first" and "last" nodes within a
tree (in the sense of the order defined by the comparison function) is
provided by :c:func:`rb_get_min` and :c:func:`rb_get_max`. There is also a
predicate, :c:func:`rb_contains`, which returns a boolean True if the
provided node pointer exists as an element within the tree. As
described above, all of these routines are guaranteed to have at most
log time complexity in the size of the tree.
There are two mechanisms provided for enumerating all elements in an
rbtree. The first, :c:func:`rb_walk`, is a simple callback implementation
where the caller specifies a C function pointer and an untyped
argument to be passed to it, and the tree code calls that function for
each node in order. This has the advantage of a very simple
implementation, at the cost of a somewhat more cumbersome API for the
user (not unlike ISO C's :c:func:`bsearch` routine). It is a recursive
implementation, however, and is thus not always available in
environments that forbid the use of unbounded stack techniques like
recursion.
There is also a :c:macro:`RB_FOR_EACH` iterator provided, which, like the
similar APIs for the lists, works to iterate over a list in a more
natural way, using a nested code block instead of a callback. It is
also nonrecursive, though it requires log-sized space on the stack by
default (however, this can be configured to use a fixed/maximally size
buffer instead where needed to avoid the dynamic allocation). As with
the lists, this is also available in a :c:macro:`RB_FOR_EACH_CONTAINER`
variant which enumerates using a pointer to a container field and not
the raw node pointer.
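Container-based iteration can be sketched as follows (``struct task`` and its
``deadline`` field are illustrative):

```c
#include <zephyr/sys/rb.h>
#include <zephyr/sys/printk.h>

struct task {
	struct rbnode node;
	uint32_t deadline;
};

void visit_all(struct rbtree *tree)
{
	struct task *t;

	/* Visits nodes in the order defined by the tree's lessthan_fn. */
	RB_FOR_EACH_CONTAINER(tree, t, node) {
		printk("deadline: %u\n", t->deadline);
	}
}
```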
Tree Internals
--------------
As described, the Zephyr rbtree implementation is a conventional
red/black tree as described pervasively in academic sources. Low
level details about the algorithm are out of scope for this document,
as they match existing conventions. This discussion will be limited
to details notable or specific to the Zephyr implementation.
The core invariant guaranteed by the tree is that the path from the root of
the tree to any leaf is no more than twice as long as the path to any
other leaf. This is achieved by associating one bit of "color" with
each node, either red or black, and enforcing a rule that no red child
can be a child of another red child (i.e. that the number of black
nodes on any path to the root must be the same, and that no more than
that number of "extra" red nodes may be present). This rule is
enforced by a set of rotation rules used to "fix" trees following
modification.
.. figure:: rbtree.png
:align: center
:alt: rbtree example
:figclass: align-center
A maximally unbalanced rbtree with a black height of two. No more
nodes can be added underneath the rightmost node without
rebalancing.
These rotations are conceptually implemented on top of a primitive
that "swaps" the position of one node with another in the list.
Typical implementations effect this by simply swapping the nodes
internal "data" pointers, but because the Zephyr :c:struct:`rbnode` is
intrusive, that cannot work. Zephyr must include somewhat more
elaborate code to handle the edge cases (for example, one swapped node
can be the root, or the two may already be parent/child).
The :c:struct:`rbnode` struct for a Zephyr rbtree contains only two
pointers, representing the "left", and "right" children of a node
within the binary tree. Traversal of a tree for rebalancing following
modification, however, routinely requires the ability to iterate
"upwards" from a node as well. It is very common for red/black trees
in the industry to store a third "parent" pointer for this purpose.
Zephyr avoids this requirement by building a "stack" of node pointers
locally as it traverses downward through the tree and updating it
appropriately as modifications are made. So a Zephyr rbtree can be
implemented with no more runtime storage overhead than a dlist.
These properties, of a balanced tree data structure that works with
only two pointers of data per node and that works without any need for
a memory allocation API, are quite rare in the industry and are
somewhat unique to Zephyr.
Red/Black Tree API Reference
--------------------------------
.. doxygengroup:: rbtree_apis
``` | /content/code_sandbox/doc/kernel/data_structures/rbtree.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 1,430 |
```restructuredtext
.. _sys_mem_blocks:
Memory Blocks Allocator
#######################
The Memory Blocks Allocator allows memory blocks to be dynamically
allocated from a designated memory region, where:
* All memory blocks have a single fixed size.
* Multiple blocks can be allocated or freed at the same time.
* A group of blocks allocated together may not be contiguous.
This is useful for operations such as scatter-gather DMA transfers.
* Bookkeeping of allocated blocks is done outside of the associated
buffer (unlike memory slab). This allows the buffer to reside in
memory regions where these can be powered down to conserve energy.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of memory blocks allocators can be defined (limited only by
available RAM). Each allocator is referenced by its memory address.
A memory blocks allocator has the following key properties:
* The **block size** of each block, measured in bytes.
It must be at least 4N bytes long, where N is greater than 0.
* The **number of blocks** available for allocation.
It must be greater than zero.
* A **buffer** that provides the memory for the allocator's blocks.
It must be at least "block size" times "number of blocks" bytes long.
* A **blocks bitmap** to keep track of which block has been allocated.
The buffer must be aligned to an N-byte boundary, where N is a power of 2
larger than 2 (i.e. 4, 8, 16, ...). To ensure that all memory blocks in
the buffer are similarly aligned to this boundary, the block size must
also be a multiple of N.
Due to the use of internal bookkeeping structures and their creation,
each memory blocks allocator must be declared and defined at compile time.
Internal Operation
==================
Each buffer associated with an allocator is an array of fixed-size blocks,
with no wasted space between the blocks.
The memory blocks allocator keeps track of unallocated blocks using
a bitmap.
Memory Blocks Allocator
***********************
Internally, the memory blocks allocator uses a bitmap to keep track of
which blocks have been allocated. Each allocator, utilizing
the ``sys_bitarray`` interface, gets memory blocks one by one from
the backing buffer up to the requested number of blocks.
All the metadata about an allocator is stored outside of the backing
buffer. This allows the memory region of the backing buffer to be
powered down to conserve energy, as the allocator code never touches
the content of the buffer.
Multi Memory Blocks Allocator Group
***********************************
The Multi Memory Blocks Allocator Group utility functions provide
a convenient way to manage a group of allocators. A custom allocator
choosing function is used to select which allocator to use from
the group.
An allocator group should be initialized at runtime via
:c:func:`sys_multi_mem_blocks_init`. Each allocator can then be
added via :c:func:`sys_multi_mem_blocks_add_allocator`.
To allocate memory blocks from group,
:c:func:`sys_multi_mem_blocks_alloc` is called with an opaque
"configuration" parameter. This parameter is passed directly to
the allocator choosing function so that an appropriate allocator
can be chosen. After an allocator is chosen, memory blocks are
allocated via :c:func:`sys_mem_blocks_alloc`.
Allocated memory blocks can be freed via
:c:func:`sys_multi_mem_blocks_free`. The caller does not need to
pass a configuration parameter. The allocator code matches
the passed in memory addresses to find the correct allocator
and then memory blocks are freed via :c:func:`sys_mem_blocks_free`.
Usage
*****
Defining a Memory Blocks Allocator
==================================
A memory blocks allocator is defined using a variable of type
:c:type:`sys_mem_blocks_t`. It needs to be defined and initialized
at compile time by calling :c:macro:`SYS_MEM_BLOCKS_DEFINE`.
The following code defines and initializes a memory blocks allocator
which has 4 blocks that are 64 bytes long, each of which is aligned
to a 4-byte boundary:
.. code-block:: c
SYS_MEM_BLOCKS_DEFINE(allocator, 64, 4, 4);
Similarly, you can define a memory blocks allocator in private scope:
.. code-block:: c
SYS_MEM_BLOCKS_DEFINE_STATIC(static_allocator, 64, 4, 4);
A pre-defined buffer can also be provided to the allocator where
the buffer can be placed separately. Note that the alignment of
the buffer needs to be done at its definition.
.. code-block:: c
uint8_t __aligned(4) backing_buffer[64 * 4];
SYS_MEM_BLOCKS_DEFINE_WITH_EXT_BUF(allocator, 64, 4, backing_buffer);
Allocating Memory Blocks
========================
Memory blocks can be allocated by calling :c:func:`sys_mem_blocks_alloc`.
.. code-block:: c
int ret;
uintptr_t blocks[2];
ret = sys_mem_blocks_alloc(allocator, 2, blocks);
If ``ret == 0``, the array ``blocks`` will contain an array of memory
addresses pointing to the allocated blocks.
Releasing a Memory Block
========================
Memory blocks are released by calling :c:func:`sys_mem_blocks_free`.
The following code builds on the example above which allocates 2 memory blocks,
then releases them once they are no longer needed.
.. code-block:: c
int ret;
uintptr_t blocks[2];
ret = sys_mem_blocks_alloc(allocator, 2, blocks);
... /* perform some operations on the allocated memory blocks */
ret = sys_mem_blocks_free(allocator, 2, blocks);
Using Multi Memory Blocks Allocator Group
=========================================
The following code demonstrates how to initialize an allocator group:
.. code-block:: c
sys_mem_blocks_t *choice_fn(struct sys_multi_mem_blocks *group, void *cfg)
{
...
}
SYS_MEM_BLOCKS_DEFINE(allocator0, 64, 4, 4);
SYS_MEM_BLOCKS_DEFINE(allocator1, 64, 4, 4);
static sys_multi_mem_blocks_t alloc_group;
sys_multi_mem_blocks_init(&alloc_group, choice_fn);
sys_multi_mem_blocks_add_allocator(&alloc_group, &allocator0);
sys_multi_mem_blocks_add_allocator(&alloc_group, &allocator1);
To allocate and free memory blocks from the group:
.. code-block:: c
int ret;
uintptr_t blocks[1];
size_t blk_size;
ret = sys_multi_mem_blocks_alloc(&alloc_group, UINT_TO_POINTER(0),
1, blocks, &blk_size);
ret = sys_multi_mem_blocks_free(&alloc_group, 1, blocks);
API Reference
*************
.. doxygengroup:: mem_blocks_apis
``` | /content/code_sandbox/doc/kernel/memory_management/sys_mem_blocks.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 1,412 |
```restructuredtext
.. _memory_management_api:
Memory Management
#################
The following contains various topics regarding memory management.
.. toctree::
:maxdepth: 1
heap.rst
shared_multi_heap.rst
slabs.rst
sys_mem_blocks.rst
demand_paging.rst
virtual_memory.rst
``` | /content/code_sandbox/doc/kernel/memory_management/index.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 72 |
```restructuredtext
.. _heap_v2:
Memory Heaps
############
Zephyr provides a collection of utilities that allow threads to
dynamically allocate memory.
Synchronized Heap Allocator
***************************
Creating a Heap
===============
The simplest way to define a heap is statically, with the
:c:macro:`K_HEAP_DEFINE` macro. This creates a static :c:struct:`k_heap` variable
with a given name that manages a memory region of the
specified size.
Heaps can also be created to manage arbitrary regions of
application-controlled memory using :c:func:`k_heap_init`.
Allocating Memory
=================
Memory can be allocated from a heap using :c:func:`k_heap_alloc`,
passing it the address of the heap object and the number of bytes
desired. This functions similarly to standard C ``malloc()``,
returning a NULL pointer on an allocation failure.
The heap supports blocking operation, allowing threads to go to sleep
until memory is available. The final argument is a
:c:type:`k_timeout_t` timeout value indicating how long the thread may
sleep before returning, or else one of the constant timeout values
:c:macro:`K_NO_WAIT` or :c:macro:`K_FOREVER`.
Releasing Memory
================
Memory allocated with :c:func:`k_heap_alloc` must be released using
:c:func:`k_heap_free`. Similar to standard C ``free()``, the pointer
provided must be either a ``NULL`` value or a pointer previously
returned by :c:func:`k_heap_alloc` for the same heap. Freeing a
``NULL`` value is defined to have no effect.
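As an illustrative sketch (the heap name and sizes here are arbitrary), a
heap can be defined statically, allocated from with a timeout, and
released as follows:

.. code-block:: c

   /* Statically define a heap managing 1 KiB of memory. */
   K_HEAP_DEFINE(my_heap, 1024);

   void heap_user(void)
   {
           /* Wait up to 100 ms for 256 bytes to become available. */
           void *mem = k_heap_alloc(&my_heap, 256, K_MSEC(100));

           if (mem != NULL) {
                   /* ... use the memory ... */
                   k_heap_free(&my_heap, mem);
           }
   }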
Low Level Heap Allocator
************************
The underlying implementation of the :c:struct:`k_heap`
abstraction is provided by a data structure named :c:struct:`sys_heap`. This
implements exactly the same allocation semantics, but
provides no kernel synchronization tools. It is available for
applications that want to manage their own blocks of memory in
contexts (for example, userspace) where synchronization is unavailable
or more complicated. Unlike ``k_heap``, all calls to any ``sys_heap``
functions on a single heap must be serialized by the caller.
Simultaneous use from separate threads is disallowed.
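For example, a ``sys_heap`` can manage an application-owned buffer as in
the following sketch (the buffer size and names are illustrative, and all
calls must be serialized by the caller):

.. code-block:: c

   static char heap_mem[2048];
   static struct sys_heap app_heap;

   void app_heap_example(void)
   {
           /* Manage an application-owned buffer; unsynchronized. */
           sys_heap_init(&app_heap, heap_mem, sizeof(heap_mem));

           void *block = sys_heap_alloc(&app_heap, 128);

           if (block != NULL) {
                   /* ... use block ... */
                   sys_heap_free(&app_heap, block);
           }
   }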
Implementation
==============
Internally, the ``sys_heap`` memory block is partitioned into "chunks"
of 8 bytes. All allocations are made out of a contiguous region of
chunks. The first chunk of every allocation or unused block is
prefixed by a chunk header that stores the length of the chunk, the
length of the next lower ("left") chunk in physical memory, a bit
indicating whether the chunk is in use, and chunk-indexed link
pointers to the previous and next chunk in a "free list" to which
unused chunks are added.
The heap code takes reasonable care to avoid fragmentation. Free
block lists are stored in "buckets" by their size, each bucket storing
blocks within one power of two (i.e. a bucket for blocks of 3-4
chunks, another for 5-8, 9-16, etc...). This allows new allocations to
be made from the smallest/most-fragmented blocks available. Also, as
allocations are freed and added to the heap, they are automatically
combined with adjacent free blocks to prevent fragmentation.
All metadata is stored at the beginning of the contiguous block of
heap memory, including the variable-length list of bucket list heads
(which depend on heap size). The only external memory required is the
:c:struct:`sys_heap` structure itself.
The ``sys_heap`` functions are unsynchronized. Care must be taken by
any users to prevent concurrent access. Only one context may be
inside one of the API functions at a time.
The heap code takes care to present high performance and reliable
latency. All ``sys_heap`` API functions are guaranteed to complete
within constant time. On typical architectures, they will all
complete within 1-200 cycles. One complexity is that the search of
the minimum bucket size for an allocation (the set of free blocks that
"might fit") has a compile-time upper bound of iterations to prevent
unbounded list searches, at the expense of some fragmentation
resistance. This :kconfig:option:`CONFIG_SYS_HEAP_ALLOC_LOOPS` value may be
chosen by the user at build time, and defaults to a value of 3.
Multi-Heap Wrapper Utility
**************************
The ``sys_heap`` utility requires that all managed memory be in a
single contiguous block. It is common for complicated microcontroller
applications to have more complicated memory setups that they still
want to manage dynamically as a "heap". For example, the memory might
exist as separate discontiguous regions, different areas may have
different cache, performance or power behavior, peripheral devices may
only be able to perform DMA to certain regions, etc...
For those situations, Zephyr provides a ``sys_multi_heap`` utility.
Effectively this is a simple wrapper around a set of one or more
``sys_heap`` objects. It should be initialized after its child heaps
via :c:func:`sys_multi_heap_init`, after which each heap can be added
to the managed set via :c:func:`sys_multi_heap_add_heap`. No
destruction utility is provided; just as for ``sys_heap``,
applications that want to destroy a multi heap should simply ensure
all allocated blocks are freed (or at least will never be used again)
and repurpose the underlying memory for another usage.
It has a single pair of allocation entry points,
:c:func:`sys_multi_heap_alloc` and
:c:func:`sys_multi_heap_aligned_alloc`. These behave identically to
the ``sys_heap`` functions with similar names, except that they also
accept an opaque "configuration" parameter. This pointer is
uninspected by the multi heap code itself; instead it is passed to a
callback function provided at initialization time. This
application-provided callback is responsible for doing the underlying
allocation from one of the managed heaps, and may use the
configuration parameter in any way it likes to make that decision.
When no longer needed, an allocated block may be freed via
:c:func:`sys_multi_heap_free`. The application does not need to pass
a configuration parameter; memory allocated from any of the managed
``sys_heap`` objects may be freed in the same way.
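Putting these pieces together, a minimal multi heap setup might look like
the following sketch (the choice callback, heap names, and the policy
keyed off ``cfg`` are illustrative assumptions, not a prescribed usage):

.. code-block:: c

   static struct sys_heap fast_heap;  /* previously set up with sys_heap_init() */
   static struct sys_heap slow_heap;  /* previously set up with sys_heap_init() */
   static struct sys_multi_heap multi;

   /* Application policy: pick a child heap based on the opaque cfg value
    * and perform the underlying allocation from it.
    */
   static void *choose(struct sys_multi_heap *mheap, void *cfg,
                       size_t align, size_t size)
   {
           struct sys_heap *h = (cfg != NULL) ? &fast_heap : &slow_heap;

           return sys_heap_aligned_alloc(h, align, size);
   }

   void multi_heap_example(void)
   {
           sys_multi_heap_init(&multi, choose);
           sys_multi_heap_add_heap(&multi, &fast_heap, NULL);
           sys_multi_heap_add_heap(&multi, &slow_heap, NULL);

           /* The non-NULL cfg selects the fast heap in this policy. */
           void *p = sys_multi_heap_alloc(&multi, (void *)1, 64);

           sys_multi_heap_free(&multi, p);
   }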
System Heap
***********
The :dfn:`system heap` is a predefined memory allocator that allows
threads to dynamically allocate memory from a common memory region in
a :c:func:`malloc`-like manner.
Only a single system heap is defined. Unlike other heaps or memory
pools, the system heap cannot be directly referenced using its
memory address.
The size of the system heap is configurable to arbitrary sizes,
subject to space availability.
A thread can dynamically allocate a chunk of heap memory by calling
:c:func:`k_malloc`. The address of the allocated chunk is
guaranteed to be aligned on a multiple of pointer sizes. If a suitable
chunk of heap memory cannot be found ``NULL`` is returned.
When the thread is finished with a chunk of heap memory it can release
the chunk back to the system heap by calling :c:func:`k_free`.
Defining the Heap Memory Pool
=============================
The size of the heap memory pool is specified using the
:kconfig:option:`CONFIG_HEAP_MEM_POOL_SIZE` configuration option.
By default, the heap memory pool size is zero bytes. This value instructs
the kernel not to define the heap memory pool object. The maximum size is limited
by the amount of available memory in the system. The project build will fail in
the link stage if the size specified cannot be supported.
In addition, each subsystem (board, driver, library, etc) can set a custom
requirement by defining a Kconfig option with the prefix
``HEAP_MEM_POOL_ADD_SIZE_`` (this value is in bytes). If multiple subsystems
specify custom values, the sum of these will be used as the minimum requirement.
If the application tries to set a value that's less than the minimum value, this
will be ignored and the minimum value will be used instead.
To force a smaller than minimum value to be used, the application may enable the
:kconfig:option:`CONFIG_HEAP_MEM_POOL_IGNORE_MIN` option. This can be useful
when optimizing the heap size and the minimum requirement can be more accurately
determined for a specific application.
Allocating Memory
=================
A chunk of heap memory is allocated by calling :c:func:`k_malloc`.
The following code allocates a 200 byte chunk of heap memory, then fills it
with zeros. A warning is issued if a suitable chunk is not obtained.
.. code-block:: c
char *mem_ptr;
mem_ptr = k_malloc(200);
if (mem_ptr != NULL) {
memset(mem_ptr, 0, 200);
...
} else {
printf("Memory not allocated");
}
Releasing Memory
================
A chunk of heap memory is released by calling :c:func:`k_free`.
The following code allocates a 75 byte chunk of memory, then releases it
once it is no longer needed.
.. code-block:: c
char *mem_ptr;
mem_ptr = k_malloc(75);
... /* use memory block */
k_free(mem_ptr);
Suggested Uses
==============
Use the heap memory pool to dynamically allocate memory in a
:c:func:`malloc`-like manner.
Configuration Options
=====================
Related configuration options:
* :kconfig:option:`CONFIG_HEAP_MEM_POOL_SIZE`
API Reference
=============
.. doxygengroup:: heap_apis
.. doxygengroup:: low_level_heap_allocator
.. doxygengroup:: multi_heap_wrapper
Heap listener
*************
.. doxygengroup:: heap_listener_apis
``` | /content/code_sandbox/doc/kernel/memory_management/heap.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 2,095 |
```restructuredtext
.. _memory_management_shared_multi_heap:
Shared Multi Heap
#################
The shared multi-heap memory pool manager uses the multi-heap allocator to
manage a set of reserved memory regions with different capabilities /
attributes (cacheable, non-cacheable, etc...).
All the different regions can be added at run-time to the shared multi-heap
pool providing an opaque "attribute" value (an integer or enum value) that can
be used by drivers or applications to request memory with certain capabilities.
This framework is commonly used as follows:
1. At boot time some platform code initializes the shared multi-heap framework
   using :c:func:`shared_multi_heap_pool_init()` and adds the memory regions to
   the pool with :c:func:`shared_multi_heap_add()`, possibly gathering the
needed information for the regions from the DT.
2. Each memory region is encoded in a :c:struct:`shared_multi_heap_region`
   structure. This structure also carries an opaque, user-defined
   integer value that is used to define the region capabilities (for example:
   cacheability, CPU affinity, etc...)
.. code-block:: c
// Init the shared multi-heap pool
shared_multi_heap_pool_init()
// Fill the struct with the data for cacheable memory
struct shared_multi_heap_region cacheable_r0 = {
.addr = addr_r0,
.size = size_r0,
.attr = SMH_REG_ATTR_CACHEABLE,
};
// Add the region to the pool
shared_multi_heap_add(&cacheable_r0, NULL);
// Add another cacheable region
struct shared_multi_heap_region cacheable_r1 = {
.addr = addr_r1,
.size = size_r1,
.attr = SMH_REG_ATTR_CACHEABLE,
};
shared_multi_heap_add(&cacheable_r1, NULL);
// Add a non-cacheable region
struct shared_multi_heap_region non_cacheable_r2 = {
.addr = addr_r2,
.size = size_r2,
.attr = SMH_REG_ATTR_NON_CACHEABLE,
};
shared_multi_heap_add(&non_cacheable_r2, NULL);
3. When a driver or application needs some dynamic memory with a certain
capability, it can use :c:func:`shared_multi_heap_alloc()` (or the aligned
version) to request the memory by using the opaque parameter to select the
correct set of attributes for the needed memory. The framework will take
care of selecting the correct heap (thus memory region) to carve memory
from, based on the opaque parameter and the runtime state of the heaps
(available memory, heap state, etc...)
.. code-block:: c
// Allocate 4K from cacheable memory
shared_multi_heap_alloc(SMH_REG_ATTR_CACHEABLE, 0x1000);
// Allocate 4K from non-cacheable memory
shared_multi_heap_alloc(SMH_REG_ATTR_NON_CACHEABLE, 0x1000);
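Blocks obtained from the pool are released with
:c:func:`shared_multi_heap_free`; no attribute needs to be passed on free,
since the framework determines which heap the block came from. A brief
sketch:

.. code-block:: c

   // Allocate 4K from non-cacheable memory...
   void *buf = shared_multi_heap_alloc(SMH_REG_ATTR_NON_CACHEABLE, 0x1000);

   // ...and return it to the pool once no longer needed
   shared_multi_heap_free(buf);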
Adding new attributes
*********************
The API does not enforce any attributes, but it does define the two most
common ones: :c:enumerator:`SMH_REG_ATTR_CACHEABLE` and :c:enumerator:`SMH_REG_ATTR_NON_CACHEABLE`.
.. doxygengroup:: shared_multi_heap
``` | /content/code_sandbox/doc/kernel/memory_management/shared_multi_heap.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 697 |
```restructuredtext
.. _memory_slabs_v2:
Memory Slabs
############
A :dfn:`memory slab` is a kernel object that allows memory blocks
to be dynamically allocated from a designated memory region.
All memory blocks in a memory slab have a single fixed size,
allowing them to be allocated and released efficiently
and avoiding memory fragmentation concerns.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of memory slabs can be defined (limited only by available RAM). Each
memory slab is referenced by its memory address.
A memory slab has the following key properties:
* The **block size** of each block, measured in bytes.
It must be at least 4N bytes long, where N is greater than 0.
* The **number of blocks** available for allocation.
It must be greater than zero.
* A **buffer** that provides the memory for the memory slab's blocks.
It must be at least "block size" times "number of blocks" bytes long.
The memory slab's buffer must be aligned to an N-byte boundary, where
N is a power of 2 larger than 2 (i.e. 4, 8, 16, ...). To ensure that
all memory blocks in the buffer are similarly aligned to this boundary,
the block size must also be a multiple of N.
A memory slab must be initialized before it can be used. This marks all of
its blocks as unused.
A thread that needs to use a memory block simply allocates it from a memory
slab. When the thread finishes with a memory block,
it must release the block back to the memory slab so the block can be reused.
If all the blocks are currently in use, a thread can optionally wait
for one to become available.
Any number of threads may wait on an empty memory slab simultaneously;
when a memory block becomes available, it is given to the highest-priority
thread that has waited the longest.
Unlike a heap, more than one memory slab can be defined, if needed. This
allows for a memory slab with smaller blocks and others with larger-sized
blocks. Alternatively, a memory pool object may be used.
Internal Operation
==================
A memory slab's buffer is an array of fixed-size blocks,
with no wasted space between the blocks.
The memory slab keeps track of unallocated blocks using a linked list;
the first 4 bytes of each unused block provide the necessary linkage.
Implementation
**************
Defining a Memory Slab
======================
A memory slab is defined using a variable of type :c:type:`k_mem_slab`.
It must then be initialized by calling :c:func:`k_mem_slab_init`.
The following code defines and initializes a memory slab that has 6 blocks
that are 400 bytes long, each of which is aligned to a 4-byte boundary.
.. code-block:: c
struct k_mem_slab my_slab;
char __aligned(4) my_slab_buffer[6 * 400];
k_mem_slab_init(&my_slab, my_slab_buffer, 400, 6);
Alternatively, a memory slab can be defined and initialized at compile time
by calling :c:macro:`K_MEM_SLAB_DEFINE`.
The following code has the same effect as the code segment above. Observe
that the macro defines both the memory slab and its buffer.
.. code-block:: c
K_MEM_SLAB_DEFINE(my_slab, 400, 6, 4);
Similarly, you can define a memory slab in private scope:
.. code-block:: c
K_MEM_SLAB_DEFINE_STATIC(my_slab, 400, 6, 4);
Allocating a Memory Block
=========================
A memory block is allocated by calling :c:func:`k_mem_slab_alloc`.
The following code builds on the example above, and waits up to 100 milliseconds
for a memory block to become available, then fills it with zeroes.
A warning is printed if a suitable block is not obtained.
.. code-block:: c
char *block_ptr;
if (k_mem_slab_alloc(&my_slab, (void **)&block_ptr, K_MSEC(100)) == 0) {
memset(block_ptr, 0, 400);
...
} else {
printf("Memory allocation time-out");
}
Releasing a Memory Block
========================
A memory block is released by calling :c:func:`k_mem_slab_free`.
The following code builds on the example above, and allocates a memory block,
then releases it once it is no longer needed.
.. code-block:: c
char *block_ptr;
k_mem_slab_alloc(&my_slab, (void **)&block_ptr, K_FOREVER);
... /* use memory block pointed at by block_ptr */
k_mem_slab_free(&my_slab, (void *)block_ptr);
Suggested Uses
**************
Use a memory slab to allocate and free memory in fixed-size blocks.
Use memory slab blocks when sending large amounts of data from one thread
to another, to avoid unnecessary copying of the data.
Configuration Options
*********************
Related configuration options:
* :kconfig:option:`CONFIG_MEM_SLAB_TRACE_MAX_UTILIZATION`
API Reference
*************
.. doxygengroup:: mem_slab_apis
``` | /content/code_sandbox/doc/kernel/memory_management/slabs.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 1,108 |
```restructuredtext
.. _memory_management_api_demand_paging:
Demand Paging
#############
Demand paging provides a mechanism where data is only brought into physical
memory as required by current execution context. The physical memory is
conceptually divided in page-sized page frames as regions to hold data.
* When the processor tries to access data and the data page exists in
one of the page frames, the execution continues without any interruptions.
* When the processor tries to access the data page that does not exist
in any page frames, a page fault occurs. The paging code then brings in
the corresponding data page from backing store into physical memory if
there is a free page frame. If there are no free page frames left,
the eviction algorithm is invoked to select a data page to be paged out,
thus freeing up a page frame for new data to be paged in. If this data
page has been modified after it was first paged in, the data is
written back into the backing store. If no modification was made, or
once the write-back is complete, the data page is considered
paged out and the corresponding page frame is free. The paging code
then invokes the backing store to page in the data page corresponding to
the location of the requested data. The backing store copies that data
page into the free page frame. Now the data page is in physical memory
and execution can continue.
Paging in and out can also be invoked manually
using :c:func:`k_mem_page_in()` and :c:func:`k_mem_page_out()`.
:c:func:`k_mem_page_in()` can be used to page in data pages
in anticipation that they are required in the near future. This is used to
minimize number of page faults as these data pages are already in physical
memory, and thus minimizing latency. :c:func:`k_mem_page_out()` can be
used to page out data pages where they are not going to be accessed for
a considerable amount of time. This frees up page frames so that the next
page in can be executed faster as the paging code does not need to invoke
the eviction algorithm.
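An illustrative sketch (the buffer and function names are assumptions for
the example): prefetch a buffer before a latency-sensitive phase, then
evict it once it will not be touched for a while:

.. code-block:: c

   extern char big_buffer[65536];

   void prefetch_and_release(void)
   {
           /* Bring the data pages into physical memory ahead of use,
            * avoiding page faults during the processing phase.
            */
           k_mem_page_in(big_buffer, sizeof(big_buffer));

           /* ... latency-sensitive processing on big_buffer ... */

           /* Free up the page frames once the data is cold. */
           k_mem_page_out(big_buffer, sizeof(big_buffer));
   }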
Terminology
***********
Data Page
A data page is a page-sized region of data. It may exist in a page frame,
or be paged out to some backing store. Its location can always be looked
up in the CPU's page tables (or equivalent) by virtual address.
The data type will always be ``void *`` or in some cases ``uint8_t *``
when doing pointer arithmetic.
Page Frame
A page frame is a page-sized physical memory region in RAM. It is a
container where a data page may be placed. It is always referred to by
physical address. Zephyr has a convention of using ``uintptr_t`` for physical
addresses. For every page frame, a ``struct k_mem_page_frame`` is instantiated to
store metadata. Flags for each page frame:
* ``K_MEM_PAGE_FRAME_FREE`` indicates a page frame is unused and on the list of
free page frames. When this flag is set, none of the other flags are
meaningful and they must not be modified.
* ``K_MEM_PAGE_FRAME_PINNED`` indicates a page frame is pinned in memory
and should never be paged out.
* ``K_MEM_PAGE_FRAME_RESERVED`` indicates a physical page reserved by hardware
and should not be used at all.
* ``K_MEM_PAGE_FRAME_MAPPED`` is set when a physical page is mapped to
virtual memory address.
* ``K_MEM_PAGE_FRAME_BUSY`` indicates a page frame is currently involved in
a page-in/out operation.
* ``K_MEM_PAGE_FRAME_BACKED`` indicates a page frame has a clean copy
in the backing store.
K_MEM_SCRATCH_PAGE
The virtual address of a special page provided to the backing store to:
* Copy a data page from ``K_MEM_SCRATCH_PAGE`` to the specified location; or,
* Copy a data page from the provided location to ``K_MEM_SCRATCH_PAGE``.
This is used as an intermediate page for page in/out operations. This
scratch needs to be mapped read/write for backing store code to access.
However the data page itself may only be mapped as read-only in virtual
address space. If this page is provided as-is to backing store,
the data page must be re-mapped as read/write which has security
implications as the data page is no longer read-only to other parts of
the application.
Paging Statistics
*****************
Paging statistics can be obtained via various function calls when
:kconfig:option:`CONFIG_DEMAND_PAGING_STATS` is enabled:
* Overall statistics via :c:func:`k_mem_paging_stats_get()`
* Per-thread statistics via :c:func:`k_mem_paging_thread_stats_get()`
if :kconfig:option:`CONFIG_DEMAND_PAGING_THREAD_STATS` is enabled
* Execution time histogram can be obtained when
:kconfig:option:`CONFIG_DEMAND_PAGING_TIMING_HISTOGRAM` is enabled, and
:kconfig:option:`CONFIG_DEMAND_PAGING_TIMING_HISTOGRAM_NUM_BINS` is defined.
Note that the timing is highly dependent on the architecture,
SoC or board. It is highly recommended that
``k_mem_paging_eviction_histogram_bounds[]`` and
``k_mem_paging_backing_store_histogram_bounds[]``
be defined for a particular application.
* Execution time histogram of eviction algorithm via
:c:func:`k_mem_paging_histogram_eviction_get()`
* Execution time histogram of backing store doing page-in via
:c:func:`k_mem_paging_histogram_backing_store_page_in_get()`
* Execution time histogram of backing store doing page-out via
:c:func:`k_mem_paging_histogram_backing_store_page_out_get()`
Eviction Algorithm
******************
The eviction algorithm is used to determine which data page and its
corresponding page frame can be paged out to free up a page frame
for the next page in operation. There are two functions which are
called from the kernel paging code:
* :c:func:`k_mem_paging_eviction_init()` is called to initialize
the eviction algorithm. This is called at ``POST_KERNEL``.
* :c:func:`k_mem_paging_eviction_select()` is called to select
a data page to evict. A function argument ``dirty`` is written to
signal the caller whether the selected data page has been modified
since it was first paged in. If the ``dirty`` bit is returned
as set, the paging code signals to the backing store to write
the data page back into storage (thus updating its content).
The function returns a pointer to the page frame corresponding to
the selected data page.
Currently, a NRU (Not-Recently-Used) eviction algorithm has been
implemented as a sample. This is a very simple algorithm which
ranks each data page on whether they have been accessed and modified.
The selection is based on this ranking.
To implement a new eviction algorithm, the two functions mentioned
above must be implemented.
Backing Store
*************
Backing store is responsible for paging in/out data page between
their corresponding page frames and storage. These are the functions
which must be implemented:
* :c:func:`k_mem_paging_backing_store_init()` is called to
initialize the backing store at ``POST_KERNEL``.
* :c:func:`k_mem_paging_backing_store_location_get()` is called to
reserve a backing store location so a data page can be paged out.
This ``location`` token is passed to
:c:func:`k_mem_paging_backing_store_page_out()` to perform actual
page out operation.
* :c:func:`k_mem_paging_backing_store_location_free()` is called to
free a backing store location (the ``location`` token) which can
then be used for subsequent page out operation.
* :c:func:`k_mem_paging_backing_store_page_in()` copies a data page
from the backing store location associated with the provided
``location`` token to the page pointed by ``K_MEM_SCRATCH_PAGE``.
* :c:func:`k_mem_paging_backing_store_page_out()` copies a data page
from ``K_MEM_SCRATCH_PAGE`` to the backing store location associated
with the provided ``location`` token.
* :c:func:`k_mem_paging_backing_store_page_finalize()` is invoked after
:c:func:`k_mem_paging_backing_store_page_in()` so that the page frame
struct may be updated for internal accounting. This can be
a no-op.
To implement a new backing store, the functions mentioned above
must be implemented.
:c:func:`k_mem_paging_backing_store_page_finalize()` can be an empty
function if so desired.
API Reference
*************
.. doxygengroup:: mem-demand-paging
Eviction Algorithm APIs
=======================
.. doxygengroup:: mem-demand-paging-eviction
Backing Store APIs
==================
.. doxygengroup:: mem-demand-paging-backing-store
``` | /content/code_sandbox/doc/kernel/memory_management/demand_paging.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 1,988 |
```restructuredtext
.. _memory_management_api_virtual_memory:
Virtual Memory
##############
Virtual memory (VM) in Zephyr provides developers with the ability to fine-tune
access to memory. To utilize virtual memory, the platform must have a
Memory Management Unit (MMU) and it must be enabled in the build. Because
Zephyr mainly targets embedded systems, its virtual memory support
differs somewhat from that in traditional operating
systems:
Mapping of Kernel Image
Default is to do 1:1 mapping for the kernel image (including code and data)
between physical and virtual memory address spaces, if demand paging
is not enabled. Deviation from this requires careful manipulation of
linker script.
Secondary Storage
Basic virtual memory support does not utilize secondary storage to
extend usable memory. The maximum usable memory is the same as
the physical memory.
* :ref:`memory_management_api_demand_paging` enables utilizing
secondary storage as a backing store for virtual memory, thus
allowing larger usable memory than the available physical memory.
Note that demand paging needs to be explicitly enabled.
* Although the virtual memory space can be larger than physical
memory space, without enabling demand paging, all virtually
mapped memory must be backed by physical memory.
Kconfigs
********
Required
========
These are the Kconfigs that need to be enabled or defined for kernel to support
virtual memory.
* :kconfig:option:`CONFIG_MMU`: must be enabled for virtual memory support in
kernel.
* :kconfig:option:`CONFIG_MMU_PAGE_SIZE`: size of a memory page. Default is 4KB.
* :kconfig:option:`CONFIG_KERNEL_VM_BASE`: base address of virtual address space.
* :kconfig:option:`CONFIG_KERNEL_VM_SIZE`: size of virtual address space.
Default is 8MB.
* :kconfig:option:`CONFIG_KERNEL_VM_OFFSET`: kernel image starts at this offset
from :kconfig:option:`CONFIG_KERNEL_VM_BASE`.
Optional
========
* :kconfig:option:`CONFIG_KERNEL_DIRECT_MAP`: permits 1:1 mappings between
virtual and physical addresses, instead of kernel choosing addresses within
the virtual address space. This is useful for mapping device MMIO regions for
more precise access control.
Memory Map Overview
*******************
This is an overview of the memory map of the virtual memory address space.
Note that the ``Z_*`` macros, which are used in code, may have different
meanings depending on architecture and Kconfigs, which will be explained
below.
.. code-block:: none
:emphasize-lines: 1, 3, 9, 22, 24
+--------------+ <- K_MEM_VIRT_RAM_START
| Undefined VM | <- architecture specific reserved area
+--------------+ <- K_MEM_KERNEL_VIRT_START
| Mapping for |
| main kernel |
| image |
| |
| |
+--------------+ <- K_MEM_VM_FREE_START
| |
| Unused, |
| Available VM |
| |
|..............| <- grows downward as more mappings are made
| Mapping |
+--------------+
| Mapping |
+--------------+
| ... |
+--------------+
| Mapping |
+--------------+ <- memory mappings start here
| Reserved | <- special purpose virtual page(s) of size K_MEM_VM_RESERVED
+--------------+ <- K_MEM_VIRT_RAM_END
* ``K_MEM_VIRT_RAM_START`` is the beginning of the virtual memory address space.
This needs to be page aligned. Currently, it is the same as
:kconfig:option:`CONFIG_KERNEL_VM_BASE`.
* ``K_MEM_VIRT_RAM_SIZE`` is the size of the virtual memory address space.
This needs to be page aligned. Currently, it is the same as
:kconfig:option:`CONFIG_KERNEL_VM_SIZE`.
* ``K_MEM_VIRT_RAM_END`` is simply (``K_MEM_VIRT_RAM_START`` + ``K_MEM_VIRT_RAM_SIZE``).
* ``K_MEM_KERNEL_VIRT_START`` is the same as ``z_mapped_start`` specified in the linker
script. This is the virtual address of the beginning of the kernel image at
boot time.
* ``K_MEM_KERNEL_VIRT_END`` is the same as ``z_mapped_end`` specified in the linker
script. This is the virtual address of the end of the kernel image at boot time.
* ``K_MEM_VM_FREE_START`` is the beginning of the virtual address space where addresses
can be allocated for memory mapping. This depends on whether
:kconfig:option:`CONFIG_ARCH_MAPS_ALL_RAM` is enabled.
* If it is enabled, all physical memory is mapped in the virtual
memory address space, and ``K_MEM_VM_FREE_START`` is the same as
(:kconfig:option:`CONFIG_SRAM_BASE_ADDRESS` + :kconfig:option:`CONFIG_SRAM_SIZE`).
* If it is disabled, ``K_MEM_VM_FREE_START`` is the same as ``K_MEM_KERNEL_VIRT_END``,
which is the end of the kernel image.
* ``K_MEM_VM_RESERVED`` is an area reserved to support kernel functions. For example,
some addresses are reserved to support demand paging.
Virtual Memory Mappings
***********************
Setting up Mappings at Boot
===========================
In general, most supported architectures set up the memory mappings at boot as
follows:
* ``.text`` section is read-only and executable. It is accessible in
both kernel and user modes.
* ``.rodata`` section is read-only and non-executable. It is accessible
in both kernel and user modes.
* Other kernel sections, such as ``.data``, ``.bss`` and ``.noinit``, are
read-write and non-executable. They are only accessible in kernel mode.
* Stacks for user mode threads are automatically granted read-write access
to their corresponding user mode threads during thread creation.
* Global variables, by default, are not accessible to user mode threads.
Refer to :ref:`Memory Domains and Partitions<memory_domain>` on how to
use global variables in user mode threads, and on how to share data
between user mode threads.
Caching modes for these mappings are architecture specific. They can be
none, write-back, or write-through.
Note that SoCs have their own additional mappings required to boot where
these mappings are defined under their own SoC configurations. These mappings
usually include device MMIO regions needed to setup the hardware.
Mapping Anonymous Memory
========================
The unused physical memory can be mapped in virtual address space on demand.
This is conceptually similar to memory allocation from heap, but these
mappings must be aligned on page size and have finer access control.
* :c:func:`k_mem_map` can be used to map unused physical memory:
* The requested size must be multiple of page size.
* The address returned is inside the virtual address space between
``K_MEM_VM_FREE_START`` and ``K_MEM_VIRT_RAM_END``.
* The mapped region is not guaranteed to be physically contiguous in memory.
  * Guard pages immediately before and after the mapped virtual region are
    automatically allocated to catch access issues due to buffer underruns
    or overruns.
* The mapped region can be unmapped (i.e. freed) via :c:func:`k_mem_unmap`:
  * Caution must be exercised to pass the same region size to
    both :c:func:`k_mem_map` and :c:func:`k_mem_unmap`. The unmapping
    function does not check whether it is a valid mapped region before unmapping.
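The map/unmap pattern can be sketched as follows. This is a minimal sketch, not
a definitive implementation: it assumes an MMU-enabled target where
``CONFIG_MMU_PAGE_SIZE`` is defined, and omits most error handling.

.. code-block:: c

   #include <string.h>

   #include <zephyr/kernel.h>
   #include <zephyr/kernel/mm.h>

   void anonymous_mapping_demo(void)
   {
           /* The requested size must be a multiple of the page size */
           size_t size = 4 * CONFIG_MMU_PAGE_SIZE;
           uint8_t *mem;

           /* Map unused physical memory, read-write, into virtual address space */
           mem = k_mem_map(size, K_MEM_PERM_RW);
           if (mem == NULL) {
                   /* Out of virtual address space or free physical pages */
                   return;
           }

           memset(mem, 0, size);

           /* Unmap with exactly the same base address and size used to map */
           k_mem_unmap(mem, size);
   }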
API Reference
*************
.. doxygengroup:: kernel_memory_management
``` | /content/code_sandbox/doc/kernel/memory_management/virtual_memory.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 1,656 |
```restructuredtext
.. _iterable_sections_api:
Iterable Sections
#################
This page contains the reference documentation for the iterable sections APIs,
which can be used to define iterable areas of equally-sized data structures
that can be iterated over using :c:macro:`STRUCT_SECTION_FOREACH`.
Usage
*****
Iterable section elements are typically used by defining the data structure and
associated initializer in a common header file, so that they can be
instantiated anywhere in the code base.
.. code-block:: c
struct my_data {
int a, b;
};
#define DEFINE_DATA(name, _a, _b) \
STRUCT_SECTION_ITERABLE(my_data, name) = { \
.a = _a, \
.b = _b, \
}
...
DEFINE_DATA(d1, 1, 2);
DEFINE_DATA(d2, 3, 4);
DEFINE_DATA(d3, 5, 6);
Then the linker has to be set up to place the structures in a
contiguous segment using one of the linker macros such as
:c:macro:`ITERABLE_SECTION_RAM` or :c:macro:`ITERABLE_SECTION_ROM`. Custom
linker snippets are normally declared using one of the
``zephyr_linker_sources()`` CMake functions, using the appropriate section
identifier, ``DATA_SECTIONS`` for RAM structures and ``SECTIONS`` for ROM ones.
.. code-block:: cmake
# CMakeLists.txt
zephyr_linker_sources(DATA_SECTIONS iterables.ld)
.. code-block:: c
# iterables.ld
ITERABLE_SECTION_RAM(my_data, 4)
The data can then be accessed using :c:macro:`STRUCT_SECTION_FOREACH`.
.. code-block:: c
STRUCT_SECTION_FOREACH(my_data, data) {
printk("%p: a: %d, b: %d\n", data, data->a, data->b);
}
.. note::
The linker is going to place the entries sorted by name, so the example
above would visit ``d1``, ``d2`` and ``d3`` in that order, regardless of how
they were defined in the code.
API Reference
*************
.. doxygengroup:: iterable_section_apis
``` | /content/code_sandbox/doc/kernel/iterable_sections/index.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 471 |
```restructuredtext
.. _timing_functions:
Executing Time Functions
########################
The timing functions can be used to obtain the execution time of
a section of code to aid in analysis and optimization.
Please note that the timing functions may use a different timer
than the default kernel timer, where the timer being used is
specified by architecture, SoC or board configuration.
Configuration
*************
To allow using the timing functions, :kconfig:option:`CONFIG_TIMING_FUNCTIONS`
needs to be enabled.
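For example, this can be enabled in the application's ``prj.conf``:

.. code-block:: cfg

   CONFIG_TIMING_FUNCTIONS=y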
Usage
*****
To gather timing information:
1. Call :c:func:`timing_init` to initialize the timer.
2. Call :c:func:`timing_start` to signal the start of gathering of
timing information. This usually starts the timer.
3. Call :c:func:`timing_counter_get` to mark the start of code
execution.
4. Call :c:func:`timing_counter_get` to mark the end of code
execution.
5. Call :c:func:`timing_cycles_get` to get the number of timer cycles
between start and end of code execution.
6. Call :c:func:`timing_cycles_to_ns` with total number of cycles
to convert number of cycles to nanoseconds.
7. Repeat from step 3 to gather timing information for other
blocks of code.
8. Call :c:func:`timing_stop` to signal the end of gathering of
timing information. This usually stops the timer.
Example
-------
This example shows how to use the timing functions:
.. code-block:: c
#include <zephyr/timing/timing.h>
void gather_timing(void)
{
timing_t start_time, end_time;
uint64_t total_cycles;
uint64_t total_ns;
timing_init();
timing_start();
start_time = timing_counter_get();
code_execution_to_be_measured();
end_time = timing_counter_get();
total_cycles = timing_cycles_get(&start_time, &end_time);
total_ns = timing_cycles_to_ns(total_cycles);
timing_stop();
}
API documentation
*****************
.. doxygengroup:: timing_api
.. doxygengroup:: timing_api_arch
.. doxygengroup:: timing_api_soc
.. doxygengroup:: timing_api_board
``` | /content/code_sandbox/doc/kernel/timing_functions/index.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 477 |
```restructuredtext
.. _object_cores_api:
Object Cores
############
Object cores are a kernel debugging tool that can be used to both identify and
perform operations on registered objects.
.. contents::
:local:
:depth: 2
Object Core Concepts
********************
Each instance of an object embeds an object core field named ``obj_core``.
Objects of the same type are linked together via their respective object
cores to form a singly linked list. Each object core also links to its
respective object type. Each object type contains a singly linked list
linking together all the object cores of that type. Object types are also
linked together via a singly linked list. Together, this can allow debugging
tools to traverse all the objects in the system.
Object cores have been integrated into the following kernel objects:
* :ref:`Condition Variables <condvar>`
* :ref:`Events <events>`
* :ref:`FIFOs <fifos_v2>` and :ref:`LIFOs <lifos_v2>`
* :ref:`Mailboxes <mailboxes_v2>`
* :ref:`Memory Slabs <memory_slabs_v2>`
* :ref:`Message Queues <message_queues_v2>`
* :ref:`Mutexes <mutexes_v2>`
* :ref:`Pipes <pipes_v2>`
* :ref:`Semaphores <semaphores_v2>`
* :ref:`Threads <threads_v2>`
* :ref:`Timers <timers_v2>`
* :ref:`System Memory Blocks <sys_mem_blocks>`
Developers are free to integrate them if desired into other objects within
their projects.
Object Core Statistics Concepts
*******************************
A variety of kernel objects allow for the gathering and reporting of statistics.
Object cores provide a uniform means to retrieve that information via object
core statistics. When enabled, the object type contains a pointer to a
statistics descriptor that defines the various operations that have been
enabled for interfacing with the object's statistics. Additionally, the object
core contains a pointer to the "raw" statistical information associated with
that object. Raw data is the raw, unmanipulated data associated with the
statistics. Queried data may be "raw", but it may also have been manipulated in
some way by calculation (such as determining an average).
The following table indicates both what objects have been integrated into the
object core statistics as well as the structures used for both "raw" and
"queried" data.
===================== ============================== ==============================
Object Raw Data Type Query Data Type
===================== ============================== ==============================
struct mem_slab struct mem_slab_info struct sys_memory_stats
struct sys_mem_blocks struct sys_mem_blocks_info struct sys_memory_stats
struct k_thread struct k_cycle_stats struct k_thread_runtime_stats
struct _cpu struct k_cycle_stats struct k_thread_runtime_stats
struct z_kernel struct k_cycle_stats[num CPUs] struct k_thread_runtime_stats
===================== ============================== ==============================
Implementation
**************
Defining a New Object Type
==========================
An object type is defined using a global variable of type
:c:struct:`k_obj_type`. It must be initialized before any objects of that type
are initialized. The following code shows how a new object type can be
initialized for use with object cores and object core statistics.
.. code-block:: c
/* Unique object type ID */
#define K_OBJ_TYPE_MY_NEW_TYPE K_OBJ_TYPE_ID_GEN("UNIQ")
struct k_obj_type my_obj_type;
struct my_obj_type_raw_info {
...
};
struct my_obj_type_query_stats {
...
};
struct my_new_obj {
...
struct k_obj_core obj_core;
struct my_obj_type_raw_info info;
};
struct k_obj_core_stats_desc my_obj_type_stats_desc = {
        .raw_size = sizeof(struct my_obj_type_raw_info),
.query_size = sizeof(struct my_obj_type_query_stats),
.raw = my_obj_type_stats_raw,
.query = my_obj_type_stats_query,
.reset = my_obj_type_stats_reset,
.disable = NULL, /* Stats gathering is always on */
.enable = NULL, /* Stats gathering is always on */
};
void my_obj_type_init(void)
{
        z_obj_type_init(&my_obj_type, K_OBJ_TYPE_MY_NEW_TYPE,
                        offsetof(struct my_new_obj, obj_core));
k_obj_type_stats_init(&my_obj_type, &my_obj_type_stats_desc);
}
Initializing a New Object Core
==============================
Kernel objects that have already been integrated into the object core framework
automatically have their object cores initialized when the object is
initialized. However, developers that wish to add their own objects into the
framework need to both initialize the object core and link it. The following
code builds on the example above and initializes the object core.
.. code-block:: c
void my_new_obj_init(struct my_new_obj *new_obj)
{
...
k_obj_core_init(K_OBJ_CORE(new_obj), &my_obj_type);
k_obj_core_link(K_OBJ_CORE(new_obj));
        k_obj_core_stats_register(K_OBJ_CORE(new_obj), &new_obj->info,
                                  sizeof(struct my_obj_type_raw_info));
}
Walking a List of Object Cores
==============================
Two routines exist for walking the list of object cores linked to an object
type. These are :c:func:`k_obj_type_walk_locked` and
:c:func:`k_obj_type_walk_unlocked`. The following code builds upon the example
above and prints the addresses of all the objects of that new object type.
.. code-block:: c
int walk_op(struct k_obj_core *obj_core, void *data)
{
uint8_t *ptr;
        ptr = (uint8_t *)obj_core;
ptr -= obj_core->type->obj_core_offset;
printk("%p\n", ptr);
return 0;
}
void print_object_addresses(void)
{
struct k_obj_type *obj_type;
/* Find the object type */
obj_type = k_obj_type_find(K_OBJ_TYPE_MY_NEW_TYPE);
/* Walk the list of objects */
k_obj_type_walk_unlocked(obj_type, walk_op, NULL);
}
Object Core Statistics Querying
===============================
The following code builds on the examples above and shows how an object
integrated into the object core statistics framework can both retrieve queried
data and reset the stats associated with the object.
.. code-block:: c
struct my_new_obj my_obj;
...
void my_func(void)
{
struct my_obj_type_query_stats my_stats;
int status;
        my_new_obj_init(&my_obj);
...
status = k_obj_core_stats_query(K_OBJ_CORE(&my_obj),
&my_stats, sizeof(my_stats));
if (status != 0) {
/* Failed to get stats */
...
} else {
k_obj_core_stats_reset(K_OBJ_CORE(&my_obj));
}
...
}
Configuration Options
*********************
Related configuration options:
* :kconfig:option:`CONFIG_OBJ_CORE`
* :kconfig:option:`CONFIG_OBJ_CORE_CONDVAR`
* :kconfig:option:`CONFIG_OBJ_CORE_EVENT`
* :kconfig:option:`CONFIG_OBJ_CORE_FIFO`
* :kconfig:option:`CONFIG_OBJ_CORE_LIFO`
* :kconfig:option:`CONFIG_OBJ_CORE_MAILBOX`
* :kconfig:option:`CONFIG_OBJ_CORE_MEM_SLAB`
* :kconfig:option:`CONFIG_OBJ_CORE_MSGQ`
* :kconfig:option:`CONFIG_OBJ_CORE_MUTEX`
* :kconfig:option:`CONFIG_OBJ_CORE_PIPE`
* :kconfig:option:`CONFIG_OBJ_CORE_SEM`
* :kconfig:option:`CONFIG_OBJ_CORE_STACK`
* :kconfig:option:`CONFIG_OBJ_CORE_THREAD`
* :kconfig:option:`CONFIG_OBJ_CORE_TIMER`
* :kconfig:option:`CONFIG_OBJ_CORE_SYS_MEM_BLOCKS`
* :kconfig:option:`CONFIG_OBJ_CORE_STATS`
* :kconfig:option:`CONFIG_OBJ_CORE_STATS_MEM_SLAB`
* :kconfig:option:`CONFIG_OBJ_CORE_STATS_THREAD`
* :kconfig:option:`CONFIG_OBJ_CORE_STATS_SYSTEM`
* :kconfig:option:`CONFIG_OBJ_CORE_STATS_SYS_MEM_BLOCKS`
API Reference
*************
.. doxygengroup:: obj_core_apis
.. doxygengroup:: obj_core_stats_apis
``` | /content/code_sandbox/doc/kernel/object_cores/index.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 1,766 |
```restructuredtext
.. _util_api:
Utilities
#########
This page contains reference documentation for ``<sys/util.h>``, which provides
miscellaneous utility functions and macros.
.. doxygengroup:: sys-util
``` | /content/code_sandbox/doc/kernel/util/index.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 43 |
```restructuredtext
.. _device_model_api:
Device Driver Model
###################
Introduction
************
The Zephyr kernel supports a variety of device drivers. Whether a
driver is available depends on the board and the driver.
The Zephyr device model provides a consistent device model for configuring the
drivers that are part of a system. The device model is responsible
for initializing all the drivers configured into the system.
Each type of driver (e.g. UART, SPI, I2C) is supported by a generic type API.
In this model the driver fills in the pointer to the structure containing the
function pointers to its API functions during driver initialization. These
structures are placed into the RAM section in initialization level order.
.. image:: device_driver_model.svg
:width: 40%
:align: center
:alt: Device Driver Model
Standard Drivers
****************
Device drivers which are present on all supported board configurations
are listed below.
* **Interrupt controller**: This device driver is used by the kernel's
interrupt management subsystem.
* **Timer**: This device driver is used by the kernel's system clock and
hardware clock subsystem.
* **Serial communication**: This device driver is used by the kernel's
system console subsystem.
* **Entropy**: This device driver provides a source of entropy numbers
for the random number generator subsystem.
.. important::
Use the :ref:`random API functions <random_api>` for random
values. :ref:`Entropy functions <entropy_api>` should not be
directly used as a random number generator source as some hardware
implementations are designed to be an entropy seed source for random
number generators and will not provide cryptographically secure
random number streams.
Synchronous Calls
*****************
Zephyr provides a set of device drivers for multiple boards. Each driver
should support an interrupt-based implementation, rather than polling, unless
the specific hardware does not provide any interrupt.
High-level calls accessed through device-specific APIs, such as
:file:`i2c.h` or :file:`spi.h`, are usually intended as synchronous. Thus,
these calls should be blocking.
Driver APIs
***********
The following APIs for device drivers are provided by :file:`device.h`. The APIs
are intended for use in device drivers only and should not be used in
applications.
:c:func:`DEVICE_DEFINE()`
Create device object and related data structures including setting it
up for boot-time initialization.
:c:func:`DEVICE_NAME_GET()`
Converts a device identifier to the global identifier for a device
object.
:c:func:`DEVICE_GET()`
Obtain a pointer to a device object by name.
:c:func:`DEVICE_DECLARE()`
Declare a device object. Use this when you need a forward reference
to a device that has not yet been defined.
.. _device_struct:
Driver Data Structures
**********************
The device initialization macros populate some data structures at build time
which are
split into read-only and runtime-mutable parts. At a high level we have:
.. code-block:: C
struct device {
const char *name;
const void *config;
const void *api;
void * const data;
};
The ``config`` member is for read-only configuration data set at build time. For
example, base memory mapped IO addresses, IRQ line numbers, or other fixed
physical characteristics of the device. This is the ``config`` pointer
passed to ``DEVICE_DEFINE()`` and related macros.
The ``data`` struct is kept in RAM, and is used by the driver for
per-instance runtime housekeeping. For example, it may contain reference counts,
semaphores, scratch buffers, etc.
The ``api`` struct maps generic subsystem APIs to the device-specific
implementations in the driver. It is typically read-only and populated at
build time. The next section describes this in more detail.
Subsystems and API Structures
*****************************
Most drivers will be implementing a device-independent subsystem API.
Applications can simply program to that generic API, and application
code is not specific to any particular driver implementation.
A subsystem API definition typically looks like this:
.. code-block:: C
typedef int (*subsystem_do_this_t)(const struct device *dev, int foo, int bar);
typedef void (*subsystem_do_that_t)(const struct device *dev, void *baz);
struct subsystem_api {
subsystem_do_this_t do_this;
subsystem_do_that_t do_that;
};
static inline int subsystem_do_this(const struct device *dev, int foo, int bar)
{
struct subsystem_api *api;
api = (struct subsystem_api *)dev->api;
return api->do_this(dev, foo, bar);
}
static inline void subsystem_do_that(const struct device *dev, void *baz)
{
struct subsystem_api *api;
api = (struct subsystem_api *)dev->api;
api->do_that(dev, baz);
}
A driver implementing a particular subsystem will define the real implementation
of these APIs, and populate an instance of subsystem_api structure:
.. code-block:: C
static int my_driver_do_this(const struct device *dev, int foo, int bar)
{
...
}
static void my_driver_do_that(const struct device *dev, void *baz)
{
...
}
static struct subsystem_api my_driver_api_funcs = {
.do_this = my_driver_do_this,
.do_that = my_driver_do_that
};
The driver would then pass ``my_driver_api_funcs`` as the ``api`` argument to
``DEVICE_DEFINE()``.
.. note::
Since pointers to the API functions are referenced in the ``api``
   struct, they will always be included in the binary even if unused; the
   ``gc-sections`` linker option will always see at least one reference to
them. Providing for link-time size optimizations with driver APIs in
most cases requires that the optional feature be controlled by a
Kconfig option.
Device-Specific API Extensions
******************************
Some devices can be cast as an instance of a driver subsystem such as GPIO,
but provide additional functionality that cannot be exposed through the
standard API. These devices combine subsystem operations with
device-specific APIs, described in a device-specific header.
A device-specific API definition typically looks like this:
.. code-block:: C
#include <zephyr/drivers/subsystem.h>
/* When extensions need not be invoked from user mode threads */
int specific_do_that(const struct device *dev, int foo);
/* When extensions must be invokable from user mode threads */
__syscall int specific_from_user(const struct device *dev, int bar);
/* Only needed when extensions include syscalls */
#include <zephyr/syscalls/specific.h>
A driver implementing extensions to the subsystem will define the real
implementation of both the subsystem API and the specific APIs:
.. code-block:: C
static int generic_do_this(const struct device *dev, void *arg)
{
...
}
   static struct generic_api api = {
...
.do_this = generic_do_this,
...
};
/* supervisor-only API is globally visible */
int specific_do_that(const struct device *dev, int foo)
{
...
}
/* syscall API passes through a translation */
int z_impl_specific_from_user(const struct device *dev, int bar)
{
...
}
#ifdef CONFIG_USERSPACE
#include <zephyr/internal/syscall_handler.h>
int z_vrfy_specific_from_user(const struct device *dev, int bar)
{
K_OOPS(K_SYSCALL_SPECIFIC_DRIVER(dev, K_OBJ_DRIVER_GENERIC, &api));
       return z_impl_specific_from_user(dev, bar);
}
#include <zephyr/syscalls/specific_from_user_mrsh.c>
#endif /* CONFIG_USERSPACE */
Applications use the device through both the subsystem and specific
APIs.
.. note::
Public API for device-specific extensions should be prefixed with the
compatible for the device to which it applies. For example, if
adding special functions to support the Maxim DS3231 the identifier
fragment ``specific`` in the examples above would be ``maxim_ds3231``.
Single Driver, Multiple Instances
*********************************
Some drivers may be instantiated multiple times in a given system. For example
there can be multiple GPIO banks, or multiple UARTS. Each instance of the driver
will have a different ``config`` struct and ``data`` struct.
Configuring interrupts for multiple drivers instances is a special case. If each
instance needs to configure a different interrupt line, this can be accomplished
through the use of per-instance configuration functions, since the parameters
to ``IRQ_CONNECT()`` need to be resolvable at build time.
For example, let's say we need to configure two instances of ``my_driver``, each
with a different interrupt line. In ``drivers/subsystem/subsystem_my_driver.h``:
.. code-block:: C
typedef void (*my_driver_config_irq_t)(const struct device *dev);
struct my_driver_config {
DEVICE_MMIO_ROM;
my_driver_config_irq_t config_func;
};
In the implementation of the common init function:
.. code-block:: C
void my_driver_isr(const struct device *dev)
{
/* Handle interrupt */
...
}
int my_driver_init(const struct device *dev)
{
const struct my_driver_config *config = dev->config;
DEVICE_MMIO_MAP(dev, K_MEM_CACHE_NONE);
/* Do other initialization stuff */
...
config->config_func(dev);
return 0;
}
Then when the particular instance is declared:
.. code-block:: C
#if CONFIG_MY_DRIVER_0
DEVICE_DECLARE(my_driver_0);
static void my_driver_config_irq_0(const struct device *dev)
{
IRQ_CONNECT(MY_DRIVER_0_IRQ, MY_DRIVER_0_PRI, my_driver_isr,
DEVICE_GET(my_driver_0), MY_DRIVER_0_FLAGS);
}
   const static struct my_driver_config my_driver_config_0 = {
           DEVICE_MMIO_ROM_INIT(DT_DRV_INST(0)),
           .config_func = my_driver_config_irq_0
   };

   static struct my_driver_dev_data my_data_0;
DEVICE_DEFINE(my_driver_0, MY_DRIVER_0_NAME, my_driver_init,
NULL, &my_data_0, &my_driver_config_0,
POST_KERNEL, MY_DRIVER_0_PRIORITY, &my_api_funcs);
#endif /* CONFIG_MY_DRIVER_0 */
Note the use of ``DEVICE_DECLARE()`` to avoid a circular dependency on providing
the IRQ handler argument and the definition of the device itself.
Initialization Levels
*********************
Drivers may depend on other drivers being initialized first, or require
the use of kernel services. :c:func:`DEVICE_DEFINE()` and related APIs
allow the user to specify at what time during the boot sequence the init
function will be executed. Any driver will specify one of the following
initialization levels:
``PRE_KERNEL_1``
Used for devices that have no dependencies, such as those that rely
solely on hardware present in the processor/SOC. These devices cannot
use any kernel services during configuration, since the kernel services are
not yet available. The interrupt subsystem will be configured however
so it's OK to set up interrupts. Init functions at this level run on the
interrupt stack.
``PRE_KERNEL_2``
Used for devices that rely on the initialization of devices initialized
as part of the ``PRE_KERNEL_1`` level. These devices cannot use any kernel
services during configuration, since the kernel services are not yet
available. Init functions at this level run on the interrupt stack.
``POST_KERNEL``
Used for devices that require kernel services during configuration.
Init functions at this level run in context of the kernel main task.
Within each initialization level you may specify a priority level, relative to
other devices in the same initialization level. The priority level is specified
as an integer value in the range 0 to 99; lower values indicate earlier
initialization. The priority level must be a decimal integer literal without
leading zeroes or sign (e.g. 32), or an equivalent symbolic name (e.g.
``#define MY_INIT_PRIO 32``); symbolic expressions are *not* permitted (e.g.
``CONFIG_KERNEL_INIT_PRIORITY_DEFAULT + 5``).
Drivers and other system utilities can determine whether startup is
still in pre-kernel states by using the :c:func:`k_is_pre_kernel`
function.
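For example, a utility that must avoid kernel services when invoked early in
boot might guard itself as follows (a hypothetical sketch; the function name is
illustrative):

.. code-block:: C

   #include <zephyr/kernel.h>

   void my_util_notify(void)
   {
           if (k_is_pre_kernel()) {
                   /* Kernel services (threads, sleeping, ...) not available yet */
                   return;
           }

           k_msleep(10); /* Safe only after the kernel has started */
   }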
Deferred initialization
***********************
Initialization of devices can also be deferred to a later time. In this case,
the device is not automatically initialized by Zephyr at boot time. Instead,
the device is initialized when the application calls :c:func:`device_init`.
To defer a device driver initialization, add the property ``zephyr,deferred-init``
to the associated device node in the DTS file. For example:
.. code-block:: devicetree
/ {
a-driver@40000000 {
reg = <0x40000000 0x1000>;
zephyr,deferred-init;
};
};
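The application can then bring the device up at a time of its choosing. The
sketch below assumes the node from the fragment above; the function name is
illustrative and error handling is minimal:

.. code-block:: C

   #include <zephyr/device.h>

   /* Device object for the node above; left uninitialized at boot */
   const struct device *my_dev = DEVICE_DT_GET(DT_PATH(a_driver_40000000));

   int app_bring_up(void)
   {
           /* Runs the device's init function now instead of at boot time */
           return device_init(my_dev);
   }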
System Drivers
**************
In some cases you may just need to run a function at boot. For such cases, the
:c:macro:`SYS_INIT` macro can be used. This macro does not take any config or runtime
data structures and there isn't a way to later get a device pointer by name. The
same device policies for initialization level and priority apply.
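For example, a one-shot boot hook might look like this (the function name and
priority are illustrative):

.. code-block:: C

   #include <zephyr/init.h>

   static int my_boot_hook(void)
   {
           /* One-shot setup code; a nonzero return value flags an error */
           return 0;
   }

   SYS_INIT(my_boot_hook, POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);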
Inspecting the initialization sequence
**************************************
Device drivers declared with :c:macro:`DEVICE_DEFINE` (or any variations of it)
and :c:macro:`SYS_INIT` are processed at boot time and the corresponding
initialization functions are called sequentially according to their specified
level and priority.
Sometimes it's useful to inspect the final sequence of initialization function
calls as produced by the linker. To do that, use the ``initlevels`` CMake
target, for example ``west build -t initlevels``.
Error handling
**************
In general, it's best to use ``__ASSERT()`` macros instead of
propagating return values unless the failure is expected to occur
during the normal course of operation (such as a storage device
full). Bad parameters, programming errors, consistency checks,
pathological/unrecoverable failures, etc., should be handled by
assertions.
When it is appropriate to return error conditions for the caller to
check, 0 should be returned on success and a POSIX :file:`errno.h` code
returned on failure. See
path_to_url#return-codes
for details about this.
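A sketch of this split between assertions and error codes is shown below
(``storage_is_full()`` and the function name are hypothetical):

.. code-block:: C

   int my_driver_write(const struct device *dev, const uint8_t *buf, size_t len)
   {
           /* Programming error: catch with an assertion, not a return code */
           __ASSERT(buf != NULL, "buf must not be NULL");

           if (storage_is_full(dev)) {
                   /* Expected runtime failure: report a POSIX errno code */
                   return -ENOSPC;
           }

           /* ... perform the write ... */
           return 0;
   }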
Memory Mapping
**************
On some systems, the linear address of peripheral memory-mapped I/O (MMIO)
regions cannot be known at build time:
- The I/O ranges must be probed at runtime from the bus, such as with
PCI express
- A memory management unit (MMU) is active, and the physical address of
the MMIO range must be mapped into the page tables at some virtual
memory location determined by the kernel.
These systems must maintain storage for the MMIO range within RAM and
establish the mapping within the driver's init function. Other systems
do not care about this and can use MMIO physical addresses directly from
DTS and do not need any RAM-based storage for it.
For drivers that may need to deal with this situation, a set of
APIs under the DEVICE_MMIO scope are defined, along with a mapping function
:c:func:`device_map`.
Device Model Drivers with one MMIO region
=========================================
The simplest case is for drivers which need to maintain one MMIO region.
These drivers will need to use the ``DEVICE_MMIO_ROM`` and
``DEVICE_MMIO_RAM`` macros in the definitions for their ``config_info``
and ``driver_data`` structures, with initialization of the ``config_info``
from DTS using ``DEVICE_MMIO_ROM_INIT``. A call to ``DEVICE_MMIO_MAP()``
is made within the init function:
.. code-block:: C
struct my_driver_config {
DEVICE_MMIO_ROM; /* Must be first */
...
}
struct my_driver_dev_data {
DEVICE_MMIO_RAM; /* Must be first */
...
}
const static struct my_driver_config my_driver_config_0 = {
DEVICE_MMIO_ROM_INIT(DT_DRV_INST(...)),
...
}
int my_driver_init(const struct device *dev)
{
...
DEVICE_MMIO_MAP(dev, K_MEM_CACHE_NONE);
...
}
int my_driver_some_function(const struct device *dev)
{
...
/* Write some data to the MMIO region */
sys_write32(0xDEADBEEF, DEVICE_MMIO_GET(dev));
...
}
The particular expansion of these macros depends on configuration. On
a device with no MMU or PCI-e, ``DEVICE_MMIO_MAP`` and
``DEVICE_MMIO_RAM`` expand to nothing.
Device Model Drivers with multiple MMIO regions
===============================================
Some drivers may have multiple MMIO regions. In addition, some drivers
may already be implementing a form of inheritance which requires some other
data to be placed first in the ``config_info`` and ``driver_data``
structures.
This can be managed with the ``DEVICE_MMIO_NAMED`` variant macros. These
require that ``DEV_CFG()`` and ``DEV_DATA()`` macros be defined to obtain
a properly typed pointer to the driver's config_info or dev_data structs.
For example:
.. code-block:: C
struct my_driver_config {
...
DEVICE_MMIO_NAMED_ROM(corge);
DEVICE_MMIO_NAMED_ROM(grault);
...
}
struct my_driver_dev_data {
...
DEVICE_MMIO_NAMED_RAM(corge);
DEVICE_MMIO_NAMED_RAM(grault);
...
}
#define DEV_CFG(_dev) \
((const struct my_driver_config *)((_dev)->config))
#define DEV_DATA(_dev) \
((struct my_driver_dev_data *)((_dev)->data))
const static struct my_driver_config my_driver_config_0 = {
...
DEVICE_MMIO_NAMED_ROM_INIT(corge, DT_DRV_INST(...)),
DEVICE_MMIO_NAMED_ROM_INIT(grault, DT_DRV_INST(...)),
...
}
int my_driver_init(const struct device *dev)
{
...
DEVICE_MMIO_NAMED_MAP(dev, corge, K_MEM_CACHE_NONE);
DEVICE_MMIO_NAMED_MAP(dev, grault, K_MEM_CACHE_NONE);
...
}
int my_driver_some_function(const struct device *dev)
{
...
/* Write some data to the MMIO regions */
sys_write32(0xDEADBEEF, DEVICE_MMIO_GET(dev, grault));
sys_write32(0xF0CCAC1A, DEVICE_MMIO_GET(dev, corge));
...
}
Device Model Drivers with multiple MMIO regions in the same DT node
===================================================================
Some drivers may have multiple MMIO regions defined into the same DT device
node using the ``reg-names`` property to differentiate them, for example:
.. code-block:: devicetree
/dts-v1/;
/ {
a-driver@40000000 {
reg = <0x40000000 0x1000>,
<0x40001000 0x1000>;
reg-names = "corge", "grault";
};
};
This can be managed as seen in the previous section but this time using the
``DEVICE_MMIO_NAMED_ROM_INIT_BY_NAME`` macro instead. So the only difference
would be in the driver config struct:
.. code-block:: C
const static struct my_driver_config my_driver_config_0 = {
...
DEVICE_MMIO_NAMED_ROM_INIT_BY_NAME(corge, DT_DRV_INST(...)),
DEVICE_MMIO_NAMED_ROM_INIT_BY_NAME(grault, DT_DRV_INST(...)),
...
}
Drivers that do not use Zephyr Device Model
===========================================
Some drivers or driver-like code may not use Zephyr's device model,
and alternative storage must be arranged for the MMIO data. Examples
of this are timer drivers and interrupt controller code.
This can be managed with the ``DEVICE_MMIO_TOPLEVEL`` set of macros,
for example:
.. code-block:: C
DEVICE_MMIO_TOPLEVEL_STATIC(my_regs, DT_DRV_INST(..));
void some_init_code(...)
{
...
DEVICE_MMIO_TOPLEVEL_MAP(my_regs, K_MEM_CACHE_NONE);
...
}
   void some_function(...)
   {
       ...
       sys_write32(0xDEADBEEF, DEVICE_MMIO_TOPLEVEL_GET(my_regs));
...
}
Drivers that do not use DTS
===========================
Some drivers may not obtain the MMIO physical address from DTS, such as
is the case with PCI-E. In this case the :c:func:`device_map` function
may be used directly:
.. code-block:: C
void some_init_code(...)
{
...
struct pcie_bar mbar;
bool bar_found = pcie_get_mbar(bdf, index, &mbar);
device_map(DEVICE_MMIO_RAM_PTR(dev), mbar.phys_addr, mbar.size, K_MEM_CACHE_NONE);
...
}
For these cases, DEVICE_MMIO_ROM directives may be omitted.
API Reference
**************
.. doxygengroup:: device_model
``` | /content/code_sandbox/doc/kernel/drivers/index.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 4,521 |
```restructuredtext
.. _usermode_overview:
Overview
########
Threat Model
************
User mode threads are considered to be untrusted by Zephyr and are therefore
isolated from other user mode threads and from the kernel. A flawed or
malicious user mode thread cannot leak or modify the private data/resources
of another thread or the kernel, and cannot interfere with or
control another user mode thread or the kernel.
Example use-cases of Zephyr's user mode features:
- The kernel can protect against many unintentional programming errors which
could otherwise silently or spectacularly corrupt the system.
- The kernel can sandbox complex data parsers such as interpreters, network
protocols, and filesystems such that malicious third-party code or data
cannot compromise the kernel or other threads.
- The kernel can support the notion of multiple logical "applications", each
with their own group of threads and private data structures, which are
isolated from each other if one crashes or is otherwise compromised.
Design Goals
============
For threads running in a non-privileged CPU state (hereafter referred to as
'user mode') we aim to protect against the following:
- We prevent access to memory not specifically granted, or incorrect access to
memory that has an incompatible policy, such as attempting to write to a
read-only area.
- Access to thread stack buffers will be controlled with a policy which
partially depends on the underlying memory protection hardware.
- A user thread will by default have read/write access to its own stack
buffer.
- A user thread will never by default have access to user thread stacks
that are not members of the same memory domain.
- A user thread will never by default have access to thread stacks owned
by a supervisor thread, or thread stacks used to handle system call
privilege elevations, interrupts, or CPU exceptions.
- A user thread may have read/write access to the stacks of other user
threads in the same memory domain, depending on hardware.
- On MPU systems, threads may only access their own stack buffer.
- On MMU systems, threads may access any user thread stack in the same
memory domain. Portable code should not assume this.
- By default, program text and read-only data are accessible to all threads
on a read-only basis, kernel-wide. This policy may be adjusted.
- User threads are not granted access by default to any memory other than
what is noted above.
- We prevent use of device drivers or kernel objects not specifically granted,
with the permission granularity on a per object or per driver instance
basis.
- We validate kernel or driver API calls with incorrect parameters that would
otherwise cause a crash or corruption of data structures private to the
kernel. This includes:
- Using the wrong kernel object type.
- Using parameters outside of proper bounds or with nonsensical values.
- Passing memory buffers that the calling thread does not have sufficient
access to read or write, depending on the semantics of the API.
- Use of kernel objects that are not in a proper initialization state.
- We ensure the detection and safe handling of user mode stack overflows.
- We prevent invoking system calls to functions excluded by the kernel
configuration.
- We prevent disabling of or tampering with kernel-defined and
hardware-enforced memory protections.
- We prevent re-entry from user to supervisor mode except through the
kernel-defined system calls and interrupt handlers.
- We prevent the introduction of new executable code by user mode threads,
except to the extent to which this is supported by kernel system calls.
We are specifically not protecting against the following attacks:
- The kernel itself, and any threads that are executing in supervisor mode,
are assumed to be trusted.
- The toolchain and any supplemental programs used by the build system are
assumed to be trusted.
- The kernel build is assumed to be trusted. There is considerable build-time
logic for creating the tables of valid kernel objects, defining system calls,
and configuring interrupts. The .elf binary files that are worked with
during this process are all assumed to be trusted code.
- We can't protect against mistakes made in memory domain configuration done in
kernel mode that exposes private kernel data structures to a user thread. RAM
for kernel objects should always be configured as supervisor-only.
- It is possible to make top-level declarations of user mode threads and
assign them permissions to kernel objects. In general, all C and header
files that are part of the kernel build producing zephyr.elf are assumed to
be trusted.
- We do not protect against denial of service attacks through thread CPU
starvation. Zephyr has no thread priority aging and a user thread of a
particular priority can starve all threads of lower priority, and also other
threads of the same priority if time-slicing is not enabled.
- There are build-time defined limits on how many threads can be active
simultaneously, after which creation of new user threads will fail.
- Stack overflows for threads running in supervisor mode may be caught,
but the integrity of the system cannot be guaranteed.
High-level Policy Details
*************************
Broadly speaking, we accomplish these thread-level memory protection goals
through the following mechanisms:
- Any user thread will only have access to a subset of memory:
typically its stack, program text, read-only data, and any partitions
configured in the :ref:`memory_domain` it belongs to. Access to any other RAM
must be done on the thread's behalf through system calls, or specifically
granted by a supervisor thread using the memory domain APIs. Newly created
threads inherit the memory domain configuration of the parent. Threads may
communicate with each other by having shared membership of the same memory
domains, or via kernel objects such as semaphores and pipes.
- User threads cannot directly access memory belonging to kernel objects.
Although pointers to kernel objects are used to reference them, actual
manipulation of kernel objects is done through system call interfaces. Device
drivers and thread stacks are also considered kernel objects. This ensures
that any data inside a kernel object that is private to the kernel cannot be
tampered with.
- User threads by default have no permission to access any kernel object or
driver other than their own thread object. Such access must be granted by
another thread that is either in supervisor mode or has permission on both
the receiving thread object and the kernel object being granted access to.
The creation of new threads has an option to automatically inherit
permissions of all kernel objects granted to the parent, except the parent
thread itself.
- For performance and footprint reasons Zephyr normally does little or no
parameter error checking for kernel object or device driver APIs. Access from
user mode through system calls involves an extra layer of handler functions,
which are expected to rigorously validate access permissions and type of
the object, check the validity of other parameters through bounds checking or
other means, and verify proper read/write access to any memory buffers
involved.
- Thread stacks are defined in such a way that exceeding the specified stack
space will generate a hardware fault. The way this is done specifically
varies per architecture.
Constraints
***********
All kernel objects, thread stacks, and device driver instances must be defined
at build time if they are to be used from user mode. Dynamic use-cases for
kernel objects will need to go through pre-defined pools of available objects.
There are some constraints if additional application binary data is loaded
for execution after the kernel starts:
- Loaded object code will not be able to define any kernel objects that will be
recognized by the kernel. This code will instead need to use APIs for
requesting kernel objects from pools.
- Similarly, since the loaded object code will not be part of the kernel build
process, this code will not be able to install interrupt handlers,
instantiate device drivers, or define system calls, regardless of what
mode it runs in.
- Loaded object code that does not come from a verified source should always
be entered with the CPU already in user mode.
```
```restructuredtext
.. _usermode_api:
User Mode
#########
Zephyr offers the capability to run threads at a reduced privilege level
which we call user mode. The current implementation is designed for devices
with MPU hardware.
For details on creating threads that run in user mode, please see
:ref:`lifecycle_v2`.
.. toctree::
:maxdepth: 2
overview.rst
memory_domain.rst
kernelobjects.rst
syscalls.rst
mpu_stack_objects.rst
mpu_userspace.rst
```
```restructuredtext
.. _kernelobjects:
Kernel Objects
##############
A kernel object can be one of three classes of data:
* A core kernel object, such as a semaphore, thread, pipe, etc.
* A thread stack, which is an array of :c:struct:`z_thread_stack_element`
and declared with :c:macro:`K_THREAD_STACK_DEFINE()`
* A device driver instance (const struct device) that belongs to one of a defined
set of subsystems
The set of known kernel objects and driver subsystems is defined in
include/kernel.h as :c:enum:`k_objects`.
Kernel objects are completely opaque to user threads. User threads work
with addresses to kernel objects when making API calls, but may never
dereference these addresses; doing so will cause a memory protection fault.
All kernel objects must be placed in memory that is not accessible by
user threads.
Since user threads may not directly manipulate kernel objects, all use of
them must go through system calls. In order to perform a system call on
a kernel object, checks are performed by system call handler functions
that the kernel object address is valid and that the calling thread
has sufficient permissions to work with it.
Permission on an object also has the semantics of a reference to an object.
This is significant for certain object APIs which do temporary allocations,
or objects which themselves have been allocated from a runtime memory pool.
If an object loses all references, two events may happen:
* If the object has an associated cleanup function, the cleanup function
may be called to release any runtime-allocated buffers the object was using.
* If the object itself was dynamically allocated, the memory for the object
will be freed.
Object Placement
****************
Kernel objects that are only used by supervisor threads have no restrictions
and can be located anywhere in the binary, or even declared on stacks. However,
to prevent accidental or intentional corruption by user threads, they must
not be located in any memory that user threads have direct access to.
In order for a static kernel object to be usable by a user thread via system
call APIs, several conditions must be met on how the kernel object is declared:
* The object must be declared as a top-level global at build time, such that it
appears in the ELF symbol table. It is permitted to declare kernel objects
with static scope. The post-build script :ref:`gen_kobject_list.py` scans the
generated ELF file to find kernel objects and places their memory addresses
in a special table of kernel object metadata. Kernel objects may be members
of arrays or embedded within other data structures.
* Kernel objects must be located in memory reserved for the kernel. They
must not be located in any memory partitions that are user-accessible.
* Any memory reserved for a kernel object must be used exclusively for that
object. Kernel objects may not be members of a union data type.
Kernel objects that are found but do not meet the above conditions will not be
included in the generated table that is used to validate kernel object pointers
passed in from user mode.
The debug output of the :ref:`gen_kobject_list.py` script may be useful when
debugging why some object was unexpectedly not being tracked. This
information will be printed if the script is run with the ``--verbose`` flag,
or if the build system is invoked with verbose output.
Dynamic Objects
***************
Kernel objects may also be allocated at runtime if
:kconfig:option:`CONFIG_DYNAMIC_OBJECTS` is enabled. In this case, the
:c:func:`k_object_alloc` API may be used to instantiate an object from
the calling thread's resource pool. Such allocations may be freed in two
ways:
* Supervisor threads may call :c:func:`k_object_free` to force a dynamic
object to be released.
* If an object's references drop to zero (which happens when no threads have
permissions on it) the object will be automatically freed. User threads
may drop their own permission on an object with
:c:func:`k_object_release`, and their permissions are automatically
cleared when a thread terminates. Supervisor threads may additionally
revoke references for another thread using
:c:func:`k_object_access_revoke`.
Because permissions are also used for reference counting, it is important for
supervisor threads to acquire permissions on objects they are using even though
the access control aspects of the permission system are not enforced.
Implementation Details
======================
The :ref:`gen_kobject_list.py` script is a post-build step which finds all the
valid kernel object instances in the binary. It accomplishes this by parsing
the DWARF debug information present in the generated ELF file for the kernel.
Any instances of structs or arrays corresponding to kernel objects that meet
the object placement criteria will have their memory addresses placed in a
special perfect hash table of kernel objects generated by the 'gperf' tool.
When a system call is made and the kernel is presented with a memory address
of what may or may not be a valid kernel object, the address can be validated
with a constant-time lookup in this table.
Drivers are a special case. All drivers are instances of :c:struct:`device`, but
it is important to know what subsystem a driver belongs to so that
incorrect operations, such as calling a UART API on a sensor driver object, can
be prevented. When a device struct is found, its API pointer is examined to
determine what subsystem the driver belongs to.
The table itself maps kernel object memory addresses to instances of
:c:struct:`z_object`, which has all the metadata for that object. This
includes:
* A bitfield indicating permissions on that object. All threads have a
numerical ID assigned to them at build time, used to index the permission
bitfield for an object to see if that thread has permission on it. The size
of this bitfield is controlled by the :kconfig:option:`CONFIG_MAX_THREAD_BYTES`
option and the build system will generate an error if this value is too low.
* A type field indicating what kind of object this is, which is some
instance of :c:enum:`k_objects`.
* A set of flags for that object. This is currently used to track
initialization state and whether an object is public or not.
* An extra data field. The semantics of this field vary by object type, see
the definition of :c:union:`z_object_data`.
Dynamic objects allocated at runtime are tracked in a runtime red/black tree
which is used in parallel to the gperf table when validating object pointers.
Supervisor Thread Access Permission
***********************************
Supervisor threads can access any kernel object. However, permissions for
supervisor threads are still tracked for two reasons:
* If a supervisor thread calls :c:func:`k_thread_user_mode_enter`, the
thread will then run in user mode with any permissions it had been granted
(in many cases, by itself) when it was a supervisor thread.
* If a supervisor thread creates a user thread with the
:c:macro:`K_INHERIT_PERMS` option, the child thread will be granted the
same permissions as the parent thread, except the parent thread object.
User Thread Access Permission
*****************************
By default, when a user thread is created, it will only have access permissions
on its own thread object. Other kernel objects by default are not usable.
Access to them needs to be explicitly or implicitly granted. There are several
ways to do this.
* If a thread is created with the :c:macro:`K_INHERIT_PERMS` option, that thread
will inherit all the permissions of the parent thread, except the parent
thread object.
* A thread that has permission on an object, or is running in supervisor mode,
may grant permission on that object to another thread via the
:c:func:`k_object_access_grant` API. The convenience pseudo-function
:c:func:`k_thread_access_grant` may also be used, which accepts an arbitrary
number of pointers to kernel objects and calls
:c:func:`k_object_access_grant` on each of them. The thread being granted
permission, or the object whose access is being granted, do not need to be
in an initialized state. If the caller is from user mode, the caller must
have permissions on both the kernel object and the target thread object.
* Supervisor threads may declare a particular kernel object to be a public
object, usable by all current and future threads with the
:c:func:`k_object_access_all_grant` API. You must assume that any
untrusted or exploited code will then be able to access the object. Use
this API with caution!
* If a thread was declared statically with :c:macro:`K_THREAD_DEFINE()`,
then :c:macro:`K_THREAD_ACCESS_GRANT()` may be used to grant that thread
access to a set of kernel objects at boot time.
Once a thread has been granted access to an object, such access may be
removed with the :c:func:`k_object_access_revoke` API. This API is not
available to user threads, however user threads may use
:c:func:`k_object_release` to relinquish their own permissions on an
object.
API calls from supervisor mode to set permissions on kernel objects that are
not being tracked by the kernel will be no-ops. Doing the same from user mode
will result in a fatal error for the calling thread.
Objects allocated with :c:func:`k_object_alloc` implicitly grant
permission on the allocated object to the calling thread.
Initialization State
********************
Most operations on kernel objects will fail if the object is considered to be
in an uninitialized state. The appropriate init function for the object must
be called first.
Some objects will be implicitly initialized at boot:
* Kernel objects that were declared with static initialization macros
(such as :c:macro:`K_SEM_DEFINE` for semaphores) will be in an initialized
state at build time.
* Device driver objects are considered initialized after their init function
is run by the kernel early in the boot process.
If a kernel object is initialized with a private static initializer, the object
must have :c:func:`k_object_init` called on it at some point by a supervisor
thread, otherwise the kernel will consider the object uninitialized if accessed
by a user thread. This is very uncommon, typically only for kernel objects that
are embedded within some larger struct and initialized statically.
.. code-block:: c
struct foo {
struct k_sem sem;
...
};
struct foo my_foo = {
.sem = Z_SEM_INITIALIZER(my_foo.sem, 0, 1),
...
};
...
k_object_init(&my_foo.sem);
...
Creating New Kernel Object Types
********************************
When implementing new kernel features or driver subsystems, it may be necessary
to define some new kernel object types. There are different steps needed
for creating core kernel objects and new driver subsystems.
Creating New Core Kernel Objects
================================
* In ``scripts/build/gen_kobject_list.py``, add the name of the struct to the
:py:data:`kobjects` list.
Instances of the new struct should now be tracked.
Creating New Driver Subsystem Kernel Objects
============================================
All driver instances are of type :c:struct:`device`. They are differentiated by
which API struct they are set to.
* In ``scripts/build/gen_kobject_list.py``, add the name of the API struct for the
new subsystem to the :py:data:`subsystems` list.
Driver instances of the new subsystem should now be tracked.
Configuration Options
*********************
Related configuration options:
* :kconfig:option:`CONFIG_USERSPACE`
* :kconfig:option:`CONFIG_MAX_THREAD_BYTES`
API Reference
*************
.. doxygengroup:: usermode_apis
```
```restructuredtext
.. _mpu_stack_objects:
MPU Stack Objects
#################
Thread Stack Creation
*********************
Thread stacks are declared statically with :c:macro:`K_THREAD_STACK_DEFINE()`.
For architectures which utilize memory protection unit (MPU) hardware,
stacks are physically contiguous allocations. This contiguous allocation
has implications for the placement of stacks in memory, as well as the
implementation of other features such as stack protection and userspace. The
implications for placement are directly attributed to the alignment
requirements for MPU regions. This is discussed in the memory placement
section below.
Stack Guards
************
Stack protection mechanisms require hardware support that can restrict access
to memory. Memory protection units can provide this kind of support.
The MPU provides a fixed number of regions. Each region contains information
about the start, end, size, and access attributes to be enforced on that
particular region.
Stack guards are implemented by using a single MPU region and setting the
attributes for that region to not allow write access. If invalid accesses
occur, a fault ensues. The stack guard is defined at the bottom (the lowest
address) of the stack.
Memory Placement
****************
During stack creation, a set of constraints are enforced on the allocation of
memory. These constraints include determining the alignment of the stack and
the correct sizing of the stack. During linking of the binary, these
constraints are used to place the stacks properly.
The main source of the memory constraints is the MPU design for the SoC. The
MPU design may require specific constraints on the region definition. These
can include alignment of beginning and end addresses, sizes of allocations,
or even interactions between overlapping regions.
Some MPUs require that each region be aligned to a power of two. These SoCs
will have :kconfig:option:`CONFIG_MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT` defined.
This means that a 1500 byte stack should be aligned to a 2kB boundary and the
stack size should also be adjusted to 2kB to ensure that nothing else is
placed in the remainder of the region. SoCs which include the unmodified ARM
v7m MPU will have these constraints.
Some ARM MPUs use start and end addresses to define MPU regions and both the
start and end addresses require 32 byte alignment. An example of this kind of
MPU is found in the NXP FRDM K64F.
MPUs may have region priority mechanisms that use the highest priority region
that covers the memory access to determine the enforcement policy. Others may
logically OR regions to determine enforcement policy.
Size and alignment constraints may result in stack allocations being larger
than the requested size. Region priority mechanisms may result in
some added complexity when implementing stack guards.
```
```restructuredtext
.. _memory_domain:
Memory Protection Design
########################
Zephyr's memory protection design is geared towards microcontrollers with MPU
(Memory Protection Unit) hardware. We do support some architectures, such as x86,
which have a paged MMU (Memory Management Unit), but in that case the MMU is
used like an MPU with an identity page table.
All of the discussion below will be using MPU terminology; systems with MMUs
can be considered to have an MPU with an unlimited number of programmable
regions.
There are a few different levels on how memory access is configured when
Zephyr memory protection features are enabled, which we will describe here:
Boot Time Memory Configuration
******************************
This is the configuration of the MPU after the kernel has started up. It should
contain the following:
- Any configuration of memory regions which need to have special caching or
write-back policies for basic hardware and driver function. Note that most
MPUs have the concept of a default memory access policy map, which can be
enabled as a "background" mapping for any area of memory that doesn't
have an MPU region configuring it. It is strongly recommended to use this
to maximize the number of available MPU regions for the end user. On
ARMv7-M/ARMv8-M this is called the System Address Map, other CPUs may
have similar capabilities.
- A read-only, executable region or regions for program text and ro-data, that
is accessible to user mode. This could be further sub-divided into a
read-only region for ro-data, and a read-only, executable region for text, but
this will require an additional MPU region. This is required so that
threads running in user mode can read ro-data and fetch instructions.
- Depending on configuration, user-accessible read-write regions to support
extra features like GCOV, HEP, etc.
Assuming there is a background map which allows supervisor mode to access any
memory it needs, and regions are defined which grant user mode access to
text/ro-data, this is sufficient for the boot time configuration.
Hardware Stack Overflow
***********************
:kconfig:option:`CONFIG_HW_STACK_PROTECTION` is an optional feature which detects stack
buffer overflows when the system is running in supervisor mode. This
catches issues when the entire stack buffer has overflowed, not
individual stack frames; use compiler-assisted
:kconfig:option:`CONFIG_STACK_CANARIES` for that.
Like any crash in supervisor mode, no guarantees can be made about the overall
health of the system after a supervisor mode stack overflow, and any instances
of this should be treated as a serious error. However it's still very useful to
know when these overflows happen, as without robust detection logic the system
will either crash in mysterious ways or behave in an undefined manner when the
stack buffer overflows.
Some systems implement this feature by creating at runtime a 'guard' MPU region
which is set to be read-only and placed either at the beginning of, or
immediately preceding, the supervisor mode stack buffer. If the stack
overflows, an exception will be generated.
This feature is optional and is not required to catch stack overflows in user
mode; disabling this may free 1-2 MPU regions depending on the MPU design.
Other systems may have dedicated CPU support for catching stack overflows
and no extra MPU regions will be required.
Thread Stack
************
Any thread running in user mode will need access to its own stack buffer.
On context switch into a user mode thread, a dedicated MPU region or MMU
page table entries will be programmed with the bounds of the stack buffer.
A thread exceeding its stack buffer will start pushing data onto memory
it doesn't have access to and a memory access violation exception will be
generated.
Note that user threads have access to the stacks of other user threads in
the same memory domain. This is the minimum required for architectures to
support memory domains. An architecture can further restrict access to stacks
so that each user thread only has access to its own stack, if the architecture
advertises this capability via
:kconfig:option:`CONFIG_ARCH_MEM_DOMAIN_SUPPORTS_ISOLATED_STACKS`.
This behavior is enabled by default if supported and can be selectively
disabled via :kconfig:option:`CONFIG_MEM_DOMAIN_ISOLATED_STACKS` if the
architecture supports both operating modes. However, some architectures
may enable this at all times, in which case the option cannot be
disabled. Regardless of these kconfigs, user threads cannot access
the stacks of user threads outside of their own memory domains.
Thread Resource Pools
*********************
A small subset of kernel APIs, invoked as system calls, require heap memory
allocations. This memory is used only by the kernel and is not accessible
directly by user mode. In order to use these system calls, invoking threads
must assign themselves to a resource pool, which is a :c:struct:`k_heap`
object. Memory is drawn from a thread's resource pool using
:c:func:`z_thread_malloc` and freed with :c:func:`k_free`.
The APIs which use resource pools are as follows, with any alternatives
noted for users who do not want heap allocations within their application:
- :c:func:`k_stack_alloc_init` sets up a k_stack with its storage
buffer allocated out of a resource pool instead of a buffer provided by the
user. An alternative is to declare k_stacks that are automatically
initialized at boot with :c:macro:`K_STACK_DEFINE()`, or to initialize the
k_stack in supervisor mode with :c:func:`k_stack_init`.
- :c:func:`k_pipe_alloc_init` sets up a k_pipe object with its
storage buffer allocated out of a resource pool instead of a buffer provided
by the user. An alternative is to declare k_pipes that are automatically
initialized at boot with :c:macro:`K_PIPE_DEFINE()`, or to initialize the
k_pipe in supervisor mode with :c:func:`k_pipe_init`.
- :c:func:`k_msgq_alloc_init` sets up a k_msgq object with its
storage buffer allocated out of a resource pool instead of a buffer provided
by the user. An alternative is to declare a k_msgq that is automatically
initialized at boot with :c:macro:`K_MSGQ_DEFINE()`, or to initialize the
k_msgq in supervisor mode with :c:func:`k_msgq_init`.
- :c:func:`k_poll` when invoked from user mode, needs to make a kernel-side
copy of the provided events array while waiting for an event. This copy is
freed when :c:func:`k_poll` returns for any reason.
- :c:func:`k_queue_alloc_prepend` and :c:func:`k_queue_alloc_append`
allocate a container structure to place the data in, since the internal
bookkeeping information that defines the queue cannot be placed in the
memory provided by the user.
- :c:func:`k_object_alloc` allows for entire kernel objects to be
dynamically allocated at runtime and a usable pointer to them returned to
the caller.
The relevant API is :c:func:`k_thread_heap_assign` which assigns
a k_heap to draw these allocations from for the target thread.
If the system heap is enabled, then the system heap may be used with
:c:func:`k_thread_system_pool_assign`, but it is preferable for different
logical applications running on the system to have their own pools.
Memory Domains
**************
The kernel ensures that any user thread will have access to its own stack
buffer, plus program text and read-only data. The memory domain APIs are the
way to grant access to additional blocks of memory to a user thread.
Conceptually, a memory domain is a collection of some number of memory
partitions. The maximum number of memory partitions in a domain
is limited by the number of available MPU regions. This is why it is important
to minimize the number of boot-time MPU regions.
Memory domains are *not* intended to control access to memory from supervisor
mode. In some cases this may be unavoidable; for example some architectures do
not allow for the definition of regions which are read-only to user mode but
read-write to supervisor mode. A great deal of care must be taken when working
with such regions to not unintentionally cause the kernel to crash when
accessing such a region. Any attempt to use memory domain APIs to control
supervisor mode access is at best undefined behavior; supervisor mode access
policy is only intended to be controlled by boot-time memory regions.
Memory domain APIs are only available to supervisor mode. The only control
user mode has over memory domains is that any user thread's child threads
will automatically become members of the parent's domain.
All threads are members of a memory domain, including supervisor threads
(even though this has no implications on their memory access). There is a
default domain ``k_mem_domain_default`` which will be assigned to threads if
they have not been specifically assigned to a domain, or inherited a memory
domain membership from their parent thread. The main thread starts as a
member of the default domain.
Memory Partitions
=================
Each memory partition consists of a memory address, a size,
and access attributes. It is intended that memory partitions are used to
control access to system memory. Defining memory partitions is subject
to the following constraints:
- The partition must represent a memory region that can be programmed by
the underlying memory management hardware, and needs to conform to any
underlying hardware constraints. For example, many MPU-based systems require
that partitions be sized to some power of two, and aligned to their own
size. For MMU-based systems, the partition must be aligned to a page and
the size some multiple of the page size.
- Partitions within the same memory domain may not overlap each other. There is
no notion of precedence among partitions within a memory domain. Partitions
within a memory domain are assumed to have a higher precedence than any
boot-time memory regions, however whether a memory domain partition can
overlap a boot-time memory region is architecture specific.
- The same partition may be specified in multiple memory domains. For example
there may be a shared memory area that multiple domains grant access to.
- Care must be taken in determining what memory to expose in a partition.
It is not appropriate to provide direct user mode access to any memory
containing private kernel data.
- Memory domain partitions are intended to control access to system RAM.
Configuration of memory partitions which do not correspond to RAM
may not be supported by the architecture; this is true for MMU-based systems.
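The power-of-two sizing constraint mentioned above can be illustrated with a small standalone helper (illustrative only, not a Zephyr API):

.. code-block:: c

   #include <assert.h>
   #include <stdint.h>

   /* Round a requested partition size up to the next power of two, as
    * many MPU-based systems require. Such partitions must then also be
    * aligned to their own (rounded) size.
    */
   static uint32_t round_up_pow2(uint32_t size)
   {
       uint32_t p = 1;

       while (p < size) {
           p <<= 1;
       }
       return p;
   }

   int main(void)
   {
       assert(round_up_pow2(24) == 32);   /* a 24-byte buffer needs a 32-byte region */
       assert(round_up_pow2(32) == 32);   /* already a power of two */
       assert(round_up_pow2(100) == 128);
       return 0;
   }
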
There are two ways to define memory partitions: either manually or
automatically.
Manual Memory Partitions
------------------------
The following code declares a global array ``buf``, and then declares
a read-write partition for it which may be added to a domain:
.. code-block:: c
uint8_t __aligned(32) buf[32];
K_MEM_PARTITION_DEFINE(my_partition, buf, sizeof(buf),
K_MEM_PARTITION_P_RW_U_RW);
This does not scale particularly well when we are trying to contain multiple
objects spread out across several C files into a single partition.
Automatic Memory Partitions
---------------------------
Automatic memory partitions are created by the build system. All globals
which need to be placed inside a partition are tagged with their destination
partition. The build system will then coalesce all of these into a single
contiguous block of memory, zero any BSS variables at boot, and define
a memory partition of appropriate base address and size which contains all
the tagged data.
.. figure:: auto_mem_domain.png
:alt: Automatic Memory Domain build flow
:align: center
Automatic Memory Domain build flow
Automatic memory partitions are only configured as read-write
regions. They are defined with :c:macro:`K_APPMEM_PARTITION_DEFINE()`.
Global variables are then routed to this partition using
:c:macro:`K_APP_DMEM()` for initialized data and :c:macro:`K_APP_BMEM()` for
BSS.
.. code-block:: c
#include <zephyr/app_memory/app_memdomain.h>
/* Declare a k_mem_partition "my_partition" that is read-write to
* user mode. Note that we do not specify a base address or size.
*/
K_APPMEM_PARTITION_DEFINE(my_partition);
/* The global variable var1 will be inside the bounds of my_partition
* and be initialized with 37 at boot.
*/
K_APP_DMEM(my_partition) int var1 = 37;
/* The global variable var2 will be inside the bounds of my_partition
* and be zeroed at boot since K_APP_BMEM() was used, indicating a BSS
* variable.
*/
K_APP_BMEM(my_partition) int var2;
The build system will ensure that the base address of ``my_partition`` will
be properly aligned, and the total size of the region conforms to the memory
management hardware requirements, adding padding if necessary.
If multiple partitions are being created, a variadic preprocessor macro can be
used as provided in ``app_macro_support.h``:
.. code-block:: c
FOR_EACH(K_APPMEM_PARTITION_DEFINE, part0, part1, part2);
Automatic Partitions for Static Library Globals
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The build-time logic for setting up automatic memory partitions is in
``scripts/build/gen_app_partitions.py``. If a static library is linked into Zephyr,
it is possible to route all the globals in that library to a specific
memory partition with the ``--library`` argument.
For example, if the Newlib C library is enabled, the Newlib globals all need
to be placed in ``z_libc_partition``. The invocation of the script in the
top-level ``CMakeLists.txt`` adds the following:
.. code-block:: none
gen_app_partitions.py ... --library libc.a z_libc_partition ..
For pre-compiled libraries there is no support for expressing this in the
project-level configuration or build files; the toplevel ``CMakeLists.txt`` must
be edited.
For Zephyr libraries created using ``zephyr_library`` or ``zephyr_library_named``
the ``zephyr_library_app_memory`` function can be used to specify the memory
partition where all globals in the library should be placed.
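A minimal sketch (the library name, source file, and partition are hypothetical placeholders):

.. code-block:: cmake

   zephyr_library_named(my_lib)
   zephyr_library_sources(my_lib.c)
   # Place all globals defined by this library into my_partition.
   zephyr_library_app_memory(my_partition)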
.. _memory_domain_predefined_partitions:
Pre-defined Memory Partitions
-----------------------------
There are a few memory partitions which are pre-defined by the system:
- ``z_malloc_partition`` - This partition contains the system-wide pool of
memory used by libc malloc(). Due to possible starvation issues, it is
not recommended to draw heap memory from a global pool, instead
it is better to define various sys_heap objects and assign them
to specific memory domains.
- ``z_libc_partition`` - Contains globals required by the C library and runtime.
Required when using either the Minimal C library or the Newlib C Library.
Required when :kconfig:option:`CONFIG_STACK_CANARIES` is enabled.
Library-specific partitions are listed in ``include/app_memory/partitions.h``.
For example, to use the MBEDTLS library from user mode, the
``k_mbedtls_partition`` must be added to the domain.
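For example, granting a domain access to the mbedTLS globals could look like this (sketch; ``app0_domain`` is assumed to be a memory domain initialized elsewhere):

.. code-block:: c

   #include <zephyr/app_memory/partitions.h>

   /* Allow user threads in app0_domain to access mbedTLS globals. */
   k_mem_domain_add_partition(&app0_domain, &k_mbedtls_partition);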
Memory Domain Usage
===================
Create a Memory Domain
----------------------
A memory domain is defined using a variable of type
:c:struct:`k_mem_domain`. It must then be initialized by calling
:c:func:`k_mem_domain_init`.
The following code defines and initializes an empty memory domain.
.. code-block:: c
struct k_mem_domain app0_domain;
k_mem_domain_init(&app0_domain, 0, NULL);
Add Memory Partitions into a Memory Domain
------------------------------------------
There are two ways to add memory partitions into a memory domain.
This first code sample shows how to add memory partitions while creating
a memory domain.
.. code-block:: c
/* the start address of the MPU region needs to align with its size */
uint8_t __aligned(32) app0_buf[32];
uint8_t __aligned(32) app1_buf[32];
K_MEM_PARTITION_DEFINE(app0_part0, app0_buf, sizeof(app0_buf),
K_MEM_PARTITION_P_RW_U_RW);
K_MEM_PARTITION_DEFINE(app0_part1, app1_buf, sizeof(app1_buf),
K_MEM_PARTITION_P_RW_U_RO);
struct k_mem_partition *app0_parts[] = {
    &app0_part0,
    &app0_part1
};
k_mem_domain_init(&app0_domain, ARRAY_SIZE(app0_parts), app0_parts);
This second code sample shows how to add memory partitions into an initialized
memory domain one by one.
.. code-block:: c
/* the start address of the MPU region needs to align with its size */
uint8_t __aligned(32) app0_buf[32];
uint8_t __aligned(32) app1_buf[32];
K_MEM_PARTITION_DEFINE(app0_part0, app0_buf, sizeof(app0_buf),
K_MEM_PARTITION_P_RW_U_RW);
K_MEM_PARTITION_DEFINE(app0_part1, app1_buf, sizeof(app1_buf),
K_MEM_PARTITION_P_RW_U_RO);
k_mem_domain_add_partition(&app0_domain, &app0_part0);
k_mem_domain_add_partition(&app0_domain, &app0_part1);
.. note::
The maximum number of memory partitions is limited by the maximum
number of MPU regions or the maximum number of MMU tables.
Memory Domain Assignment
------------------------
Any thread may join a memory domain, and any memory domain may have multiple
threads assigned to it. Threads are assigned to memory domains with an API
call:
.. code-block:: c
k_mem_domain_add_thread(&app0_domain, app_thread_id);
If the thread was already a member of some other domain (including the
default domain), it will be removed from it in favor of the new one.
In addition, if a thread is a member of a memory domain, and it creates a
child thread, that thread will belong to the domain as well.
Remove a Memory Partition from a Memory Domain
----------------------------------------------
The following code shows how to remove a memory partition from a memory
domain.
.. code-block:: c
k_mem_domain_remove_partition(&app0_domain, &app0_part1);
The :c:func:`k_mem_domain_remove_partition` API finds the memory partition
that matches the given parameter and removes that partition from the
memory domain.
Available Partition Attributes
------------------------------
When defining a partition, we need to set access permission attributes
for the partition. Since the access control of memory partitions relies on
either an MPU or MMU, the available partition attributes are architecture
dependent.
The complete list of available partition attributes for a specific architecture
is found in the architecture-specific include file
``include/zephyr/arch/<arch name>/arch.h`` (for example, ``include/zephyr/arch/arm/arch.h``).
Some examples of partition attributes are:
.. code-block:: c
/* Denote partition is privileged read/write, unprivileged read/write */
K_MEM_PARTITION_P_RW_U_RW
/* Denote partition is privileged read/write, unprivileged read-only */
K_MEM_PARTITION_P_RW_U_RO
In almost all cases ``K_MEM_PARTITION_P_RW_U_RW`` is the right choice.
Configuration Options
*********************
Related configuration options:
* :kconfig:option:`CONFIG_MAX_DOMAIN_PARTITIONS`
API Reference
*************
The following memory domain APIs are provided by :zephyr_file:`include/zephyr/kernel.h`:
.. doxygengroup:: mem_domain_apis
``` | /content/code_sandbox/doc/kernel/usermode/memory_domain.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 4,152 |
```restructuredtext
.. _mpu_userspace:
MPU Backed Userspace
####################
The MPU backed userspace implementation requires the creation of a secondary
set of stacks. These stacks exist in a 1:1 relationship with each thread stack
defined in the system. The privileged stacks are created as a part of the
build process.
A post-build script :ref:`gen_kobject_list.py` scans the generated
ELF file and finds all of the thread stack objects. A set of privileged
stacks, a lookup table, and a set of helper functions are created and added
to the image.
During the process of dropping a thread to user mode, the privileged stack
information is filled in and later used by the swap and system call
infrastructure to configure the MPU regions properly for the thread stack and
guard (if applicable).
During system calls, the user mode thread's access to the system call and the
passed-in parameters are all validated. The user mode thread is then elevated
to privileged mode, the stack is switched to use the privileged stack, and the
call is made to the specified kernel API. On return from the kernel API, the
thread is set back to user mode and the stack is restored to the user stack.
``` | /content/code_sandbox/doc/kernel/usermode/mpu_userspace.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 263 |
```restructuredtext
.. _cmake_pkg:
Zephyr CMake Package
####################
The Zephyr `CMake package`_ is a convenient way to create a Zephyr-based application.
.. note::
The :ref:`zephyr-app-types` section introduces the application types
used in this page.
The Zephyr CMake package ensures that CMake can automatically select a Zephyr installation to use for building
the application, whether it is a :ref:`Zephyr repository application <zephyr-repo-app>`,
a :ref:`Zephyr workspace application <zephyr-workspace-app>`, or a
:ref:`Zephyr freestanding application <zephyr-freestanding-app>`.
When developing a Zephyr-based application, a developer simply needs to write
``find_package(Zephyr)`` in the beginning of the application :file:`CMakeLists.txt` file.
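For reference, a minimal application :file:`CMakeLists.txt` typically looks like this (sketch; the project name and source file are placeholders):

.. code-block:: cmake

   cmake_minimum_required(VERSION 3.20.0)
   find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE})
   project(my_app)

   target_sources(app PRIVATE src/main.c)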
To use the Zephyr CMake package it must first be exported to the `CMake user package registry`_.
This means creating a reference to the current Zephyr installation inside the
CMake user package registry.
.. tabs::
.. group-tab:: Ubuntu
In Linux, the CMake user package registry is found in:
``~/.cmake/packages/Zephyr``
.. group-tab:: macOS
In macOS, the CMake user package registry is found in:
``~/.cmake/packages/Zephyr``
.. group-tab:: Windows
In Windows, the CMake user package registry is found in:
``HKEY_CURRENT_USER\Software\Kitware\CMake\Packages\Zephyr``
The Zephyr CMake package allows CMake to automatically find a Zephyr base.
One or more Zephyr installations must be exported.
Exporting multiple Zephyr installations may be useful when developing or testing
Zephyr freestanding applications, Zephyr workspace applications with vendor forks, and so on.
Zephyr CMake package export (west)
**********************************
When installing Zephyr using :ref:`west <get_the_code>`, it is recommended
to export Zephyr using ``west zephyr-export``.
.. _zephyr_cmake_package_export:
Zephyr CMake package export (without west)
******************************************
Zephyr CMake package is exported to the CMake user package registry using the following commands:
.. code-block:: bash
cmake -P <PATH-TO-ZEPHYR>/share/zephyr-package/cmake/zephyr_export.cmake
This will export the current Zephyr to the CMake user package registry.
To also export the Zephyr Unittest CMake package, run the following command in addition:
.. code-block:: bash
cmake -P <PATH-TO-ZEPHYR>/share/zephyrunittest-package/cmake/zephyr_export.cmake
.. _zephyr_cmake_package_zephyr_base:
Zephyr Base Environment Setting
*******************************
The Zephyr CMake package search functionality allows for explicitly specifying
a Zephyr base using an environment variable.
To do this, use the following ``find_package()`` syntax:
.. code-block:: cmake
find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE})
This syntax instructs CMake to first search for Zephyr using the Zephyr base environment setting
:envvar:`ZEPHYR_BASE` and then use the normal search paths.
.. _zephyr_cmake_search_order:
Zephyr CMake Package Search Order
*********************************
When Zephyr base environment setting is not used for searching, the Zephyr installation matching
the following criteria will be used:
* A Zephyr repository application will use the Zephyr in which it is located.
For example:
.. code-block:: none
<projects>/zephyr-workspace/zephyr
samples
hello_world
in this example, ``hello_world`` will use ``<projects>/zephyr-workspace/zephyr``.
* A Zephyr workspace application will use the Zephyr that shares the same workspace.
For example:
.. code-block:: none
<projects>/zephyr-workspace
zephyr
...
my_applications
my_first_app
in this example, ``my_first_app`` will use ``<projects>/zephyr-workspace/zephyr`` as this Zephyr
is located in the same workspace as the Zephyr workspace application.
.. note::
The root of a Zephyr workspace is identical to ``west topdir`` if the workspace
was installed using ``west``.
* Zephyr freestanding application will use the Zephyr registered in the CMake user package registry.
For example:
.. code-block:: none
<projects>/zephyr-workspace-1
zephyr (Not exported to CMake)
<projects>/zephyr-workspace-2
zephyr (Exported to CMake)
<home>/app
CMakeLists.txt
prj.conf
src
main.c
in this example, only ``<projects>/zephyr-workspace-2/zephyr`` is exported to the CMake package
registry and therefore this Zephyr will be used by the Zephyr freestanding application
``<home>/app``.
If the user wants to test the application with ``<projects>/zephyr-workspace-1/zephyr``, this can be
done by using the Zephyr Base environment setting, meaning set
``ZEPHYR_BASE=<projects>/zephyr-workspace-1/zephyr``, before
running CMake.
.. note::
The Zephyr package selected on the first CMake invocation will be used for all subsequent
builds. To change the Zephyr package, for example to test the application using Zephyr base
environment setting, then it is necessary to do a pristine build first
(See :ref:`application_rebuild`).
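A pristine build can be requested on the command line; for example, with ``west`` (sketch; the board name and application path are placeholders):

.. code-block:: console

   west build -p always -b <board> path/to/app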
Zephyr CMake Package Version
****************************
When writing an application, it is possible to specify a Zephyr version number ``x.y.z`` that
must be used in order to build the application.
Specifying a version is especially useful for a Zephyr freestanding application as it ensures the
application is built with a minimal Zephyr version.
It also helps CMake to select the correct Zephyr to use for building, when there are multiple
Zephyr installations in the system.
For example:
.. code-block:: cmake
find_package(Zephyr 2.2.0)
project(app)
will require ``app`` to be built with Zephyr 2.2.0 as minimum.
CMake will search all exported candidates to find a Zephyr installation which matches this version
criteria.
Thus it is possible to have multiple Zephyr installations and have CMake automatically select
between them based on the version number provided, see `CMake package version`_ for details.
For example:
.. code-block:: none
<projects>/zephyr-workspace-2.a
zephyr (Exported to CMake)
<projects>/zephyr-workspace-2.b
zephyr (Exported to CMake)
<home>/app
CMakeLists.txt
prj.conf
src
main.c
in this case, there are two released versions of Zephyr, each installed in its own
workspace: workspace ``2.a`` and workspace ``2.b``, corresponding to the Zephyr version.
To ensure ``app`` is built with minimum version ``2.a`` the following ``find_package``
syntax may be used:
.. code-block:: cmake
find_package(Zephyr 2.a)
project(app)
Note that both ``2.a`` and ``2.b`` fulfill this requirement.
CMake also supports the keyword ``EXACT``, to ensure an exact version is used, if that is required.
In this case, the application CMakeLists.txt could be written as:
.. code-block:: cmake
find_package(Zephyr 2.a EXACT)
project(app)
If no Zephyr is found which satisfies the required version, for example when the application specifies
.. code-block:: cmake
find_package(Zephyr 2.z)
project(app)
then an error similar to below will be printed:
.. code-block:: none
Could not find a configuration file for package "Zephyr" that is compatible
with requested version "2.z".
The following configuration files were considered but not accepted:
<projects>/zephyr-workspace-2.a/zephyr/share/zephyr-package/cmake/ZephyrConfig.cmake, version: 2.a.0
<projects>/zephyr-workspace-2.b/zephyr/share/zephyr-package/cmake/ZephyrConfig.cmake, version: 2.b.0
.. note:: It can also be beneficial to specify a version number for Zephyr repository applications
and Zephyr workspace applications. Specifying a version in those cases ensures the
application will only build if the Zephyr repository or workspace is matching.
This can be useful to avoid accidental builds when only part of a workspace has been
updated.
Multiple Zephyr Installations (Zephyr workspace)
************************************************
Testing out a new Zephyr version, while at the same time keeping the existing Zephyr in the
workspace untouched is sometimes beneficial.
Or having both an upstream Zephyr, Vendor specific, and a custom Zephyr in same workspace.
For example:
.. code-block:: none
<projects>/zephyr-workspace
zephyr
zephyr-vendor
zephyr-custom
...
my_applications
my_first_app
in this setup, ``find_package(Zephyr)`` has the following order of precedence for selecting
which Zephyr to use:
* Project name: ``zephyr``
* First project, when Zephyr projects are ordered lexicographically, in this case:
* ``zephyr-custom``
* ``zephyr-vendor``
This means that ``my_first_app`` will use ``<projects>/zephyr-workspace/zephyr``.
It is possible to specify a Zephyr preference list in the application.
A Zephyr preference list can be specified as:
.. code-block:: cmake
set(ZEPHYR_PREFER "zephyr-custom" "zephyr-vendor")
find_package(Zephyr)
project(my_first_app)
the ``ZEPHYR_PREFER`` is a list, allowing for multiple Zephyrs.
If a Zephyr is specified in the list, but not found in the system, it is simply ignored and
``find_package(Zephyr)`` will continue to the next candidate.
This allows for the temporary creation of a new Zephyr release to be tested, without touching
the current Zephyr. When testing is done, the ``zephyr-test`` folder can simply be removed.
Such a CMakeLists.txt could look as:
.. code-block:: cmake
set(ZEPHYR_PREFER "zephyr-test")
find_package(Zephyr)
project(my_first_app)
.. _cmake_build_config_package:
Zephyr Build Configuration CMake packages
*****************************************
There are two Zephyr Build configuration packages which provide control over the build
settings in Zephyr in a more generic way. These packages are:
* **ZephyrBuildConfiguration**: Applies to all Zephyr applications in the workspace
* **ZephyrAppConfiguration**: Applies only to the application you are currently building
They are similar to the per-user :file:`.zephyrrc` file that can be used to set :ref:`env_vars`,
but they set CMake variables instead. They also allow you to automatically share the build
configuration among all users through the project repository, and they enable more advanced
use cases, such as loading additional CMake boilerplate code.
The Zephyr Build Configuration CMake packages will be loaded in the Zephyr boilerplate code after
initial properties and ``ZEPHYR_BASE`` has been defined, but before CMake code execution. The
ZephyrBuildConfiguration is included first and ZephyrAppConfiguration afterwards. That means the
application-specific package could override the workspace settings, if needed.
This allows the Zephyr Build Configuration CMake packages to setup or extend properties such as:
``DTS_ROOT``, ``BOARD_ROOT``, ``TOOLCHAIN_ROOT`` / other toolchain setup, fixed overlays, and any
other property that can be controlled. It also allows inclusion of additional boilerplate code.
To provide a ZephyrBuildConfiguration or ZephyrAppConfiguration, create
:file:`ZephyrBuildConfig.cmake` and/or :file:`ZephyrAppConfig.cmake` respectively and place them
in the appropriate location. The CMake ``find_package`` mechanism will search for these files with
the steps below. Other default CMake package search paths and hints are disabled and there is no
version checking implemented for these packages. This also means that these packages cannot be
installed in the CMake package registry. The search steps are:
1. If ``ZephyrBuildConfiguration_ROOT``, or ``ZephyrAppConfiguration_ROOT`` respectively, is set,
search within this prefix path. If a matching file is found, execute this file. If no matching
file is found, go to step 2.
2. Search within ``${ZEPHYR_BASE}/../*``, or ``${APPLICATION_SOURCE_DIR}`` respectively. If a
matching file is found, execute this file. If no matching file is found, abort the search.
It is recommended to place the files in the default paths from step 2, but with the
``<PackageName>_ROOT`` variables you have the flexibility to place them anywhere. This is
especially necessary for freestanding applications, for which the default path to
ZephyrBuildConfiguration usually does not work. In this case the ``<PackageName>_ROOT`` variables
can be set on the CMake command line, **before** ``find_package(Zephyr ...)``, as environment
variable or from a CMake cache initialization file with the ``-C`` command line option.
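For example, the prefix could be passed on the CMake command line (sketch; the paths are placeholders):

.. code-block:: console

   cmake -B build -DZephyrBuildConfiguration_ROOT=/path/to/config-prefix path/to/app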
.. note:: The ``<PackageName>_ROOT`` variables, as well as the default paths, are just the prefixes
to the search path. These prefixes get combined with additional path suffixes, which together
form the actual search path. Any combination that honors the
`CMake package search procedure`_ is valid and will work.
If you want to completely disable the search for these packages, you can use the special CMake
``CMAKE_DISABLE_FIND_PACKAGE_<PackageName>`` variable for that. Just set
``CMAKE_DISABLE_FIND_PACKAGE_ZephyrBuildConfiguration`` or
``CMAKE_DISABLE_FIND_PACKAGE_ZephyrAppConfiguration`` to ``TRUE`` to disable the package.
An example folder structure could look like this:
.. code-block:: none
<projects>/zephyr-workspace
zephyr
...
manifest repo (can be named anything)
cmake/ZephyrBuildConfig.cmake
...
zephyr application
share/zephyrapp-package/cmake/ZephyrAppConfig.cmake
A sample :file:`ZephyrBuildConfig.cmake` can be seen below.
.. code-block:: cmake
# ZephyrBuildConfig.cmake sample code
# To ensure final path is absolute and does not contain ../.. in variable.
get_filename_component(APPLICATION_PROJECT_DIR
${CMAKE_CURRENT_LIST_DIR}/../../..
ABSOLUTE
)
# Add this project to list of board roots
list(APPEND BOARD_ROOT ${APPLICATION_PROJECT_DIR})
# Default to GNU Arm Embedded toolchain if no toolchain is set
if(NOT DEFINED ENV{ZEPHYR_TOOLCHAIN_VARIANT})
set(ZEPHYR_TOOLCHAIN_VARIANT gnuarmemb)
find_program(GNU_ARM_GCC arm-none-eabi-gcc)
if(NOT ${GNU_ARM_GCC} STREQUAL GNU_ARM_GCC-NOTFOUND)
# The toolchain root is located above the path to the compiler.
get_filename_component(GNUARMEMB_TOOLCHAIN_PATH ${GNU_ARM_GCC}/../.. ABSOLUTE)
endif()
endif()
Zephyr CMake package source code
********************************
The Zephyr CMake package source code in
:zephyr_file:`share/zephyr-package/cmake` and
:zephyr_file:`share/zephyrunittest-package/cmake` contains the CMake config
package which is used by the CMake ``find_package`` function.
It also contains code for exporting Zephyr as a CMake config package.
The following is an overview of the files in these directories:
:file:`ZephyrConfigVersion.cmake`
The Zephyr package version file. This file is called by CMake to determine
if this installation fulfils the requirements specified by the user when calling
``find_package(Zephyr ...)``. It is also responsible for detection of Zephyr
repository or workspace only installations.
:file:`ZephyrUnittestConfigVersion.cmake`
Same responsibility as ``ZephyrConfigVersion.cmake``, but for unit tests.
Includes ``ZephyrConfigVersion.cmake``.
:file:`ZephyrConfig.cmake`
The Zephyr package file. This file is called by CMake for the package
which fulfils the requirements specified by the user when calling
``find_package(Zephyr ...)``. This file is responsible for sourcing the
boilerplate code.
:file:`ZephyrUnittestConfig.cmake`
Same responsibility as ``ZephyrConfig.cmake``, but for unit tests.
Includes ``ZephyrConfig.cmake``.
:file:`zephyr_package_search.cmake`
Common file used for detection of Zephyr repository and workspace candidates.
Used by ``ZephyrConfigVersion.cmake`` and ``ZephyrConfig.cmake`` for common code.
:file:`zephyr_export.cmake`
See :ref:`zephyr_cmake_package_export`.
.. _CMake package: path_to_url
.. _CMake user package registry: path_to_url#user-package-registry
.. _CMake package version: path_to_url#version-selection
.. _CMake package search procedure: path_to_url#search-procedure
``` | /content/code_sandbox/doc/build/zephyr_cmake_package.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 3,966 |
```restructuredtext
.. _build_overview:
Build and Configuration Systems
###############################
.. toctree::
:maxdepth: 1
cmake/index.rst
dts/index
kconfig/index.rst
snippets/index.rst
zephyr_cmake_package.rst
sysbuild/index.rst
version/index.rst
flashing/index.rst
``` | /content/code_sandbox/doc/build/index.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 81 |
```restructuredtext
.. _syscalls:
System Calls
############
User threads run with a reduced set of privileges compared to supervisor
threads: certain CPU instructions may not be used, and they have access to
only a limited part of the memory map. System calls allow user threads to
perform privileged operations not directly available to them.
When defining system calls, it is very important to ensure that access to the
API's private data is done exclusively through system call interfaces.
Private kernel data should never be made available to user mode threads
directly. For example, the ``k_queue`` APIs were intentionally not made
available as they store bookkeeping information about the queue directly
in the queue buffers which are visible from user mode.
APIs that allow the user to register callback functions that run in
supervisor mode should never be exposed as system calls. Reserve these
for supervisor-mode access only.
This section describes how to declare new system calls and discusses a few
implementation details relevant to them.
Components
**********
All system calls have the following components:
* A **C prototype** prefixed with :c:macro:`__syscall` for the API. It
will be declared in some header under ``include/`` or in another
``SYSCALL_INCLUDE_DIRS`` directory. This prototype is never implemented
manually, instead it gets created by the :ref:`gen_syscalls.py` script.
What gets generated is an inline function which either calls the
implementation function directly (if called from supervisor mode) or goes
through privilege elevation and validation steps (if called from user
mode).
* An **implementation function**, which is the real implementation of the
system call. The implementation function may assume that all parameters
passed in have been validated if it was invoked from user mode.
* A **verification function**, which wraps the implementation function
and does validation of all the arguments passed in.
* An **unmarshalling function**, which is an automatically generated
handler that must be included by user source code.
C Prototype
***********
The C prototype represents how the API is invoked from either user or
supervisor mode. For example, to initialize a semaphore:
.. code-block:: c
__syscall void k_sem_init(struct k_sem *sem, unsigned int initial_count,
unsigned int limit);
The :c:macro:`__syscall` attribute is very special. To the C compiler, it
simply expands to 'static inline'. However, to the post-build
:ref:`parse_syscalls.py` script, it indicates that this API is a system call.
The :ref:`parse_syscalls.py` script does some parsing of the function prototype,
to determine the data types of its return value and arguments, and has some
limitations:
* Array arguments must be passed in as pointers, not arrays. For example,
``int foo[]`` or ``int foo[12]`` is not allowed, but should instead be
expressed as ``int *foo``.
* Function pointers horribly confuse the limited parser. The workaround is
to typedef them first, and then express in the argument list in terms
of that typedef.
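As a sketch with hypothetical names, the typedef workaround looks like this (note that an API registering a supervisor-mode callback should not actually be a system call; the example only illustrates the parser limitation):

.. code-block:: c

   /* The parser cannot handle a raw function pointer parameter such as:
    *   __syscall void foo_set_handler(void (*handler)(int));
    * Typedef the function pointer type first, then use the typedef:
    */
   typedef void (*foo_handler_t)(int);

   __syscall void foo_set_handler(foo_handler_t handler);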
* :c:macro:`__syscall` must be the first thing in the prototype.
The preprocessor is intentionally not used when determining the set of system
calls to generate. However, any generated system calls that don't actually have
a verification function defined (because the related feature is not enabled in
the kernel configuration) will instead point to a special verification for
unimplemented system calls. Data type definitions for APIs should not have
conditional visibility to the compiler.
Any header file that declares system calls must include a special generated
header at the very bottom of the header file. This header follows the
naming convention ``syscalls/<name of header file>``. For example, at the
bottom of ``include/sensor.h``:
.. code-block:: c
#include <zephyr/syscalls/sensor.h>
C prototype functions must be declared in one of the directories
listed in the CMake variable ``SYSCALL_INCLUDE_DIRS``. This list
always contains ``APPLICATION_SOURCE_DIR`` when
``CONFIG_APPLICATION_DEFINED_SYSCALL`` is set, or
``${ZEPHYR_BASE}/subsys/testsuite/ztest/include`` when
``CONFIG_ZTEST`` is set. Additional paths can be added to the list
through the CMake command line or in CMake code that is run before
``find_package(Zephyr ...)`` is run. ``${ZEPHYR_BASE}/include``
is always scanned for potential syscall prototypes.
Note that not all syscalls will be included in the final binaries.
CMake functions ``zephyr_syscall_header`` and
``zephyr_syscall_header_ifdef`` are used to specify which header
files contain syscall prototypes where those syscalls must be
present in the final binaries. Note that header files inside
directories listed in CMake variable ``SYSCALL_INCLUDE_DIRS``
will always have their syscalls present in final binaries.
To force all syscalls to be included in the final binaries,
turn on :kconfig:option:`CONFIG_EMIT_ALL_SYSCALLS`.
Invocation Context
==================
Source code that uses system call APIs can be made more efficient if it is
known that all the code inside a particular C file runs exclusively in
user mode, or exclusively in supervisor mode. The system looks for
definitions of the macros :c:macro:`__ZEPHYR_SUPERVISOR__` or
:c:macro:`__ZEPHYR_USER__`; typically these are added to the compiler
flags by the build system for the related files.
* If :kconfig:option:`CONFIG_USERSPACE` is not enabled, all APIs just directly call
the implementation function.
* Otherwise, the default case is to make a runtime check to see if the
processor is currently running in user mode, and either make the system call
or directly call the implementation function as appropriate.
* If :c:macro:`__ZEPHYR_SUPERVISOR__` is defined, then it is assumed that
all the code runs in supervisor mode and all APIs just directly call the
implementation function. If the code was actually running in user mode,
there will be a CPU exception as soon as it tries to do something it isn't
allowed to do.
* If :c:macro:`__ZEPHYR_USER__` is defined, then it is assumed that all the
code runs in user mode and system calls are unconditionally made.
Implementation Details
======================
Declaring an API with :c:macro:`__syscall` causes some code to be generated in
C and header files by the :ref:`gen_syscalls.py` script, all of which can be found in
the build directory under ``include/generated/``:
* The system call is added to the enumerated type of system call IDs,
which is expressed in ``include/generated/zephyr/syscall_list.h``. It is the name
of the API in uppercase, prefixed with ``K_SYSCALL_``.
* An entry for the system call is created in the dispatch table
``_k_syscall_table``, expressed in ``include/generated/zephyr/syscall_dispatch.c``.
* This table only contains syscalls where their corresponding
prototypes are declared in header files when
:kconfig:option:`CONFIG_EMIT_ALL_SYSCALLS` is not enabled:
* Indicated by CMake functions ``zephyr_syscall_header`` and
``zephyr_syscall_header_ifdef``, or
* Under directories specified in CMake variable
``SYSCALL_INCLUDE_DIRS``.
* A weak verification function is declared, which is just an alias of the
'unimplemented system call' verifier. This is necessary since the real
verification function may or may not be built depending on the kernel
configuration. For example, if a user thread makes a sensor subsystem
API call, but the sensor subsystem is not enabled, the weak verifier
will be invoked instead.
* An unmarshalling function is defined in ``include/generated/<name>_mrsh.c``.
The body of the API is created in the generated system header. Using the
example of :c:func:`k_sem_init()`, this API is declared in
``include/zephyr/kernel.h``. At the bottom of ``include/zephyr/kernel.h`` is::
#include <zephyr/syscalls/kernel.h>
Inside this header is the body of :c:func:`k_sem_init()`::
static inline void k_sem_init(struct k_sem * sem, unsigned int initial_count, unsigned int limit)
{
#ifdef CONFIG_USERSPACE
if (z_syscall_trap()) {
arch_syscall_invoke3(*(uintptr_t *)&sem, *(uintptr_t *)&initial_count, *(uintptr_t *)&limit, K_SYSCALL_K_SEM_INIT);
return;
}
compiler_barrier();
#endif
z_impl_k_sem_init(sem, initial_count, limit);
}
This generates an inline function that takes three arguments and has a void
return value. Depending on context it will either directly call the
implementation function or go through a system call elevation. A
prototype for the implementation function is also automatically generated.
The final layer is the invocation of the system call itself. All architectures
implementing system calls must implement the seven inline functions
:c:func:`_arch_syscall_invoke0` through :c:func:`_arch_syscall_invoke6`. These
functions marshal arguments into designated CPU registers and perform the
necessary privilege elevation. Before being passed as system call arguments,
the parameters of the API inline function are cast to ``uintptr_t``, which
matches the register size.
The exception is 64-bit parameters on 32-bit systems: these are split into
their lower and upper halves and passed as two consecutive arguments.
There is always a ``uintptr_t`` return value, which may be ignored if not
needed.
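The 64-bit split can be sketched in plain C. This is a hedged illustration
with hypothetical helper names, not the generated marshalling code:

```c
#include <stdint.h>

/* Sketch: a 64-bit parameter is carried as two consecutive 32-bit
 * register arguments on a 32-bit system, then recombined on the
 * kernel side by the unmarshalling code. */
static uint32_t syscall_arg_lo(uint64_t value)
{
	return (uint32_t)(value & 0xffffffffU);
}

static uint32_t syscall_arg_hi(uint64_t value)
{
	return (uint32_t)(value >> 32);
}

static uint64_t syscall_arg_join(uint32_t lo, uint32_t hi)
{
	return (uint64_t)lo | ((uint64_t)hi << 32);
}
```

Joining the two halves on the kernel side reproduces the original 64-bit
value exactly.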
.. figure:: syscall_flow.png
:alt: System Call execution flow
:width: 80%
:align: center
System Call execution flow
Some system calls may have more than six arguments, but the number of
arguments passed via registers is limited to six for all architectures.
Additional arguments need to be passed in an array in the source memory
space, which must be treated as untrusted memory in the verification
function. This code (packing, unpacking and validation) is generated
automatically as needed in the stub above and in the unmarshalling function.
System calls return a ``uintptr_t`` value, which the wrapper casts to the
return type declared in the API prototype. This means a 64-bit value cannot
be returned directly from a system call to its wrapper on 32-bit systems.
To solve this, the automatically generated wrapper defines a 64-bit
intermediate variable on its stack, which is treated as an **untrusted**
buffer, and passes a pointer to that variable to the system call as a final
argument. Upon return from the system call, the wrapper returns the value
written to that buffer.
The problem does not exist on 64-bit systems, which can return 64-bit
values directly.
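The convention can be sketched in plain C. All names here are hypothetical;
this is not the generated wrapper:

```c
#include <stdint.h>

/* Sketch of the 64-bit return convention on 32-bit systems: the
 * "system call" writes its result through a pointer passed as an
 * extra final argument, and the wrapper returns its stack copy. */
static void fake_syscall_uptime(uint64_t *ret64)
{
	/* Kernel side: the buffer address comes from the caller and
	 * must be treated as untrusted; here we just store a result. */
	*ret64 = 0x0102030405060708ULL;
}

static uint64_t uptime_wrapper(void)
{
	uint64_t ret; /* untrusted intermediate buffer on the stack */

	fake_syscall_uptime(&ret);
	return ret;
}
```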
Implementation Function
***********************
The implementation function is what actually does the work for the API.
Zephyr normally does little to no error checking of arguments, or does this
kind of checking with assertions. When writing the implementation function,
validation of any parameters is optional and should be done with assertions.
All implementation functions must follow the naming convention: the name of
the API prefixed with ``z_impl_``. Implementation functions may be
declared in the same header as the API as a static inline function, or
declared in some C file. No prototype is needed for implementation
functions; these are generated automatically.
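As a hedged illustration with hypothetical names, an API declared as
``__syscall int my_object_read(struct my_object *obj);`` could have an
implementation function like this (in Zephyr, its prototype would be
generated automatically):

```c
/* Plain-C sketch of the z_impl_ naming convention; names are
 * hypothetical, not an actual Zephyr API. */
struct my_object {
	int value;
};

static int z_impl_my_object_read(struct my_object *obj)
{
	/* Implementation functions do the real work; parameter
	 * validation here, if any, would use assertions. */
	return obj->value;
}
```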
Verification Function
*********************
The verification function runs on the kernel side when a user thread makes
a system call. When the user thread makes a software interrupt to elevate to
supervisor mode, the common system call entry point uses the system call ID
provided by the user to look up the appropriate unmarshalling function for that
system call and jump into it. This in turn calls the verification function.
Verification and unmarshalling functions only run when system call APIs are
invoked from user mode. If an API is invoked from supervisor mode, the
implementation is simply called and there is no software trap.
The purpose of the verification function is to validate all the arguments
passed in. This includes:
* Any kernel object pointers provided. For example, the semaphore APIs must
ensure that the semaphore object passed in is a valid semaphore and that
the calling thread has permission on it.
* Any memory buffers passed in from user mode. Checks must be made that the
calling thread has read or write permissions on the provided buffer.
* Any other arguments that have a limited range of valid values.
Verification functions involve a great deal of boilerplate code which has been
made simpler by some macros in :zephyr_file:`include/zephyr/internal/syscall_handler.h`.
Verification functions should be declared using these macros.
Argument Validation
===================
Several macros exist to validate arguments:
* :c:macro:`K_SYSCALL_OBJ()` Checks a memory address to assert that it is
a valid kernel object of the expected type, that the calling thread
has permissions on it, and that the object is initialized.
* :c:macro:`K_SYSCALL_OBJ_INIT()` is the same as
:c:macro:`K_SYSCALL_OBJ()`, except that the provided object may be
uninitialized. This is useful for verifiers of object init functions.
* :c:macro:`K_SYSCALL_OBJ_NEVER_INIT()` is the same as
:c:macro:`K_SYSCALL_OBJ()`, except that the provided object must be
uninitialized. This is not used very often, currently only for
:c:func:`k_thread_create()`.
* :c:macro:`K_SYSCALL_MEMORY_READ()` validates a memory buffer of a particular
size. The calling thread must have read permissions on the entire buffer.
* :c:macro:`K_SYSCALL_MEMORY_WRITE()` is the same as
:c:macro:`K_SYSCALL_MEMORY_READ()` but the calling thread must additionally
have write permissions.
* :c:macro:`K_SYSCALL_MEMORY_ARRAY_READ()` validates an array whose total size
is expressed as separate arguments for the number of elements and the
element size. This macro correctly accounts for multiplication overflow
when computing the total size. The calling thread must have read permissions
on the total size.
* :c:macro:`K_SYSCALL_MEMORY_ARRAY_WRITE()` is the same as
:c:macro:`K_SYSCALL_MEMORY_ARRAY_READ()` but the calling thread must
additionally have write permissions.
* :c:macro:`K_SYSCALL_VERIFY_MSG()` does a runtime check of some boolean
expression which must evaluate to true otherwise the check will fail.
A variant :c:macro:`K_SYSCALL_VERIFY` exists which does not take
a message parameter, instead printing the expression tested if it
fails. The latter should only be used for the most obvious of tests.
* :c:macro:`K_SYSCALL_DRIVER_OP()` checks at runtime if a driver
instance is capable of performing a particular operation. While this
macro can be used by itself, it's mostly a building block for macros
that are automatically generated for every driver subsystem. For
instance, to validate the GPIO driver, one could use the
:c:macro:`K_SYSCALL_DRIVER_GPIO()` macro.
* :c:macro:`K_SYSCALL_SPECIFIC_DRIVER()` is a runtime check to verify that
a provided pointer is a valid instance of a specific device driver, that
the calling thread has permissions on it, and that the driver has been
initialized. It does this by checking the API structure pointer that
is stored within the driver instance and ensuring that it matches the
provided value, which should be the address of the specific driver's
API structure.
If any check fails, the macros will return a nonzero value. The macro
:c:macro:`K_OOPS()` can be used to induce a kernel oops which will kill the
calling thread. This is done instead of returning some error condition to
keep the APIs the same when calling from supervisor mode.
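As a hedged plain-C sketch (not Zephyr's implementation), the
multiplication-overflow accounting that :c:macro:`K_SYSCALL_MEMORY_ARRAY_READ()`
must perform when computing a total buffer size looks like:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Compute nmemb * size for an array buffer check, reporting
 * multiplication overflow instead of silently wrapping. */
static bool array_size_overflows(size_t nmemb, size_t size, size_t *total)
{
	if (size != 0 && nmemb > SIZE_MAX / size) {
		return true; /* nmemb * size would overflow */
	}
	*total = nmemb * size;
	return false;
}
```

If the computed total overflows, the check fails before any memory
permissions are even consulted.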
.. _syscall_verification:
Verifier Definition
===================
All system calls are dispatched to a verifier function with a prefixed
``z_vrfy_`` name based on the system call. They have exactly the same
return type and argument types as the wrapped system call. Their job
is to execute the system call (generally by calling the implementation
function) after having validated all arguments.
The verifier is itself invoked by an automatically generated
unmarshaller function which takes care of unpacking the register
arguments from the architecture layer and casting them to the correct
type. This is defined in a header file that must be included from
user code, generally somewhere after the definition of the verifier in
a translation unit (so that it can be inlined).
For example:
.. code-block:: c
static int z_vrfy_k_sem_take(struct k_sem *sem, int32_t timeout)
{
K_OOPS(K_SYSCALL_OBJ(sem, K_OBJ_SEM));
return z_impl_k_sem_take(sem, timeout);
}
#include <zephyr/syscalls/k_sem_take_mrsh.c>
Verification Memory Access Policies
===================================
Parameters passed to system calls by reference require special handling,
because the value of these parameters can be changed at any time by any
user thread that has access to the memory that parameter points to. If the
kernel makes any logical decisions based on the contents of this memory, this
can open up the kernel to attacks even if checking is done. This is a class
of exploits known as TOCTOU (Time Of Check to Time Of Use).
The proper procedure to mitigate these attacks is to make copies in the
verification function, and only perform parameter checks on the copies, which
user threads never have access to. The implementation functions get passed
the copy and not the original data sent by the user. The
:c:func:`k_usermode_to_copy()` and :c:func:`k_usermode_from_copy()` APIs exist for
this purpose.
There is one exception in place, with respect to large data buffers which are
only used to provide a memory area that is either only written to, or whose
contents are never used for any validation or control flow. This is
discussed further later in this section.
As a first example, consider a parameter which is used as an output parameter
for some integral value:
.. code-block:: c
int z_vrfy_some_syscall(int *out_param)
{
int local_out_param;
int ret;
ret = z_impl_some_syscall(&local_out_param);
K_OOPS(k_usermode_to_copy(out_param, &local_out_param, sizeof(*out_param)));
return ret;
}
Here we have allocated ``local_out_param`` on the stack, passed its address to
the implementation function, and then used :c:func:`k_usermode_to_copy()` to fill
in the memory passed in by the caller.
It might be tempting to do something more concise:
.. code-block:: c
int z_vrfy_some_syscall(int *out_param)
{
K_OOPS(K_SYSCALL_MEMORY_WRITE(out_param, sizeof(*out_param)));
return z_impl_some_syscall(out_param);
}
However, this is unsafe if the implementation ever does any reads to this
memory as part of its logic. For example, it could be used to store some
counter value, and this could be meddled with by user threads that have access
to its memory. It is by far safest for small integral values to do the copying
as shown in the first example.
Some parameters may be input/output. For instance, it's not uncommon to see APIs
which pass in a pointer to some ``size_t`` which is a maximum allowable size,
which is then updated by the implementation to reflect the actual number of
bytes processed. This too should use a stack copy:
.. code-block:: c
int z_vrfy_in_out_syscall(size_t *size_ptr)
{
size_t size;
int ret;
K_OOPS(k_usermode_from_copy(&size, size_ptr, sizeof(size)));
ret = z_impl_in_out_syscall(&size);
K_OOPS(k_usermode_to_copy(size_ptr, &size, sizeof(size)));
return ret;
}
Many system calls pass in structures or even linked data structures. All should
be copied. Typically this is done by allocating copies on the stack:
.. code-block:: c
struct bar {
...
};
struct foo {
...
struct bar *bar_left;
struct bar *bar_right;
};
int z_vrfy_must_alloc(struct foo *foo)
{
int ret;
struct foo foo_copy;
struct bar bar_right_copy;
struct bar bar_left_copy;
K_OOPS(k_usermode_from_copy(&foo_copy, foo, sizeof(*foo)));
K_OOPS(k_usermode_from_copy(&bar_right_copy, foo_copy.bar_right,
sizeof(struct bar)));
foo_copy.bar_right = &bar_right_copy;
K_OOPS(k_usermode_from_copy(&bar_left_copy, foo_copy.bar_left,
sizeof(struct bar)));
foo_copy.bar_left = &bar_left_copy;
return z_impl_must_alloc(&foo_copy);
}
In some cases the amount of data isn't known at compile time or may be too
large to allocate on the stack. In this scenario, it may be necessary to draw
memory from the caller's resource pool via :c:func:`z_thread_malloc()`. This
should always be considered a last resort. Functional safety programming
guidelines heavily discourage use of the heap, and the fact that a resource
pool is used must be clearly documented. Any allocation failure must be
reported to the caller by returning ``-ENOMEM``; :c:macro:`K_OOPS()` should
never be used to check whether a resource allocation succeeded.
.. code-block:: c
struct bar {
...
};
struct foo {
size_t count;
struct bar *bar_list; /* array of struct bar of size count */
};
int z_vrfy_must_alloc(struct foo *foo)
{
int ret;
struct foo foo_copy;
struct bar *bar_list_copy;
size_t bar_list_bytes;
/* Safely copy foo into foo_copy */
K_OOPS(k_usermode_from_copy(&foo_copy, foo, sizeof(*foo)));
/* Bounds check the count member, in the copy we made */
if (foo_copy.count > 32) {
return -EINVAL;
}
/* Allocate RAM for the bar_list, replace the pointer in
* foo_copy */
bar_list_bytes = foo_copy.count * sizeof(struct bar);
bar_list_copy = z_thread_malloc(bar_list_bytes);
if (bar_list_copy == NULL) {
return -ENOMEM;
}
K_OOPS(k_usermode_from_copy(bar_list_copy, foo_copy.bar_list,
bar_list_bytes));
foo_copy.bar_list = bar_list_copy;
ret = z_impl_must_alloc(&foo_copy);
/* All done with the memory, free it and return */
k_free(bar_list_copy);
return ret;
}
Finally, we must consider large data buffers. These represent areas of user
memory which either have data copied out of them, or into them. It is
permitted to pass these pointers to the implementation function directly. The
caller's access to the buffer must still be validated with ``K_SYSCALL_MEMORY`` APIs.
The following constraints need to be met:
* If the buffer is used by the implementation function to write data, such
as data captured from some MMIO region, the implementation function must
only write this data, and never read it.
* If the buffer is used by the implementation function to read data, such
as a block of memory to write to some hardware destination, this data
must be read without any processing. No conditional logic may be driven
by the data buffer's contents. If such logic is required, a copy must be
made.
* The buffer must only be used synchronously with the call. The implementation
must not ever save the buffer address and use it asynchronously, such as
when an interrupt fires.
.. code-block:: c
int z_vrfy_get_data_from_kernel(void *buf, size_t size)
{
K_OOPS(K_SYSCALL_MEMORY_WRITE(buf, size));
return z_impl_get_data_from_kernel(buf, size);
}
Verification Return Value Policies
==================================
When verifying system calls, it's important to note which kinds of verification
failures should propagate a return value to the caller, and which should
simply invoke :c:macro:`K_OOPS()` which kills the calling thread. The current
conventions are as follows:
#. For system calls that are defined but not compiled, invocations of these
missing system calls are routed to :c:func:`handler_no_syscall()` which
invokes :c:macro:`K_OOPS()`.
#. Any invalid access to memory found by the set of ``K_SYSCALL_MEMORY`` APIs,
:c:func:`k_usermode_from_copy()`, :c:func:`k_usermode_to_copy()`
should trigger a :c:macro:`K_OOPS`. This happens when the caller doesn't have
appropriate permissions on the memory buffer or some size calculation
overflowed.
#. Most system calls take kernel object pointers as an argument, checked either
with one of the ``K_SYSCALL_OBJ`` functions, ``K_SYSCALL_DRIVER_nnnnn``, or
manually using :c:func:`k_object_validate()`. These can fail for a variety
of reasons: missing driver API, bad kernel object pointer, wrong kernel
object type, or improper initialization state. These issues should always
invoke :c:macro:`K_OOPS()`.
#. Any error resulting from a failed memory heap allocation, often from
invoking :c:func:`z_thread_malloc()`, should propagate ``-ENOMEM`` to the
caller.
#. General parameter checks should be done in the implementation function,
in most cases using ``CHECKIF()``.
* The behavior of ``CHECKIF()`` depends on the kernel configuration, but if
user mode is enabled, :kconfig:option:`CONFIG_RUNTIME_ERROR_CHECKS` is enforced,
which guarantees that these checks will be made and a return value
propagated.
#. It is totally forbidden for any kind of kernel mode callback function to
be registered from user mode. APIs which simply install callbacks shall not
be exposed as system calls. Some driver subsystem APIs may take optional
function callback pointers. User mode verification functions for these APIs
must enforce that these are NULL and should invoke :c:macro:`K_OOPS()` if
not.
#. Some parameter checks are enforced only from user mode. These should be
checked in the verification function and propagate a return value to the
caller if possible.
There are some known exceptions to these policies currently in Zephyr:
* :c:func:`k_thread_join()` and :c:func:`k_thread_abort()` are no-ops if
the thread object isn't initialized. This is because for threads, the
initialization bit pulls double-duty to indicate whether a thread is
running, cleared upon exit. See #23030.
* :c:func:`k_thread_create()` invokes :c:macro:`K_OOPS()` for parameter
checks, due to a great deal of existing code ignoring the return value.
This will also be addressed by #23030.
* :c:func:`k_thread_abort()` invokes :c:macro:`K_OOPS()` if an essential
thread is aborted, as the function has no return value.
* Various system calls related to logging invoke :c:macro:`K_OOPS()`
when bad parameters are passed in as they do not propagate errors.
Configuration Options
*********************
Related configuration options:
* :kconfig:option:`CONFIG_USERSPACE`
* :kconfig:option:`CONFIG_EMIT_ALL_SYSCALLS`
APIs
****
Helper macros for creating system call verification functions are provided in
:zephyr_file:`include/zephyr/internal/syscall_handler.h`:
* :c:macro:`K_SYSCALL_OBJ()`
* :c:macro:`K_SYSCALL_OBJ_INIT()`
* :c:macro:`K_SYSCALL_OBJ_NEVER_INIT()`
* :c:macro:`K_OOPS()`
* :c:macro:`K_SYSCALL_MEMORY_READ()`
* :c:macro:`K_SYSCALL_MEMORY_WRITE()`
* :c:macro:`K_SYSCALL_MEMORY_ARRAY_READ()`
* :c:macro:`K_SYSCALL_MEMORY_ARRAY_WRITE()`
* :c:macro:`K_SYSCALL_VERIFY_MSG()`
* :c:macro:`K_SYSCALL_VERIFY`
Functions for invoking system calls are defined in
:zephyr_file:`include/zephyr/syscall.h`:
* :c:func:`_arch_syscall_invoke0`
* :c:func:`_arch_syscall_invoke1`
* :c:func:`_arch_syscall_invoke2`
* :c:func:`_arch_syscall_invoke3`
* :c:func:`_arch_syscall_invoke4`
* :c:func:`_arch_syscall_invoke5`
* :c:func:`_arch_syscall_invoke6`
``` | /content/code_sandbox/doc/kernel/usermode/syscalls.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 6,281 |
```restructuredtext
.. _dt_vs_kconfig:
Devicetree versus Kconfig
#########################
Along with devicetree, Zephyr also uses the Kconfig language to configure the
source code. Whether to use devicetree or Kconfig for a particular purpose can
sometimes be confusing. This section should help you decide which one to use.
In short:
* Use devicetree to describe **hardware** and its **boot-time configuration**.
Examples include peripherals on a board, boot-time clock frequencies,
interrupt lines, etc.
* Use Kconfig to configure **software support** to build into the final
image. Examples include whether to add networking support, which drivers are
needed by the application, etc.
In other words, devicetree mainly deals with hardware, and Kconfig with
software.
For example, consider a board containing an SoC with two UART (serial port)
instances.
* The fact that the board has this UART **hardware** is described with two UART
nodes in the devicetree. These provide the UART type (via the ``compatible``
property) and certain settings such as the address range of the hardware
peripheral registers in memory (via the ``reg`` property).
* Additionally, the UART **boot-time configuration** is also described with
devicetree. This could include configuration such as the RX IRQ line's
priority and the UART baud rate. These may be modifiable at runtime, but
their boot-time configuration is described in devicetree.
* Whether or not to include **software support** for UART in the build is
controlled via Kconfig. Applications which do not need to use the UARTs can
remove the driver source code from the build using Kconfig, even though the
board's devicetree still includes UART nodes.
As another example, consider a device with a 2.4GHz, multi-protocol radio
supporting both the Bluetooth Low Energy and 802.15.4 wireless technologies.
* Devicetree should be used to describe the presence of the radio **hardware**,
what driver or drivers it's compatible with, etc.
* **Boot-time configuration** for the radio, such as TX power in dBm, should
also be specified using devicetree.
* Kconfig should determine which **software features** should be built for the
radio, such as selecting a BLE or 802.15.4 protocol stack.
As another example, Kconfig options that formerly enabled a particular
instance of a driver (that is itself enabled by Kconfig) have been
removed. The devices are selected individually using devicetree's
:ref:`status <dt-important-props>` keyword on the corresponding hardware
instance.
There are **exceptions** to these rules:
* Because Kconfig is unable to flexibly control some instance-specific driver
configuration parameters, such as the size of an internal buffer, these
options may be defined in devicetree. However, to make clear that they are
specific to Zephyr drivers and not hardware description or configuration,
these properties should be prefixed with ``zephyr,``,
e.g. ``zephyr,random-mac-address`` in the common Ethernet devicetree
properties.
* Devicetree's ``chosen`` keyword, which allows the user to select a specific
instance of a hardware device to be used for a particular purpose. An example
of this is selecting a particular UART for use as the system's console.
``` | /content/code_sandbox/doc/build/dts/dt-vs-kconfig.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 748 |
```restructuredtext
.. _devicetree-intro:
Introduction to devicetree
##########################
.. tip::
This is a conceptual overview of devicetree and how Zephyr uses it. For
step-by-step guides and examples, see :ref:`dt-howtos`.
The following pages introduce general devicetree concepts and how they apply to
Zephyr.
.. toctree::
:maxdepth: 2
intro-scope-purpose.rst
intro-syntax-structure.rst
intro-input-output.rst
``` | /content/code_sandbox/doc/build/dts/intro.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 115 |
```restructuredtext
.. _dt-howtos:
Devicetree HOWTOs
#################
This page has step-by-step advice for getting things done with devicetree.
.. tip:: See :ref:`dt-trouble` for troubleshooting advice.
.. _get-devicetree-outputs:
Get your devicetree and generated header
****************************************
A board's devicetree (:ref:`BOARD.dts <devicetree-in-out-files>`) pulls in
common node definitions via ``#include`` preprocessor directives. This at least
includes the SoC's ``.dtsi``. One way to figure out the devicetree's contents
is by opening these files, e.g. by looking in
``dts/<ARCH>/<vendor>/<soc>.dtsi``, but this can be time consuming.
If you just want to see the "final" devicetree for your board, build an
application and open the :file:`zephyr.dts` file in the build directory.
.. tip::
You can build :ref:`hello_world` to see the "base" devicetree for your board
without any additional changes from :ref:`overlay files <dt-input-files>`.
For example, using the :ref:`qemu_cortex_m3` board to build :ref:`hello_world`:
.. code-block:: sh
# --cmake-only here just forces CMake to run, skipping the
# build process to save time.
west build -b qemu_cortex_m3 samples/hello_world --cmake-only
You can change ``qemu_cortex_m3`` to match your board.
CMake prints the input and output file locations like this:
.. code-block:: none
-- Found BOARD.dts: .../zephyr/boards/arm/qemu_cortex_m3/qemu_cortex_m3.dts
-- Generated zephyr.dts: .../zephyr/build/zephyr/zephyr.dts
-- Generated devicetree_generated.h: .../zephyr/build/zephyr/include/generated/zephyr/devicetree_generated.h
The :file:`zephyr.dts` file is the final devicetree in DTS format.
The :file:`devicetree_generated.h` file is the corresponding generated header.
See :ref:`devicetree-in-out-files` for details about these files.
.. _dt-get-device:
Get a struct device from a devicetree node
******************************************
When writing Zephyr applications, you'll often want to get a driver-level
:ref:`struct device <device_model_api>` corresponding to a devicetree node.
For example, with this devicetree fragment, you might want the struct device
for ``serial@40002000``:
.. code-block:: devicetree
/ {
soc {
serial0: serial@40002000 {
status = "okay";
current-speed = <115200>;
/* ... */
};
};
aliases {
my-serial = &serial0;
};
chosen {
zephyr,console = &serial0;
};
};
Start by making a :ref:`node identifier <dt-node-identifiers>` for the device
you are interested in. There are different ways to do this; pick whichever one
works best for your requirements. Here are some examples:
.. code-block:: c
/* Option 1: by node label */
#define MY_SERIAL DT_NODELABEL(serial0)
/* Option 2: by alias */
#define MY_SERIAL DT_ALIAS(my_serial)
/* Option 3: by chosen node */
#define MY_SERIAL DT_CHOSEN(zephyr_console)
/* Option 4: by path */
#define MY_SERIAL DT_PATH(soc, serial_40002000)
Once you have a node identifier there are two ways to proceed. One way to get a
device is to use :c:func:`DEVICE_DT_GET`:
.. code-block:: c
const struct device *const uart_dev = DEVICE_DT_GET(MY_SERIAL);
if (!device_is_ready(uart_dev)) {
/* Not ready, do not use */
return -ENODEV;
}
There are variants of :c:func:`DEVICE_DT_GET` such as
:c:func:`DEVICE_DT_GET_OR_NULL`, :c:func:`DEVICE_DT_GET_ONE` or
:c:func:`DEVICE_DT_GET_ANY`. This idiom fetches the device pointer at
build-time, which means there is no runtime penalty. This method is useful if
you want to store the device pointer as configuration data. But because the
device may not be initialized, or may have failed to initialize, you must verify
that the device is ready to be used before passing it to any API functions.
(This check is done for you by :c:func:`device_get_binding`.)
In some situations the device cannot be known at build-time, e.g., if it depends
on user input like in a shell application. In this case you can get the
``struct device`` by combining :c:func:`device_get_binding` with the device
name:
.. code-block:: c
const char *dev_name = /* TODO: insert device name from user */;
const struct device *uart_dev = device_get_binding(dev_name);
You can then use ``uart_dev`` with :ref:`uart_api` API functions like
:c:func:`uart_configure`. Similar code will work for other device types; just
make sure you use the correct API for the device.
If you're having trouble, see :ref:`dt-trouble`. The first thing to check is
that the node has ``status = "okay"``, like this:
.. code-block:: c
#define MY_SERIAL DT_NODELABEL(my_serial)
#if DT_NODE_HAS_STATUS(MY_SERIAL, okay)
const struct device *const uart_dev = DEVICE_DT_GET(MY_SERIAL);
#else
#error "Node is disabled"
#endif
If you see the ``#error`` output, make sure to enable the node in your
devicetree. In some situations your code will compile but it will fail to link
with a message similar to:
.. code-block:: none
...undefined reference to `__device_dts_ord_N'
collect2: error: ld returned 1 exit status
This likely means there's a Kconfig issue preventing the device driver from
being built, resulting in a reference that does not exist. If your code compiles
successfully, the last thing to check is if the device is ready, like this:
.. code-block:: c
if (!device_is_ready(uart_dev)) {
printk("Device not ready\n");
}
If you find that the device is not ready, it likely means that the device's
initialization function failed. Enabling logging or debugging driver code may
help in such situations. Note that you can also use :c:func:`device_get_binding`
to obtain a device reference at runtime. If it returns ``NULL``, it can mean
either that the device's driver failed to initialize or that the device does
not exist.
.. _dts-find-binding:
Find a devicetree binding
*************************
:ref:`dt-bindings` are YAML files which declare what you can do with the nodes
they describe, so it's critical to be able to find them for the nodes you are
using.
If you don't have them already, :ref:`get-devicetree-outputs`. To find a node's
binding, open the generated header file, which starts with a list of nodes in a
block comment:
.. code-block:: c
/*
* [...]
* Nodes in dependency order (ordinal and path):
* 0 /
* 1 /aliases
* 2 /chosen
* 3 /flash@0
* 4 /memory@20000000
* (etc.)
* [...]
*/
Make note of the path to the node you want to find, like ``/flash@0``. Search
for the node's output in the file, which starts with something like this if the
node has a matching binding:
.. code-block:: c
/*
* Devicetree node:
* /flash@0
*
* Binding (compatible = soc-nv-flash):
* $ZEPHYR_BASE/dts/bindings/mtd/soc-nv-flash.yaml
* [...]
*/
See :ref:`missing-dt-binding` for troubleshooting.
.. _set-devicetree-overlays:
Set devicetree overlays
***********************
Devicetree overlays are explained in :ref:`devicetree-intro`. The CMake
variable :makevar:`DTC_OVERLAY_FILE` contains a space- or semicolon-separated
list of overlay files to use. If :makevar:`DTC_OVERLAY_FILE` specifies multiple
files, they are included in that order by the C preprocessor. A file in a
Zephyr module can be referred to by using the escaped module directory
variable, e.g. ``\${ZEPHYR_<module>_MODULE_DIR}/<path-to>/dts.overlay``,
when setting the :makevar:`DTC_OVERLAY_FILE` variable.
You can set :makevar:`DTC_OVERLAY_FILE` to contain exactly the files you want
to use. Here is an :ref:`example <west-building-dtc-overlay-file>` using
``west build``.
If you don't set :makevar:`DTC_OVERLAY_FILE`, the build system will follow
these steps, looking for files in your application configuration directory to
use as devicetree overlays:
#. If the file :file:`socs/<SOC>_<BOARD_QUALIFIERS>.overlay` exists, it will be used.
#. If the file :file:`boards/<BOARD>.overlay` exists, it will be used in addition to the above.
#. If the current board has :ref:`multiple revisions <porting_board_revisions>`
and :file:`boards/<BOARD>_<revision>.overlay` exists, it will be used in addition to the above.
#. If one or more files have been found in the previous steps, the build system
stops looking and just uses those files.
#. Otherwise, if :file:`<BOARD>.overlay` exists, it will be used, and the build
system will stop looking for more files.
#. Otherwise, if :file:`app.overlay` exists, it will be used.
Extra devicetree overlays may be provided using ``EXTRA_DTC_OVERLAY_FILE``.
Unlike ``DTC_OVERLAY_FILE``, setting it still allows the build system to
automatically pick up the devicetree overlays described in the steps above.
The build system appends overlays specified in ``EXTRA_DTC_OVERLAY_FILE``
to the overlays in ``DTC_OVERLAY_FILE`` when processing devicetree overlays.
This means that changes made via ``EXTRA_DTC_OVERLAY_FILE`` have higher
precedence than those made via ``DTC_OVERLAY_FILE``.
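For example, the two variables can be combined on the ``west build`` command
line; the overlay file names here are hypothetical:

```console
west build -b <board> app -- \
    -DDTC_OVERLAY_FILE="base.overlay" \
    -DEXTRA_DTC_OVERLAY_FILE="debug.overlay"
```

In this sketch, ``debug.overlay`` is processed after ``base.overlay``, so any
properties it sets take precedence.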
All configuration files will be taken from the application's configuration
directory except for files with an absolute path that are given with the
``DTC_OVERLAY_FILE`` or ``EXTRA_DTC_OVERLAY_FILE`` argument.
See :ref:`Application Configuration Directory <application-configuration-directory>`
on how the application configuration directory is defined.
Using :ref:`shields` will also add devicetree overlay files.
The :makevar:`DTC_OVERLAY_FILE` value is stored in the CMake cache and used
in successive builds.
The :ref:`build system <build_overview>` prints all the devicetree overlays it
finds in the configuration phase, like this:
.. code-block:: none
-- Found devicetree overlay: .../some/file.overlay
.. _use-dt-overlays:
Use devicetree overlays
***********************
See :ref:`set-devicetree-overlays` for how to add an overlay to the build.
Overlays can override node property values in multiple ways.
For example, if your BOARD.dts contains this node:
.. code-block:: devicetree
/ {
soc {
serial0: serial@40002000 {
status = "okay";
current-speed = <115200>;
/* ... */
};
};
};
These are equivalent ways to override the ``current-speed`` value in an
overlay:
.. Disable syntax highlighting as this construct does not seem supported by pygments
.. code-block:: none
/* Option 1 */
&serial0 {
current-speed = <9600>;
};
/* Option 2 */
&{/soc/serial@40002000} {
current-speed = <9600>;
};
We'll use the ``&serial0`` style for the rest of these examples.
You can add aliases to your devicetree using overlays: an alias is just a
property of the ``/aliases`` node. For example:
.. code-block:: devicetree
/ {
aliases {
my-serial = &serial0;
};
};
Chosen nodes work the same way. For example:
.. code-block:: devicetree
/ {
chosen {
zephyr,console = &serial0;
};
};
To delete a property (in addition to deleting properties in general, this is
how to set a boolean property to false if it's true in BOARD.dts):
.. code-block:: devicetree
&serial0 {
/delete-property/ some-unwanted-property;
};
You can add subnodes using overlays. For example, to configure a SPI or I2C
child device on an existing bus node, do something like this:
.. code-block:: devicetree
/* SPI device example */
&spi1 {
my_spi_device: temp-sensor@0 {
compatible = "...";
label = "TEMP_SENSOR_0";
/* reg is the chip select number, if needed;
* If present, it must match the node's unit address. */
reg = <0>;
/* Configure other SPI device properties as needed.
* Find your device's DT binding for details. */
spi-max-frequency = <4000000>;
};
};
/* I2C device example */
&i2c2 {
my_i2c_device: touchscreen@76 {
compatible = "...";
label = "TOUCHSCREEN";
/* reg is the I2C device address.
* It must match the node's unit address. */
           reg = <0x76>;
/* Configure other I2C device properties as needed.
* Find your device's DT binding for details. */
};
};
Other bus devices can be configured similarly:
- create the device as a subnode of the parent bus
- set its properties according to its binding
Assuming you have a suitable device driver associated with the
``my_spi_device`` and ``my_i2c_device`` compatibles, you should now be able to
enable the driver via Kconfig and :ref:`get the struct device <dt-get-device>`
for your newly added bus node, then use it with that driver API.
.. _dt-create-devices:
Write device drivers using devicetree APIs
******************************************
"Devicetree-aware" :ref:`device drivers <device_model_api>` should create a
``struct device`` for each ``status = "okay"`` devicetree node with a
particular :ref:`compatible <dt-important-props>` (or related set of
compatibles) supported by the driver.
Writing a devicetree-aware driver begins by defining a :ref:`devicetree binding
<dt-bindings>` for the devices supported by the driver. Use existing bindings
from similar drivers as a starting point. A skeletal binding to get started
needs nothing more than this:
.. code-block:: yaml
description: <Human-readable description of your binding>
compatible: "foo-company,bar-device"
include: base.yaml
See :ref:`dts-find-binding` for more advice on locating existing bindings.
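For a slightly fuller sketch, a binding can also declare the properties your
driver will read. The compatible and property below are invented for
illustration; see :ref:`dt-bindings` for the full syntax.

```yaml
# Hypothetical binding for a "foo-company,bar-device" node.
description: Bar device from Foo Company

compatible: "foo-company,bar-device"

include: base.yaml

properties:
  clock-frequency:
    type: int
    required: true
    description: Initial clock frequency in Hz
```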
After writing your binding, your driver C file can then use the devicetree API
to find ``status = "okay"`` nodes with the desired compatible, and instantiate
a ``struct device`` for each one. There are two options for instantiating each
``struct device``: using instance numbers, and using node labels.
In either case:
- Each ``struct device``\ 's name should be set to its devicetree node's
``label`` property. This allows the driver's users to :ref:`dt-get-device` in
the usual way.
- Each device's initial configuration should use values from devicetree
properties whenever practical. This allows users to configure the driver
using :ref:`devicetree overlays <use-dt-overlays>`.
Examples for how to do this follow. They assume you've already implemented the
device-specific configuration and data structures and API functions, like this:
.. code-block:: c
/* my_driver.c */
#include <zephyr/drivers/some_api.h>
/* Define data (RAM) and configuration (ROM) structures: */
struct my_dev_data {
/* per-device values to store in RAM */
};
struct my_dev_cfg {
uint32_t freq; /* Just an example: initial clock frequency in Hz */
/* other configuration to store in ROM */
};
/* Implement driver API functions (drivers/some_api.h callbacks): */
static int my_driver_api_func1(const struct device *dev, uint32_t *foo) { /* ... */ }
static int my_driver_api_func2(const struct device *dev, uint64_t bar) { /* ... */ }
static struct some_api my_api_funcs = {
.func1 = my_driver_api_func1,
.func2 = my_driver_api_func2,
};
.. _dt-create-devices-inst:
Option 1: create devices using instance numbers
===============================================
Use this option, which relies on the :ref:`devicetree-inst-apis`, if possible.
However, these APIs only work when all devicetree nodes with your driver's
``compatible`` are functionally equivalent, and you do not need to distinguish
between them.
To use instance-based APIs, begin by defining ``DT_DRV_COMPAT`` to the
lowercase-and-underscores version of the compatible that the device driver
supports. For example, if your driver's compatible is ``"vnd,my-device"`` in
devicetree, you would define ``DT_DRV_COMPAT`` to ``vnd_my_device`` in your
driver C file:
.. code-block:: c
/*
* Put this near the top of the file. After the includes is a good place.
* (Note that you can therefore run "git grep DT_DRV_COMPAT drivers" in
* the zephyr Git repository to look for example drivers using this style).
*/
#define DT_DRV_COMPAT vnd_my_device
.. important::
As shown, the DT_DRV_COMPAT macro should have neither quotes nor special
characters. Remove quotes and convert special characters to underscores
when creating ``DT_DRV_COMPAT`` from the compatible property.
Finally, define an instantiation macro, which creates each ``struct device``
using instance numbers. Do this after defining ``my_api_funcs``.
.. code-block:: c
/*
* This instantiation macro is named "CREATE_MY_DEVICE".
* Its "inst" argument is an arbitrary instance number.
*
* Put this near the end of the file, e.g. after defining "my_api_funcs".
*/
   #define CREATE_MY_DEVICE(inst)                                     \
           static struct my_dev_data my_data_##inst = {               \
                   /* initialize RAM values as needed */              \
           };                                                         \
           static const struct my_dev_cfg my_cfg_##inst = {           \
                   /* initialize ROM values as needed, e.g.: */       \
                   .freq = DT_INST_PROP(inst, clock_frequency),       \
           };                                                         \
           DEVICE_DT_INST_DEFINE(inst,                                \
                                 my_dev_init_function,                \
                                 NULL,                                \
                                 &my_data_##inst,                     \
                                 &my_cfg_##inst,                      \
                                 MY_DEV_INIT_LEVEL, MY_DEV_INIT_PRIORITY, \
                                 &my_api_funcs);
Notice the use of APIs like :c:func:`DT_INST_PROP` and
:c:func:`DEVICE_DT_INST_DEFINE` to access devicetree node data. These
APIs retrieve data from the devicetree for instance number ``inst`` of
the node with compatible determined by ``DT_DRV_COMPAT``.
Finally, pass the instantiation macro to :c:func:`DT_INST_FOREACH_STATUS_OKAY`:
.. code-block:: c
/* Call the device creation macro for each instance: */
DT_INST_FOREACH_STATUS_OKAY(CREATE_MY_DEVICE)
``DT_INST_FOREACH_STATUS_OKAY`` expands to code which calls
``CREATE_MY_DEVICE`` once for each enabled node with the compatible determined
by ``DT_DRV_COMPAT``. It does not append a semicolon to the end of the
expansion of ``CREATE_MY_DEVICE``, so the macro's expansion must end in a
semicolon or function definition to support multiple devices.
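The expansion mechanics can be demonstrated with a self-contained sketch.
``TOY_INST_FOREACH`` below stands in for the generated
``DT_INST_FOREACH_STATUS_OKAY`` (here hard-coded to pretend instances 0 and 1
are enabled); all ``TOY_*`` names are invented, and only the token-pasting
pattern is the point.

```c
#include <assert.h>

/* Stand-in for the generated foreach macro: pretend instances 0 and 1
 * of our compatible have status "okay".
 */
#define TOY_INST_FOREACH(fn) fn(0) fn(1)

/* Each expansion defines one "device". Note the trailing semicolon in
 * the macro body itself, since TOY_INST_FOREACH does not add one.
 */
#define CREATE_TOY_DEVICE(inst) \
	static const int toy_device_##inst = inst;

TOY_INST_FOREACH(CREATE_TOY_DEVICE)

int toy_sum(void)
{
	/* Both toy_device_0 and toy_device_1 were defined above. */
	return toy_device_0 + toy_device_1;
}
```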
Option 2: create devices using node labels
==========================================
Some device drivers cannot use instance numbers. One example is an SoC
peripheral driver which relies on vendor HAL APIs specialized for individual IP
blocks to implement Zephyr driver callbacks. Cases like this should use
:c:func:`DT_NODELABEL` to refer to individual nodes in the devicetree
representing the supported peripherals on the SoC. The devicetree.h
:ref:`devicetree-generic-apis` can then be used to access node data.
For this to work, your :ref:`SoC's dtsi file <dt-input-files>` must define node
labels like ``mydevice0``, ``mydevice1``, etc. appropriately for the IP blocks
your driver supports. The resulting devicetree usually looks something like
this:
.. code-block:: devicetree
/ {
soc {
mydevice0: dev@0 {
compatible = "vnd,my-device";
};
mydevice1: dev@1 {
compatible = "vnd,my-device";
};
};
};
The driver can use the ``mydevice0`` and ``mydevice1`` node labels in the
devicetree to operate on specific device nodes:
.. code-block:: c
/*
* This is a convenience macro for creating a node identifier for
* the relevant devices. An example use is MYDEV(0) to refer to
* the node with label "mydevice0".
*/
#define MYDEV(idx) DT_NODELABEL(mydevice ## idx)
/*
* Define your instantiation macro; "idx" is a number like 0 for mydevice0
* or 1 for mydevice1. It uses MYDEV() to create the node label from the
* index.
*/
   #define CREATE_MY_DEVICE(idx)                                      \
           static struct my_dev_data my_data_##idx = {                \
                   /* initialize RAM values as needed */              \
           };                                                         \
           static const struct my_dev_cfg my_cfg_##idx = {            \
                   /* initialize ROM values as needed, e.g.: */       \
                   .freq = DT_PROP(MYDEV(idx), clock_frequency),      \
           };                                                         \
           DEVICE_DT_DEFINE(MYDEV(idx),                               \
                            my_dev_init_function,                     \
                            NULL,                                     \
                            &my_data_##idx,                           \
                            &my_cfg_##idx,                            \
                            MY_DEV_INIT_LEVEL, MY_DEV_INIT_PRIORITY,  \
                            &my_api_funcs)
Notice the use of APIs like :c:func:`DT_PROP` and
:c:func:`DEVICE_DT_DEFINE` to access devicetree node data.
Finally, manually detect each enabled devicetree node and use
``CREATE_MY_DEVICE`` to instantiate each ``struct device``:
.. code-block:: c
#if DT_NODE_HAS_STATUS(DT_NODELABEL(mydevice0), okay)
CREATE_MY_DEVICE(0)
#endif
#if DT_NODE_HAS_STATUS(DT_NODELABEL(mydevice1), okay)
CREATE_MY_DEVICE(1)
#endif
Since this style does not use ``DT_INST_FOREACH_STATUS_OKAY()``, the driver
author is responsible for calling ``CREATE_MY_DEVICE()`` for every possible
node, e.g. using knowledge about the peripherals available on supported SoCs.
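The effect of those guards can be sketched in a self-contained way. The
``TOY_*`` macros below are invented stand-ins for the real
``DT_NODE_HAS_STATUS()`` results, pretending ``mydevice0`` is enabled and
``mydevice1`` is not:

```c
#include <assert.h>

/* Invented stand-ins for the generated "node has status okay" results. */
#define TOY_MYDEVICE0_OKAY 1
#define TOY_MYDEVICE1_OKAY 0

/* Each "device" contributes one table entry, mimicking one struct
 * device allocation per enabled node.
 */
#define CREATE_TOY_DEVICE(idx) idx,

static const int toy_devices[] = {
#if TOY_MYDEVICE0_OKAY
	CREATE_TOY_DEVICE(0)
#endif
#if TOY_MYDEVICE1_OKAY
	CREATE_TOY_DEVICE(1)
#endif
};

int toy_device_count(void)
{
	/* Only the enabled node produced an entry. */
	return (int)(sizeof(toy_devices) / sizeof(toy_devices[0]));
}
```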
.. _dt-drivers-that-depend:
Device drivers that depend on other devices
*******************************************
At times, one ``struct device`` depends on another ``struct device`` and
requires a pointer to it. For example, a sensor device might need a pointer to
its SPI bus controller device. Some advice:
- Write your devicetree binding in a way that permits use of
:ref:`devicetree-hw-api` from devicetree.h if possible.
- In particular, for bus devices, your driver's binding should include a
file like :zephyr_file:`dts/bindings/spi/spi-device.yaml` which provides
common definitions for devices addressable via a specific bus. This enables
use of APIs like :c:func:`DT_BUS` to obtain a node identifier for the bus
node. You can then :ref:`dt-get-device` for the bus in the usual way.
Search existing bindings and device drivers for examples.
.. _dt-apps-that-depend:
Applications that depend on board-specific devices
**************************************************
One way to allow application code to run unmodified on multiple boards is to
use a devicetree alias to specify the hardware-specific portions, as is
done in the :zephyr:code-sample:`blinky` sample. The application can then be configured in
:ref:`BOARD.dts <devicetree-in-out-files>` files or via :ref:`devicetree
overlays <use-dt-overlays>`.
``` | /content/code_sandbox/doc/build/dts/howtos.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 5,577 |
```restructuredtext
.. _dt-trouble:
Troubleshooting devicetree
##########################
Here are some tips for fixing misbehaving devicetree related code.
See :ref:`dt-howtos` for other "HOWTO" style information.
.. _dt-trouble-try-pristine:
Try again with a pristine build directory
*****************************************
.. important:: Try this first, before doing anything else.
See :ref:`west-building-pristine` for examples, or just delete the build
directory completely and retry.
This is general advice which is especially applicable to debugging devicetree
issues, because the outputs are created during the CMake configuration phase,
and are not always regenerated when one of their inputs changes.
Make sure <devicetree.h> is included
************************************
Unlike Kconfig symbols, the :file:`devicetree.h` header must be included
explicitly.
Many Zephyr header files rely on information from devicetree, so including some
other API may transitively include :file:`devicetree.h`, but that's not
guaranteed.
undefined reference to ``__device_dts_ord_<N>``
***********************************************
This usually happens on a line like this:
.. code-block:: c
const struct device *dev = DEVICE_DT_GET(NODE_ID);
where ``NODE_ID`` is a valid :ref:`node identifier <dt-node-identifiers>`, but
no device driver has allocated a ``struct device`` for this devicetree node.
You thus get a linker error, because you're asking for a pointer to a device
that isn't defined.
To fix it, you need to make sure that:
1. The node is enabled: the node must have ``status = "okay";``.
(Recall that a missing ``status`` property means the same thing as ``status
= "okay";``; see :ref:`dt-important-props` for more information about
``status``).
2. A device driver responsible for allocating the ``struct device`` is enabled.
That is, the Kconfig option which makes the build system compile the driver
sources into your application needs to be set to ``y``.
(See :ref:`setting_configuration_values` for more information on setting
Kconfig options.)
Below, ``<build>`` means your build directory.
**Making sure the node is enabled**:
To find the devicetree node you need to check, use the number ``<N>`` from the
linker error. Look for this number in the list of nodes at the top of
:file:`<build>/zephyr/include/generated/zephyr/devicetree_generated.h`. For example, if
``<N>`` is 15, and your :file:`devicetree_generated.h` file looks like this,
the node you are interested in is ``/soc/i2c@deadbeef``:
.. code-block:: none
/*
* Generated by gen_defines.py
*
* DTS input file:
* <build>/zephyr/zephyr.dts.pre
*
* Directories with bindings:
* $ZEPHYR_BASE/dts/bindings
*
* Node dependency ordering (ordinal and path):
* 0 /
* 1 /aliases
[...]
* 15 /soc/i2c@deadbeef
[...]
Now look for this node in :file:`<build>/zephyr/zephyr.dts`, which is the final
devicetree for your application build. (See :ref:`get-devicetree-outputs` for
information and examples.)
If the node has ``status = "disabled";`` in :file:`zephyr.dts`, then you need
to enable it by setting ``status = "okay";``, probably by using a devicetree
:ref:`overlay <set-devicetree-overlays>`. For example, if :file:`zephyr.dts`
looks like this:
.. code-block:: DTS
i2c0: i2c@deadbeef {
status = "disabled";
};
Then you should put this into your devicetree overlay and
:ref:`dt-trouble-try-pristine`:
.. code-block:: DTS
&i2c0 {
status = "okay";
};
Make sure that you see ``status = "okay";`` in :file:`zephyr.dts` after you
rebuild.
**Making sure the device driver is enabled**:
The first step is to figure out which device driver is responsible for handling
your devicetree node and allocating devices for it. To do this, you need to
start with the ``compatible`` property in your devicetree node, and find the
driver that allocates ``struct device`` instances for that compatible.
If you're not familiar with how devices are allocated from devicetree nodes
based on compatible properties, the ZDS 2021 talk `A deep dive into the Zephyr
2.5 device model`_ may be a useful place to start, along with the
:ref:`device_model_api` pages. See :ref:`dt-important-props` and the Devicetree
specification for more information about ``compatible``.
.. _A deep dive into the Zephyr 2.5 device model:
path_to_url
There is currently no documentation for what device drivers exist and which
devicetree compatibles they are associated with. You will have to figure this
out by reading the source code:
- Look in :zephyr_file:`drivers` for the appropriate subdirectory that
corresponds to the API your device implements
- Look inside that directory for relevant files until you figure out what the
driver is, or realize there is no such driver.
Often, but not always, you can find the driver by looking for a file that sets
the ``DT_DRV_COMPAT`` macro to match your node's ``compatible`` property,
except lowercased and with special characters converted to underscores. For
example, if your node's compatible is ``vnd,foo-device``, look for a file with this
line:
.. code-block:: C
#define DT_DRV_COMPAT vnd_foo_device
.. important::
This **does not always work** since not all drivers use ``DT_DRV_COMPAT``.
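If the driver does not define ``DT_DRV_COMPAT``, searching for the compatible
string itself is a reasonable fallback. For example, with the hypothetical
compatible ``vnd,foo-device``:

```console
# Run from the zephyr repository root:
git grep -l '"vnd,foo-device"' -- drivers/
```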
If you find a driver, you next need to make sure the Kconfig option that
compiles it is enabled. (If you don't find a driver, and you are sure the
compatible property is correct, then you need to write a driver. Writing
drivers is outside the scope of this documentation page.)
Continuing the above example, if your devicetree node looks like this now:
.. code-block:: DTS
i2c0: i2c@deadbeef {
compatible = "nordic,nrf-twim";
status = "okay";
};
Then you would look inside of :zephyr_file:`drivers/i2c` for the driver file
that handles the compatible ``nordic,nrf-twim``. In this case, that is
:zephyr_file:`drivers/i2c/i2c_nrfx_twim.c`. Notice how even in cases where
``DT_DRV_COMPAT`` is not set, you can use information like driver file names as
clues.
Once you know the driver you want to enable, you need to make sure its Kconfig
option is set to ``y``. You can figure out which Kconfig option is needed by
looking for a line similar to this one in the :file:`CMakeLists.txt` file in
the drivers subdirectory. Continuing the above example,
:zephyr_file:`drivers/i2c/CMakeLists.txt` has a line that looks like this:
.. code-block:: cmake
zephyr_library_sources_ifdef(CONFIG_NRFX_TWIM i2c_nrfx_twim.c)
This means that :kconfig:option:`CONFIG_NRFX_TWIM` must be set to ``y`` in
the :file:`<build>/zephyr/.config` file.
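To quickly check the generated configuration, you can search it directly (the
option name here matches the example above):

```console
grep NRFX_TWIM build/zephyr/.config
```

A line like ``CONFIG_NRFX_TWIM=y`` means the driver will be compiled in.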
If your driver's Kconfig is not set to ``y``, you need to figure out what you
need to do to make that happen. Often, this will happen automatically as soon
as you enable the devicetree node. Otherwise, it is sometimes as simple as
adding a line like this to your application's :file:`prj.conf` file and then
making sure to :ref:`dt-trouble-try-pristine`:
.. code-block:: cfg
CONFIG_FOO=y
where ``CONFIG_FOO`` is the option that :file:`CMakeLists.txt` uses to decide
whether or not to compile the driver.
However, there may be other problems in your way, such as unmet Kconfig
dependencies that you also have to enable before you can enable your driver.
Consult the Kconfig file that defines ``CONFIG_FOO`` (for your value of
``FOO``) for more information.
.. _dt-use-the-right-names:
Make sure you're using the right names
**************************************
Remember that:
- In C/C++, devicetree names must be lowercased and special characters must be
converted to underscores. Zephyr's generated devicetree header has DTS names
converted in this way into the C tokens used by the preprocessor-based
``<devicetree.h>`` API.
- In overlays, use devicetree node and property names the same way they
would appear in any DTS file. Zephyr overlays are just DTS fragments.
For example, if you're trying to **get** the ``clock-frequency`` property of a
node with path ``/soc/i2c@12340000`` in a C/C++ file:
.. code-block:: c
/*
* foo.c: lowercase-and-underscores names
*/
/* Don't do this: */
   #define MY_CLOCK_FREQ DT_PROP(DT_PATH(soc, i2c@12340000), clock-frequency)
/* ^ ^
* @ should be _ - should be _ */
/* Do this instead: */
   #define MY_CLOCK_FREQ DT_PROP(DT_PATH(soc, i2c_12340000), clock_frequency)
/* ^ ^ */
And if you're trying to **set** that property in a devicetree overlay:
.. code-block:: none
/*
* foo.overlay: DTS names with special characters, etc.
*/
/* Don't do this; you'll get devicetree errors. */
   &{/soc/i2c_12340000} {
clock_frequency = <115200>;
};
/* Do this instead. Overlays are just DTS fragments. */
   &{/soc/i2c@12340000} {
clock-frequency = <115200>;
};
Look at the preprocessor output
*******************************
To save preprocessor output files, enable the
:kconfig:option:`CONFIG_COMPILER_SAVE_TEMPS` option. For example, to build
:ref:`hello_world` with west with this option set, use:
.. code-block:: sh
west build -b BOARD samples/hello_world -- -DCONFIG_COMPILER_SAVE_TEMPS=y
This will create a preprocessor output file named :file:`foo.c.i` in the build
directory for each source file :file:`foo.c`.
You can then search for the file in the build directory to see what your
devicetree macros expanded to. For example, on macOS and Linux, using ``find``
to find :file:`main.c.i`:
.. code-block:: sh
$ find build -name main.c.i
build/CMakeFiles/app.dir/src/main.c.i
It's usually easiest to run a style formatter on the results before opening
them. For example, to use ``clang-format`` to reformat the file in place:
.. code-block:: sh
clang-format -i build/CMakeFiles/app.dir/src/main.c.i
You can then open the file in your favorite editor to view the final C results
after preprocessing.
Do not track macro expansion
****************************
Compiler messages for devicetree errors can sometimes be very long. This
typically happens when the compiler prints a message for every step of a
complex macro expansion that has several intermediate expansion steps.
To prevent the compiler from doing this, you can disable the
:kconfig:option:`CONFIG_COMPILER_TRACK_MACRO_EXPANSION` option. This typically
reduces the output to one message per error.
For example, to build :ref:`hello_world` with west and this option disabled,
use:
.. code-block:: sh
west build -b BOARD samples/hello_world -- -DCONFIG_COMPILER_TRACK_MACRO_EXPANSION=n
Validate properties
*******************
If you're getting a compile error reading a node property, check your node
identifier and property. For example, if you get a build error on a line that
looks like this:
.. code-block:: c
int baud_rate = DT_PROP(DT_NODELABEL(my_serial), current_speed);
Try checking the node by adding this to the file and recompiling:
.. code-block:: c
#if !DT_NODE_EXISTS(DT_NODELABEL(my_serial))
#error "whoops"
#endif
If you see the "whoops" error message when you rebuild, the node identifier
isn't referring to a valid node. :ref:`get-devicetree-outputs` and debug from
there.
Some hints for what to check next if you don't see the "whoops" error message:
- did you :ref:`dt-use-the-right-names`?
- does the :ref:`property exist <dt-checking-property-exists>`?
- does the node have a :ref:`matching binding <dt-bindings>`?
- does the binding define the property?
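If you're curious why :c:func:`DT_NODE_EXISTS` can safely expand to ``0`` or
``1`` even for a completely undefined token, it is built on the same
preprocessor trick as Zephyr's ``IS_ENABLED()``. Here is a simplified,
self-contained reimplementation with invented ``TOY_*`` names (the real
implementation is in Zephyr's ``util_macro.h`` header):

```c
#include <assert.h>

/* Simplified IS_ENABLED() trick: a macro defined to 1 expands to 1,
 * and an undefined macro expands to 0, without causing an error.
 */
#define TOY_XXXX1 TOY_YYYY,
#define TOY_IS_ENABLED(config) TOY_IS_ENABLED1(config)
#define TOY_IS_ENABLED1(val) TOY_IS_ENABLED2(TOY_XXXX##val)
#define TOY_IS_ENABLED2(one_or_two_args) TOY_IS_ENABLED3(one_or_two_args 1, 0, 0)
#define TOY_IS_ENABLED3(ignore_this, val, ...) val

/* Pretend the generated header defined this for an existing node: */
#define TOY_NODE_serial1_EXISTS 1
/* ...and defined nothing at all for a nonexistent node. */

int serial1_exists(void) { return TOY_IS_ENABLED(TOY_NODE_serial1_EXISTS); }
int serial9_exists(void) { return TOY_IS_ENABLED(TOY_NODE_serial9_EXISTS); }
```

When the token is defined to ``1``, pasting produces ``TOY_XXXX1``, which
expands to an extra comma and shifts ``1`` into the ``val`` position; when it
is undefined, ``0`` stays in that position instead.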
.. _missing-dt-binding:
Check for missing bindings
**************************
See :ref:`dt-bindings` for information about bindings, and
:ref:`devicetree_binding_index` for information on bindings built into Zephyr.
If the build fails to :ref:`dts-find-binding` for a node, then either the
node's ``compatible`` property is not defined, or its value has no matching
binding. If the property is set, check for typos in its name. In a devicetree
source file, ``compatible`` should look like ``"vnd,some-device"`` --
:ref:`dt-use-the-right-names`.
If your binding file is not under :file:`zephyr/dts`, you may need to set
:ref:`DTS_ROOT <dts_root>`; see :ref:`dt-where-bindings-are-located`.
Errors with DT_INST_() APIs
***************************
If you're using an API like :c:func:`DT_INST_PROP`, you must define
``DT_DRV_COMPAT`` to the lowercase-and-underscores version of the compatible
you are interested in. See :ref:`dt-create-devices-inst`.
``` | /content/code_sandbox/doc/build/dts/troubleshooting.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 3,215 |
```unknown
/*
* This is used multiple times in the documentation.
* If you change it for one example, you could break others, so be careful.
*/
/* start-after-here */
/dts-v1/;
/ {
aliases {
sensor-controller = &i2c1;
};
soc {
i2c1: i2c@40002000 {
compatible = "vnd,soc-i2c";
label = "I2C_1";
reg = <0x40002000 0x1000>;
status = "okay";
clock-frequency = < 100000 >;
};
};
};
``` | /content/code_sandbox/doc/build/dts/main-example.dts | unknown | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 140 |
```restructuredtext
.. _dt-from-c:
Devicetree access from C/C++
############################
This guide describes Zephyr's ``<zephyr/devicetree.h>`` API for reading the
devicetree from C source files. It assumes you're familiar with the concepts in
:ref:`devicetree-intro` and :ref:`dt-bindings`. See :ref:`dt-reference` for
reference material.
A note for Linux developers
***************************
Linux developers familiar with devicetree should be warned that the API
described here differs significantly from how devicetree is used on Linux.
Instead of generating a C header with all the devicetree data which is then
abstracted behind a macro API, the Linux kernel would instead read the
devicetree data structure in its binary form. The binary representation is
parsed at runtime, for example to load and initialize device drivers.
Zephyr does not work this way because the size of the devicetree binary and
associated handling code would be too large to fit comfortably on the
relatively constrained devices Zephyr supports.
.. _dt-node-identifiers:
Node identifiers
****************
To get information about a particular devicetree node, you need a *node
identifier* for it. This is just a C macro that refers to the node.
These are the main ways to get a node identifier:
By path
Use :c:func:`DT_PATH()` along with the node's full path in the devicetree,
starting from the root node. This is mostly useful if you happen to know the
exact node you're looking for.
By node label
Use :c:func:`DT_NODELABEL()` to get a node identifier from a :ref:`node
label <dt-node-labels>`. Node labels are often provided by SoC :file:`.dtsi`
files to give nodes names that match the SoC datasheet, like ``i2c1``,
``spi2``, etc.
By alias
Use :c:func:`DT_ALIAS()` to get a node identifier for a property of the
special ``/aliases`` node. This is sometimes done by applications (like
:zephyr:code-sample:`blinky`, which uses the ``led0`` alias) that need to
refer to *some* device of a particular type ("the board's user LED") but
don't care which one is used.
By instance number
This is done primarily by device drivers, as instance numbers are a way to
refer to individual nodes based on a matching compatible. Get these with
:c:func:`DT_INST()`, but be careful doing so. See below.
By chosen node
Use :c:func:`DT_CHOSEN()` to get a node identifier for ``/chosen`` node
properties.
By parent/child
Use :c:func:`DT_PARENT()` and :c:func:`DT_CHILD()` to get a node identifier
for a parent or child node, starting from a node identifier you already have.
Two node identifiers which refer to the same node are identical and can be used
interchangeably.
.. _dt-node-main-ex:
Here's a DTS fragment for some imaginary hardware we'll return to throughout
this file for examples:
.. literalinclude:: main-example.dts
:language: devicetree
:start-after: start-after-here
Here are a few ways to get node identifiers for the ``i2c@40002000`` node:
- ``DT_PATH(soc, i2c_40002000)``
- ``DT_NODELABEL(i2c1)``
- ``DT_ALIAS(sensor_controller)``
- ``DT_INST(x, vnd_soc_i2c)`` for some unknown number ``x``. See the
:c:func:`DT_INST()` documentation for details.
.. important::
Non-alphanumeric characters like dash (``-``) and the at sign (``@``) in
devicetree names are converted to underscores (``_``). The names in a DTS
are also converted to lowercase.
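This conversion exists because the C devicetree API is built entirely from
preprocessor token pasting. The following self-contained sketch mimics the
scheme with invented ``TOY_*`` macro names (the real generated macros in
:file:`devicetree_generated.h` are named differently):

```c
#include <assert.h>

/* Pretend this line came from a generated header: the node
 * /soc/i2c@40002000 was lowercased and its '@' converted to '_',
 * giving the token soc_i2c_40002000; the property clock-frequency
 * became clock_frequency.
 */
#define TOY_N_soc_i2c_40002000_P_clock_frequency 100000

/* A DT_PROP()-like accessor built purely by token pasting: */
#define TOY_CAT4(a, b, c, d) a##b##c##d
#define TOY_PROP(node, prop) TOY_CAT4(TOY_N_, node, _P_, prop)

int get_clock_frequency(void)
{
	/* i2c@40002000 must be written i2c_40002000 here, and
	 * clock-frequency must be written clock_frequency:
	 */
	return TOY_PROP(soc_i2c_40002000, clock_frequency);
}
```

Because the names are glued together into one C token, any character that is
not legal in an identifier has to be converted before the macros can work.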
.. _node-ids-are-not-values:
Node identifiers are not values
*******************************
There is no way to store one in a variable. You cannot write:
.. code-block:: c
/* These will give you compiler errors: */
void *i2c_0 = DT_INST(0, vnd_soc_i2c);
unsigned int i2c_1 = DT_INST(1, vnd_soc_i2c);
long my_i2c = DT_NODELABEL(i2c1);
If you want something short to save typing, use C macros:
.. code-block:: c
/* Use something like this instead: */
#define MY_I2C DT_NODELABEL(i2c1)
#define INST(i) DT_INST(i, vnd_soc_i2c)
#define I2C_0 INST(0)
#define I2C_1 INST(1)
Property access
***************
The right API to use to read property values depends on the node and property.
- :ref:`dt-checking-property-exists`
- :ref:`simple-properties`
- :ref:`reg-properties`
- :ref:`interrupts-properties`
- :ref:`phandle-properties`
.. _dt-checking-property-exists:
Checking properties and values
==============================
You can use :c:func:`DT_NODE_HAS_PROP()` to check if a node has a property. For
the :ref:`example devicetree <dt-node-main-ex>` above:
.. code-block:: c
DT_NODE_HAS_PROP(DT_NODELABEL(i2c1), clock_frequency) /* expands to 1 */
DT_NODE_HAS_PROP(DT_NODELABEL(i2c1), not_a_property) /* expands to 0 */
.. _simple-properties:
Simple properties
=================
Use ``DT_PROP(node_id, property)`` to read basic integer, boolean, string,
numeric array, and string array properties.
For example, to read the ``clock-frequency`` property's value in the
:ref:`above example <dt-node-main-ex>`:
.. code-block:: c
DT_PROP(DT_PATH(soc, i2c_40002000), clock_frequency) /* This is 100000, */
DT_PROP(DT_NODELABEL(i2c1), clock_frequency) /* and so is this, */
DT_PROP(DT_ALIAS(sensor_controller), clock_frequency) /* and this. */
.. important::
The DTS property ``clock-frequency`` is spelled ``clock_frequency`` in C.
That is, properties also need special characters converted to underscores.
Their names are also forced to lowercase.
Properties with ``string`` and ``boolean`` types work in exactly the same way. The
``DT_PROP()`` macro expands to a string literal in the case of strings, and the
number 0 or 1 in the case of booleans. For example:
.. code-block:: c
#define I2C1 DT_NODELABEL(i2c1)
DT_PROP(I2C1, status) /* expands to the string literal "okay" */
.. note::
Don't use ``DT_NODE_HAS_PROP()`` for boolean properties. Use ``DT_PROP()``
instead, as shown above: it expands to 1 or 0 depending on whether the
property is present or absent.
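
As a sketch of how a boolean property reads (the ``uart0`` node label and
``hw-flow-control`` property here are hypothetical):

.. code-block:: c

   /* Expands to 1 if the hypothetical uart0 node contains 'hw-flow-control;',
    * and to 0 if that property is absent.
    */
   DT_PROP(DT_NODELABEL(uart0), hw_flow_control)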
Properties with type ``array``, ``uint8-array``, and ``string-array`` work
similarly, except ``DT_PROP()`` expands to an array initializer in these cases.
Here is an example devicetree fragment:
.. code-block:: devicetree
foo: foo@1234 {
a = <1000 2000 3000>; /* array */
b = [aa bb cc dd]; /* uint8-array */
c = "bar", "baz"; /* string-array */
};
Its properties can be accessed like this:
.. code-block:: c
#define FOO DT_NODELABEL(foo)
int a[] = DT_PROP(FOO, a); /* {1000, 2000, 3000} */
unsigned char b[] = DT_PROP(FOO, b); /* {0xaa, 0xbb, 0xcc, 0xdd} */
char* c[] = DT_PROP(FOO, c); /* {"bar", "baz"} */
You can use :c:func:`DT_PROP_LEN()` to get logical array lengths in number of
elements.
.. code-block:: c
size_t a_len = DT_PROP_LEN(FOO, a); /* 3 */
size_t b_len = DT_PROP_LEN(FOO, b); /* 4 */
size_t c_len = DT_PROP_LEN(FOO, c); /* 2 */
``DT_PROP_LEN()`` cannot be used with the special ``reg`` or ``interrupts``
properties. These have alternative macros which are described next.
.. _reg-properties:
reg properties
==============
See :ref:`dt-important-props` for an introduction to ``reg``.
Given a node identifier ``node_id``, ``DT_NUM_REGS(node_id)`` is the
total number of register blocks in the node's ``reg`` property.
You **cannot** read register block addresses and lengths with ``DT_PROP(node,
reg)``. Instead, if a node only has one register block, use
:c:func:`DT_REG_ADDR` or :c:func:`DT_REG_SIZE`:
- ``DT_REG_ADDR(node_id)``: the given node's register block address
- ``DT_REG_SIZE(node_id)``: its size
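
For example, assuming a node labeled ``i2c1`` whose ``reg`` property is
``<0x40002000 0x1000>`` (a single register block; these values are a sketch):

.. code-block:: c

   DT_NUM_REGS(DT_NODELABEL(i2c1)) /* 1 */
   DT_REG_ADDR(DT_NODELABEL(i2c1)) /* 0x40002000 */
   DT_REG_SIZE(DT_NODELABEL(i2c1)) /* 0x1000 */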
Use :c:func:`DT_REG_ADDR_BY_IDX` or :c:func:`DT_REG_SIZE_BY_IDX` instead if the
node has multiple register blocks:
- ``DT_REG_ADDR_BY_IDX(node_id, idx)``: address of register block at index
``idx``
- ``DT_REG_SIZE_BY_IDX(node_id, idx)``: size of block at index ``idx``
The ``idx`` argument to these must be an integer literal or a macro that
expands to one without requiring any arithmetic. In particular, ``idx`` cannot
be a variable. This won't work:
.. code-block:: c
/* This will cause a compiler error. */
for (size_t i = 0; i < DT_NUM_REGS(node_id); i++) {
size_t addr = DT_REG_ADDR_BY_IDX(node_id, i);
}
.. _interrupts-properties:
interrupts properties
=====================
See :ref:`dt-important-props` for a brief introduction to ``interrupts``.
Given a node identifier ``node_id``, ``DT_NUM_IRQS(node_id)`` is the total
number of interrupt specifiers in the node's ``interrupts`` property.
The most general purpose API macro for accessing these is
:c:func:`DT_IRQ_BY_IDX`:
.. code-block:: c
DT_IRQ_BY_IDX(node_id, idx, val)
Here, ``idx`` is the logical index into the ``interrupts`` array, i.e. it is
the index of an individual interrupt specifier in the property. The ``val``
argument is the name of a cell within the interrupt specifier. To use this
macro, check the bindings file for the node you are interested in to find the
``val`` names.
Most Zephyr devicetree bindings have a cell named ``irq``, which is the
interrupt number. You can use :c:func:`DT_IRQN` as a convenient way to get a
processed view of this value.
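
As a sketch, assume a node labeled ``i2c1`` whose binding names two interrupt
cells, ``irq`` and ``priority``, and whose devicetree fragment contains
``interrupts = <33 0>;`` (all values here are hypothetical):

.. code-block:: c

   DT_NUM_IRQS(DT_NODELABEL(i2c1))                /* 1 */
   DT_IRQ_BY_IDX(DT_NODELABEL(i2c1), 0, irq)      /* 33 */
   DT_IRQ_BY_IDX(DT_NODELABEL(i2c1), 0, priority) /* 0 */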
.. warning::
Here, "processed" reflects Zephyr's devicetree :ref:`dt-scripts`, which
change the ``irq`` number in :ref:`zephyr.dts <devicetree-in-out-files>` to
handle hardware constraints on some SoCs and in accordance with Zephyr's
multilevel interrupt numbering.
This is currently not very well documented, and you'll need to read the
scripts' source code and existing drivers for more details if you are writing
a device driver.
.. _phandle-properties:
phandle properties
==================
.. note::
See :ref:`dt-phandles` for a detailed guide to phandles.
Property values can refer to other nodes using the ``&another-node`` phandle
syntax introduced in :ref:`dt-writing-property-values`. Properties which
contain phandles have type ``phandle``, ``phandles``, or ``phandle-array`` in
their bindings. We'll call these "phandle properties" for short.
You can convert a phandle to a node identifier using :c:func:`DT_PHANDLE`,
:c:func:`DT_PHANDLE_BY_IDX`, or :c:func:`DT_PHANDLE_BY_NAME`, depending on the
type of property you are working with.
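
For example, given a hypothetical ``sensor`` node with a single-phandle
property ``vin-supply = <&reg5v>;``, a sketch of the conversion is:

.. code-block:: c

   /* Expands to a node identifier for the node referenced by &reg5v. */
   #define SUPPLY_NODE DT_PHANDLE(DT_NODELABEL(sensor), vin_supply)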
One common use case for phandle properties is referring to other hardware in
the tree. In this case, you usually want to convert the devicetree-level
phandle to a Zephyr driver-level :ref:`struct device <device_model_api>`.
See :ref:`dt-get-device` for ways to do that.
Another common use case is accessing specifier values in a phandle array. The
general purpose APIs for this are :c:func:`DT_PHA_BY_IDX` and :c:func:`DT_PHA`.
There are also hardware-specific shorthands like :c:func:`DT_GPIO_CTLR_BY_IDX`,
:c:func:`DT_GPIO_CTLR`,
:c:func:`DT_GPIO_PIN_BY_IDX`, :c:func:`DT_GPIO_PIN`,
:c:func:`DT_GPIO_FLAGS_BY_IDX`, and :c:func:`DT_GPIO_FLAGS`.
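
For example, assuming a hypothetical node labeled ``led`` with
``led-gpios = <&gpio0 13 GPIO_ACTIVE_LOW>;`` and a typical GPIO controller
binding whose specifier cells are named ``pin`` and ``flags``:

.. code-block:: c

   DT_GPIO_PIN(DT_NODELABEL(led), led_gpios) /* 13 */
   DT_PHA(DT_NODELABEL(led), led_gpios, pin) /* also 13 */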
See :c:func:`DT_PHA_HAS_CELL_AT_IDX` and :c:func:`DT_PROP_HAS_IDX` for ways to
check if a specifier value is present in a phandle property.
.. _other-devicetree-apis:
Other APIs
**********
Here are pointers to some other available APIs.
- :c:func:`DT_CHOSEN`, :c:func:`DT_HAS_CHOSEN`: for properties
of the special ``/chosen`` node
- :c:func:`DT_HAS_COMPAT_STATUS_OKAY`, :c:func:`DT_NODE_HAS_COMPAT`: global-
and node-specific tests related to the ``compatible`` property
- :c:func:`DT_BUS`: get a node's bus controller, if there is one
- :c:func:`DT_ENUM_IDX`: for properties whose values are among a fixed list of
choices
- :ref:`devicetree-flash-api`: APIs for managing fixed flash partitions.
Also see :ref:`flash_map_api`, which wraps this in a more user-friendly API.
Device driver conveniences
**************************
Special purpose macros are available for writing device drivers, which usually
rely on :ref:`instance identifiers <dt-node-identifiers>`.
To use these, you must define ``DT_DRV_COMPAT`` to the lowercase-and-underscores
version of the compatible your driver implements support for. This is the same
value you would pass to :c:func:`DT_INST`.
If you do that, you can access the properties of individual instances of your
compatible with less typing, like this:
.. code-block:: c
#include <zephyr/devicetree.h>
#define DT_DRV_COMPAT my_driver_compat
/* This is the same thing as DT_INST(0, my_driver_compat): */
DT_DRV_INST(0)
/*
* This is the same thing as
* DT_PROP(DT_INST(0, my_driver_compat), clock_frequency)
*/
DT_INST_PROP(0, clock_frequency)
See :ref:`devicetree-inst-apis` for a generic API reference.
Hardware specific APIs
**********************
Convenience macros built on top of the above APIs are also defined to help
readability for hardware specific code. See :ref:`devicetree-hw-api` for
details.
Generated macros
****************
While the :file:`zephyr/devicetree.h` API is not generated, it does rely on a
generated C header which is put into every application build directory:
:ref:`devicetree_generated.h <dt-outputs>`. This file contains macros with
devicetree data.
These macros have tricky naming conventions which the :ref:`devicetree_api` API
abstracts away. They should be considered an implementation detail, but it's
useful to understand them since they will frequently be seen in compiler error
messages.
This section contains an Augmented Backus-Naur Form grammar for these
generated macros, with examples and more details in comments. See `RFC 7405`_
(which extends `RFC 5234`_) for a syntax specification.
.. literalinclude:: macros.bnf
:language: abnf
.. _RFC 7405: path_to_url
.. _RFC 5234: path_to_url
``` | /content/code_sandbox/doc/build/dts/api-usage.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 3,598 |
```restructuredtext
.. _dt-writing-bindings:
Rules for upstream bindings
###########################
This section includes general rules for writing bindings that you want to
submit to the upstream Zephyr Project. (You don't need to follow these rules
for bindings you don't intend to contribute to the Zephyr Project, but it's a
good idea.)
Decisions made by the Zephyr devicetree maintainer override the contents of
this section. If that happens, though, please let them know so they can update
this page, or you can send a patch yourself.
.. contents:: Contents
:local:
Always check for existing bindings
**********************************
Zephyr aims for devicetree :ref:`dt-source-compatibility`. Therefore, if there
is an existing binding for your device in an authoritative location, you should
try to replicate its properties when writing a Zephyr binding, and you must
justify any Zephyr-specific divergences.
In particular, this rule applies if:
- There is an existing binding in the mainline Linux kernel. See
:file:`Documentation/devicetree/bindings` in `Linus's tree`_ for existing
bindings and the `Linux devicetree documentation`_ for more information.
- Your hardware vendor provides an official binding outside of the Linux
kernel.
.. _Linus's tree:
path_to_url
.. _Linux devicetree documentation:
path_to_url
General rules
*************
File names
==========
Bindings which match a compatible must have file names based on the compatible.
- For example, a binding for compatible ``vnd,foo`` must be named ``vnd,foo.yaml``.
- If the binding is bus-specific, you can append the bus to the file name;
for example, if the binding YAML has ``on-bus: bar``, you may name the file
``vnd,foo-bar.yaml``.
Recommendations are requirements
================================
All recommendations in :ref:`dt-bindings-default` are requirements when
submitting the binding.
In particular, if you use the ``default:`` feature, you must justify the
value in the property's description.
Descriptions
============
There are only two acceptable ways to write property ``description:``
strings.
If your description is short, it's fine to use this style:
.. code-block:: yaml
description: my short string
If your description is long or spans multiple lines, you must use this
style:
.. code-block:: yaml
description: |
My very long string
goes here.
Look at all these lines!
This ``|`` style prevents YAML parsers from removing the newlines in
multi-line descriptions. This in turn makes these long strings
display properly in the :ref:`devicetree_binding_index`.
Naming conventions
==================
Do not use uppercase letters (``A`` through ``Z``) or underscores (``_``) in
property names. Use lowercase letters (``a`` through ``z``) instead of
uppercase. Use dashes (``-``) instead of underscores. (The one exception to
this rule is if you are replicating a well-established binding from somewhere
like Linux.)
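
For example, a sketch of a conforming property name in a new binding:

.. code-block:: yaml

   properties:
     # Good: lowercase, dash-separated. Names like 'rx_fifo_depth' or
     # 'RX-FIFO-DEPTH' would violate this rule.
     rx-fifo-depth:
       type: int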
Rules for vendor prefixes
*************************
The following general rules apply to vendor prefixes in :ref:`compatible
<dt-important-props>` properties.
- If your device is manufactured by a specific vendor, then its compatible
should have a vendor prefix.
If your binding describes hardware with a well known vendor from the list in
:zephyr_file:`dts/bindings/vendor-prefixes.txt`, you must use that vendor
prefix.
- If your device is not manufactured by a specific hardware vendor, do **not**
invent a vendor prefix. Vendor prefixes are not mandatory parts of compatible
properties, and compatibles should not include them unless they refer to an
actual vendor. There are some exceptions to this rule, but the practice is
strongly discouraged.
- Do not submit additions to Zephyr's :file:`dts/bindings/vendor-prefixes.txt`
file unless you also include users of the new prefix. This means at least a
binding and a devicetree using the vendor prefix, and should ideally include
a device driver handling that compatible.
For custom bindings, you can add a custom
:file:`dts/bindings/vendor-prefixes.txt` file to any directory in your
:ref:`DTS_ROOT <dts_root>`. The devicetree tooling will respect these
prefixes, and will not generate warnings or errors if you use them in your
own bindings or devicetrees.
- We sometimes synchronize Zephyr's vendor-prefixes.txt file with the Linux
kernel's equivalent file; this process is exempt from the previous rule.
- If your binding is describing an abstract class of hardware with Zephyr
specific drivers handling the nodes, it's usually best to use ``zephyr`` as
the vendor prefix. See :ref:`dt_vendor_zephyr` for examples.
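
A custom :file:`vendor-prefixes.txt` file as described above contains one
tab-separated entry per line. A sketch (this prefix and vendor are invented):

.. code-block:: none

   # <vendor-prefix><TAB><vendor name>
   examplecorp	Example Corporation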
.. _dt-bindings-default-rules:
Rules for default values
************************
In any case where ``default:`` is used in a devicetree binding, the
``description:`` for that property **must** explain *why* the value was
selected and any conditions that would make it necessary to provide a different
value. Additionally, if changing one property would require changing another to
create a consistent configuration, then those properties should be made
required.
There is no need to document the default value itself; this is already present
in the :ref:`devicetree_binding_index` output.
There is a risk in using ``default:`` when the value in the binding may be
incorrect for a particular board or hardware configuration. For example,
defaulting the capacity of the connected power cell in a charging IC binding
is likely to be incorrect. For such properties it's better to make the
property ``required: true``, forcing the user to make an explicit choice.
Driver developers should use their best judgment as to whether a value can be
safely defaulted. Candidates for default values include:
- delays that would be different only under unusual conditions
(such as intervening hardware)
- configuration for devices that have a standard initial configuration (such as
a USB audio headset)
- defaults which match the vendor-specified power-on reset value
(as long as they are independent from other properties)
Examples of how to write descriptions according to these rules:
.. code-block:: yaml
properties:
cs-interval:
type: int
default: 0
description: |
Minimum interval between chip select deassertion and assertion.
The default corresponds to the reset value of the register field.
hold-time-ms:
type: int
default: 20
description: |
Amount of time to hold the power enable GPIO asserted before
initiating communication. The default was recommended in the
manufacturer datasheet, and would only change under very
cold temperatures.
Some examples of what **not** to do, and why:
.. code-block:: yaml
properties:
# Description doesn't mention anything about the default
foo:
type: int
default: 1
description: number of foos
# Description mentions the default value instead of why it
# was chosen
bar:
type: int
default: 2
description: bar size; default is 2
# Explanation of the default value is in a comment instead
# of the description. This won't be shown in the bindings index.
baz:
type: int
# This is the recommended value chosen by the manufacturer.
default: 2
description: baz time in milliseconds
The ``zephyr,`` prefix
**********************
You must add this prefix to property names in the following cases:
- Zephyr-specific extensions to bindings we share with upstream Linux. One
example is the ``zephyr,vref-mv`` ADC channel property which is common to ADC
controllers defined in :zephyr_file:`dts/bindings/adc/adc-controller.yaml`.
This channel binding is partially shared with an analogous Linux binding, and
Zephyr-specific extensions are marked as such with the prefix.
- Configuration values that are specific to a Zephyr device driver. One example
is the ``zephyr,lazy-load`` property in the :dtcompatible:`ti,bq274xx`
binding. Though devicetree in general is a hardware description and
configuration language, it is Zephyr's only mechanism for configuring driver
behavior for an individual ``struct device``. Therefore, as a compromise,
we do allow some software configuration in Zephyr's devicetree bindings, as
long as they use this prefix to show that they are Zephyr specific.
You may use the ``zephyr,`` prefix when naming a devicetree compatible that is
specific to Zephyr. One example is
:dtcompatible:`zephyr,ipc-openamp-static-vrings`. In this case, it's permitted
but not required to add the ``zephyr,`` prefix to properties defined in the
binding.
``` | /content/code_sandbox/doc/build/dts/bindings-upstream.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 1,958 |
```restructuredtext
.. _devicetree:
Devicetree
##########
A *devicetree* is a hierarchical data structure primarily used to describe
hardware. Zephyr uses devicetree in two main ways:
- to describe hardware to the :ref:`device_model_api`
- to provide that hardware's initial configuration
This page links to a high level guide on devicetree as well as reference
material.
.. _dt-guide:
Devicetree Guide
****************
The pages in this section are a high-level guide to using devicetree for Zephyr
development.
.. toctree::
:maxdepth: 2
intro.rst
design.rst
bindings.rst
api-usage.rst
phandles.rst
zephyr-user-node.rst
howtos.rst
troubleshooting.rst
dt-vs-kconfig.rst
.. _dt-reference:
Devicetree Reference
********************
These pages contain reference material for Zephyr's devicetree APIs and
built-in bindings.
For the platform-independent details, see the `Devicetree specification`_.
.. _Devicetree specification: path_to_url
.. We use ":glob:" with "*" here to add the generated bindings page.
.. toctree::
:maxdepth: 3
:glob:
api/*
``` | /content/code_sandbox/doc/build/dts/index.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 293 |
```restructuredtext
.. _dt-design:
Design goals
############
Zephyr's use of devicetree has evolved significantly over time, and further
changes are expected. The following are the general design goals, along with
specific examples about how they impact Zephyr's source code, and areas where
more work remains to be done.
Single source for hardware information
**************************************
Zephyr's built-in device drivers and sample applications shall obtain
configurable hardware descriptions from devicetree.
Examples
========
- New device drivers shall use devicetree APIs to determine which :ref:`devices
to create <dt-create-devices>`.
- In-tree sample applications shall use :ref:`aliases <dt-alias-chosen>` to
determine which of multiple possible generic devices of a given type will be
used in the current build. For example, the :zephyr:code-sample:`blinky` sample uses this to
determine the LED to blink.
- Boot-time pin muxing and pin control for new SoCs shall be accomplished via a
devicetree-based pinctrl driver.
Example remaining work
======================
- Zephyr's :ref:`twister_script` currently uses :file:`board.yaml` files to
determine the hardware supported by a board. This should be obtained from
devicetree instead.
- Legacy device drivers currently use Kconfig to determine which instances of a
particular compatible are enabled. This can and should be done with devicetree
overlays instead.
- Board-level documentation still contains tables of hardware support which are
generated and maintained by hand. This can and should be obtained from the
board level devicetree instead.
- Runtime determination of ``struct device`` relationships should be done using
information obtained from devicetree, e.g. for device power management.
.. _dt-source-compatibility:
Source compatibility with other operating systems
*************************************************
Zephyr's devicetree tooling is based on a generic layer which is interoperable
with other devicetree users, such as the Linux kernel.
Zephyr's binding language *semantics* can support Zephyr-specific attributes,
but shall not express Zephyr-specific relationships.
Examples
========
- Zephyr's devicetree source parser, :ref:`dtlib.py <dt-scripts>`, is
source-compatible with other tools like `dtc`_ in both directions:
:file:`dtlib.py` can parse ``dtc`` output, and ``dtc`` can parse
:file:`dtlib.py` output.
- Zephyr's "extended dtlib" library, :file:`edtlib.py`, shall not include
Zephyr-specific features. Its purpose is to provide a higher-level view of the
devicetree for common elements like interrupts and buses.
Only the high-level :file:`gen_defines.py` script, which is built on top of
:file:`edtlib.py`, contains Zephyr-specific knowledge and features.
.. _dtc: path_to_url
Example remaining work
======================
- Zephyr has a custom :ref:`dt-bindings` language *syntax*. While Linux's
dtschema does not yet meet Zephyr's needs, we should try to follow what it is
capable of representing in Zephyr's own bindings.
- Due to inflexibility in the bindings language, Zephyr cannot support the full
set of bindings supported by Linux.
- Devicetree source sharing between Zephyr and Linux is not done.
``` | /content/code_sandbox/doc/build/dts/design.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 762 |
```restructuredtext
.. _dt-bindings-file-syntax:
Devicetree bindings syntax
##########################
This page documents the syntax of Zephyr's bindings format. Zephyr bindings
files are YAML files. A :ref:`simple example <dt-bindings-simple-example>` was
given in the introduction page.
.. contents:: Contents
:local:
:depth: 3
Top level keys
**************
The top level of a bindings file maps keys to values. The top-level keys look
like this:
.. code-block:: yaml
# A high level description of the device the binding applies to:
description: |
This is the Vendomatic company's foo-device.
Descriptions which span multiple lines (like this) are OK,
and are encouraged for complex bindings.
See path_to_url for formatting help.
# You can include definitions from other bindings using this syntax:
include: other.yaml
# Used to match nodes to this binding:
compatible: "manufacturer,foo-device"
properties:
# Requirements for and descriptions of the properties that this
# binding's nodes need to satisfy go here.
child-binding:
# You can constrain the children of the nodes matching this binding
# using this key.
# If the node describes bus hardware, like an SPI bus controller
# on an SoC, use 'bus:' to say which one, like this:
bus: spi
# If the node instead appears as a device on a bus, like an external
# SPI memory chip, use 'on-bus:' to say what type of bus, like this.
# Like 'compatible', this key also influences the way nodes match
# bindings.
on-bus: spi
foo-cells:
# "Specifier" cell names for the 'foo' domain go here; example 'foo'
# values are 'gpio', 'pwm', and 'dma'. See below for more information.
These keys are explained in the following sections.
.. _dt-bindings-description:
Description
***********
A free-form description of node hardware goes here. You can put links to
datasheets or example nodes or properties as well.
.. _dt-bindings-compatible:
Compatible
**********
This key is used to match nodes to this binding as described in
:ref:`dt-binding-compat`. It should look like this in a binding file:
.. code-block:: yaml
# Note the comma-separated vendor prefix and device name
compatible: "manufacturer,device"
This devicetree node would match the above binding:
.. code-block:: devicetree
device {
compatible = "manufacturer,device";
};
Assuming no binding has ``compatible: "manufacturer,device-v2"``, it would also
match this node:
.. code-block:: devicetree
device-2 {
compatible = "manufacturer,device-v2", "manufacturer,device";
};
Each node's ``compatible`` property is tried in order. The first matching
binding is used. The :ref:`on-bus: <dt-bindings-on-bus>` key can be used to
refine the search.
If more than one binding for a compatible is found, an error is raised.
The ``manufacturer`` prefix identifies the device vendor. See
:zephyr_file:`dts/bindings/vendor-prefixes.txt` for a list of accepted vendor
prefixes. The ``device`` part is usually from the datasheet.
Some bindings apply to a generic class of devices which do not have a specific
vendor. In these cases, there is no vendor prefix. One example is the
:dtcompatible:`gpio-leds` compatible which is commonly used to describe board
LEDs connected to GPIOs.
.. _dt-bindings-properties:
Properties
**********
The ``properties:`` key describes properties that nodes which match the binding
contain. For example, a binding for a UART peripheral might look something like
this:
.. code-block:: yaml
compatible: "manufacturer,serial"
properties:
reg:
type: array
description: UART peripheral MMIO register space
required: true
current-speed:
type: int
description: current baud rate
required: true
In this example, a node with compatible ``"manufacturer,serial"`` must contain
a property named ``current-speed``. The property's value must be a single
integer. Similarly, the node must contain a ``reg`` property.
The build system uses bindings to generate C macros for devicetree properties
that appear in DTS files. You can read more about how to get property values in
source code from these macros in :ref:`dt-from-c`. Generally speaking, the
build system only generates macros for properties listed in the ``properties:``
key for the matching binding. Properties not mentioned in the binding are
generally ignored by the build system.
The one exception is that the build system will always generate macros for
standard properties, like :ref:`reg <dt-important-props>`, whose meaning is
defined by the devicetree specification. This happens regardless of whether the
node has a matching binding or not.
Property entry syntax
=====================
Property entries in ``properties:`` are written in this syntax:
.. code-block:: none
<property name>:
required: <true | false>
type: <string | int | boolean | array | uint8-array | string-array |
phandle | phandles | phandle-array | path | compound>
deprecated: <true | false>
default: <default>
description: <description of the property>
enum:
- <item1>
- <item2>
...
- <itemN>
const: <string | int | array | uint8-array | string-array>
specifier-space: <space-name>
.. _dt-bindings-example-properties:
Example property definitions
============================
Here are some more examples.
.. code-block:: yaml
properties:
# Describes a property like 'current-speed = <115200>;'. We pretend that
# it's obligatory for the example node and set 'required: true'.
current-speed:
type: int
required: true
description: Initial baud rate for bar-device
# Describes an optional property like 'keys = "foo", "bar";'
keys:
type: string-array
description: Keys for bar-device
# Describes an optional property like 'maximum-speed = "full-speed";'
# the enum specifies known values that the string property may take
maximum-speed:
type: string
description: Configures USB controllers to work up to a specific speed.
enum:
- "low-speed"
- "full-speed"
- "high-speed"
- "super-speed"
# Describes an optional property like 'resolution = <16>;'
# the enum specifies known values that the int property may take
resolution:
type: int
enum:
- 8
- 16
- 24
- 32
# Describes a required property '#address-cells = <1>'; the const
# specifies that the value for the property is expected to be the value 1
"#address-cells":
type: int
required: true
const: 1
int-with-default:
type: int
default: 123
description: Value for int register, default is power-up configuration.
array-with-default:
type: array
default: [1, 2, 3] # Same as 'array-with-default = <1 2 3>'
string-with-default:
type: string
default: "foo"
string-array-with-default:
type: string-array
default: ["foo", "bar"] # Same as 'string-array-with-default = "foo", "bar"'
uint8-array-with-default:
type: uint8-array
default: [0x12, 0x34] # Same as 'uint8-array-with-default = [12 34]'
required
========
Adding ``required: true`` to a property definition will fail the build if a
node matches the binding, but does not contain the property.
The default setting is ``required: false``; that is, properties are optional by
default. Using ``required: false`` is therefore redundant and strongly
discouraged.
type
====
The type of a property constrains its values. The following types are
available. See :ref:`dt-writing-property-values` for more details about writing
values of each type in a DTS file. See :ref:`dt-phandles` for more information
about the ``phandle*`` type properties.
.. list-table::
:header-rows: 1
:widths: 1 3 2
* - Type
- Description
- Example in DTS
* - ``string``
- exactly one string
- ``status = "disabled";``
* - ``int``
- exactly one 32-bit value (cell)
- ``current-speed = <115200>;``
* - ``boolean``
- flags that don't take a value when true, and are absent if false
- ``hw-flow-control;``
* - ``array``
- zero or more 32-bit values (cells)
- ``offsets = <0x100 0x200 0x300>;``
* - ``uint8-array``
- zero or more bytes, in hex ('bytestring' in the Devicetree specification)
- ``local-mac-address = [de ad be ef 12 34];``
* - ``string-array``
- zero or more strings
- ``dma-names = "tx", "rx";``
* - ``phandle``
- exactly one phandle
- ``interrupt-parent = <&gic>;``
* - ``phandles``
- zero or more phandles
- ``pinctrl-0 = <&usart2_tx_pd5 &usart2_rx_pd6>;``
* - ``phandle-array``
- a list of phandles and 32-bit cells (usually specifiers)
- ``dmas = <&dma0 2>, <&dma0 3>;``
* - ``path``
- a path to a node as a phandle path reference or path string
- ``zephyr,bt-c2h-uart = &uart0;`` or
``foo = "/path/to/some/node";``
* - ``compound``
- a catch-all for more complex types (no macros will be generated)
- ``foo = <&label>, [01 02];``
deprecated
==========
A property with ``deprecated: true`` indicates to both the user and the tooling
that the property is meant to be phased out.
The tooling will report a warning if the devicetree includes the property that
is flagged as deprecated. (This warning is upgraded to an error in the
:ref:`twister_script` for upstream pull requests.)
The default setting is ``deprecated: false``. Using ``deprecated: false`` is
therefore redundant and strongly discouraged.
.. _dt-bindings-default:
default
=======
The optional ``default:`` setting gives a value that will be used if the
property is missing from the devicetree node.
For example, with this binding fragment:
.. code-block:: yaml
properties:
foo:
type: int
default: 3
If property ``foo`` is missing in a matching node, then the output will be as
if ``foo = <3>;`` had appeared in the DTS (except YAML data types are used for
the default value).
Note that combining ``default:`` with ``required: true`` will raise an error.
For rules related to ``default`` in upstream Zephyr bindings, see
:ref:`dt-bindings-default-rules`.
See :ref:`dt-bindings-example-properties` for examples. Putting ``default:`` on
any property type besides those used in :ref:`dt-bindings-example-properties`
will raise an error.
enum
====
The ``enum:`` line is followed by a list of values the property may contain. If
a property value in DTS is not in the ``enum:`` list in the binding, an error
is raised. See :ref:`dt-bindings-example-properties` for examples.
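For instance, a hypothetical binding could restrict an ``int`` property to a
fixed set of values (the property name and values here are purely
illustrative):

.. code-block:: YAML

   properties:
     sample-rate-hz:
       type: int
       enum:
         - 8000
         - 16000
         - 44100

A matching node with ``sample-rate-hz = <48000>;`` would then fail the build.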
const
=====
This specifies a constant value the property must take. It is mainly useful for
constraining the values of common properties for a particular piece of
hardware.
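As a sketch, a binding could pin a cell-count property to the only value this
hardware supports (the property chosen here is illustrative):

.. code-block:: YAML

   properties:
     "#address-cells":
       type: int
       const: 1

A devicetree node matching this binding with ``#address-cells = <2>;`` would
raise an error.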
.. _dt-bindings-specifier-space:
specifier-space
===============
.. warning::
It is an abuse of this feature to use it to name properties in
unconventional ways.
For example, this feature is not meant for cases like naming a property
``my-pin``, then assigning it to the "gpio" specifier space using this
feature. Properties which refer to GPIOs should use conventional names, i.e.
end in ``-gpios`` or ``-gpio``.
This property, if present, manually sets the specifier space associated with a
property with type ``phandle-array``.
Normally, the specifier space is encoded implicitly in the property name. A
property named ``foos`` with type ``phandle-array`` implicitly has specifier
space ``foo``. As a special case, ``*-gpios`` properties have specifier space
"gpio", so that ``foo-gpios`` will have specifier space "gpio" rather than
"foo-gpio".
You can use ``specifier-space`` to manually provide a space if
using this convention would result in an awkward or unconventional name.
For example:
.. code-block:: YAML
compatible: ...
properties:
bar:
type: phandle-array
specifier-space: my-custom-space
Above, the ``bar`` property's specifier space is set to "my-custom-space".
You could then use the property in a devicetree like this:
.. code-block:: DTS
controller1: custom-controller@1000 {
#my-custom-space-cells = <2>;
};
controller2: custom-controller@2000 {
#my-custom-space-cells = <1>;
};
my-node {
bar = <&controller1 10 20>, <&controller2 30>;
};
Generally speaking, you should reserve this feature for cases where the
implicit specifier space naming convention doesn't work. One appropriate
example is an ``mboxes`` property with specifier space "mbox", not "mboxe". You
can write this property as follows:
.. code-block:: YAML
properties:
mboxes:
type: phandle-array
specifier-space: mbox
.. _dt-bindings-child:
Child-binding
*************
``child-binding`` can be used when a node has children that all share the same
properties. Each child gets the contents of ``child-binding`` as its binding,
though an explicit ``compatible = ...`` on the child node takes precedence, if
a binding is found for it.
Consider a binding for a PWM LED node like this one, where the child nodes are
required to have a ``pwms`` property:
.. code-block:: devicetree
pwmleds {
compatible = "pwm-leds";
red_pwm_led {
pwms = <&pwm3 4 15625000>;
};
green_pwm_led {
pwms = <&pwm3 0 15625000>;
};
/* ... */
};
The binding would look like this:
.. code-block:: YAML
compatible: "pwm-leds"
child-binding:
description: LED that uses PWM
properties:
pwms:
type: phandle-array
required: true
``child-binding`` also works recursively. For example, this binding:
.. code-block:: YAML
compatible: foo
child-binding:
child-binding:
properties:
my-property:
type: int
required: true
will apply to the ``grandchild`` node in this DTS:
.. code-block:: devicetree
parent {
compatible = "foo";
child {
grandchild {
my-property = <123>;
};
};
};
.. _dt-bindings-bus:
Bus
***
If the node is a bus controller, use ``bus:`` in the binding to say what type
of bus. For example, a binding for a SPI peripheral on an SoC would look like
this:
.. code-block:: YAML
compatible: "manufacturer,spi-peripheral"
bus: spi
# ...
The presence of this key in the binding informs the build system that the
children of any node matching this binding appear on this type of bus.
This in turn influences the way ``on-bus:`` is used to match bindings for the
child nodes.
For a single bus supporting multiple protocols, e.g. I3C and I2C, the ``bus:``
in the binding can have a list as value:
.. code-block:: YAML
compatible: "manufacturer,i3c-controller"
bus: [i3c, i2c]
# ...
.. _dt-bindings-on-bus:
On-bus
******
If the node appears as a device on a bus, use ``on-bus:`` in the binding to say
what type of bus.
For example, a binding for an external SPI memory chip should include this line:
.. code-block:: YAML
on-bus: spi
And a binding for an I2C based temperature sensor should include this line:
.. code-block:: YAML
on-bus: i2c
When looking for a binding for a node, the build system checks if the binding
for the parent node contains ``bus: <bus type>``. If it does, then only
bindings with a matching ``on-bus: <bus type>`` and bindings without an
explicit ``on-bus`` are considered. Bindings with an explicit ``on-bus: <bus
type>`` are searched for first, before bindings without an explicit ``on-bus``.
The search repeats for each item in the node's ``compatible`` property, in
order.
This feature allows the same device to have different bindings depending on
what bus it appears on. For example, consider a sensor device with compatible
``manufacturer,sensor`` which can be used via either I2C or SPI.
The sensor node may therefore appear in the devicetree as a child node of
either an SPI or an I2C controller, like this:
.. code-block:: devicetree
spi-bus@0 {
/* ... some compatible with 'bus: spi', etc. ... */
sensor@0 {
compatible = "manufacturer,sensor";
reg = <0>;
/* ... */
};
};
i2c-bus@0 {
/* ... some compatible with 'bus: i2c', etc. ... */
sensor@79 {
compatible = "manufacturer,sensor";
reg = <0x79>;
/* ... */
};
};
You can write two separate binding files which match these individual sensor
nodes, even though they have the same compatible:
.. code-block:: YAML
# manufacturer,sensor-spi.yaml, which matches sensor@0 on the SPI bus:
compatible: "manufacturer,sensor"
on-bus: spi
# manufacturer,sensor-i2c.yaml, which matches sensor@79 on the I2C bus:
compatible: "manufacturer,sensor"
properties:
uses-clock-stretching:
type: boolean
on-bus: i2c
Only ``sensor@79`` can have a ``uses-clock-stretching`` property. The
bus-sensitive logic ignores :file:`manufacturer,sensor-i2c.yaml` when searching
for a binding for ``sensor@0``.
.. _dt-bindings-cells:
Specifier cell names (\*-cells)
*******************************
This section documents how to name the cells in a specifier within a binding.
These concepts are discussed in detail later in this guide in
:ref:`dt-phandle-arrays`.
Consider a binding for a node whose phandle may appear in a ``phandle-array``
property, like the PWM controllers ``pwm1`` and ``pwm2`` in this example:
.. code-block:: DTS
pwm1: pwm@deadbeef {
compatible = "foo,pwm";
#pwm-cells = <2>;
};
pwm2: pwm@badcafe {
compatible = "bar,pwm";
#pwm-cells = <1>;
};
my-node {
pwms = <&pwm1 1 2000>, <&pwm2 3000>;
};
The bindings for compatible ``"foo,pwm"`` and ``"bar,pwm"`` must give a name to
the cells that appear in a PWM specifier using ``pwm-cells:``, like this:
.. code-block:: YAML
# foo,pwm.yaml
compatible: "foo,pwm"
...
pwm-cells:
- channel
- period
# bar,pwm.yaml
compatible: "bar,pwm"
...
pwm-cells:
- period
A ``*-names`` (e.g. ``pwm-names``) property can appear on the node as well,
giving a name to each entry.
This allows the cells in the specifiers to be accessed by name, e.g. using APIs
like :c:macro:`DT_PWMS_CHANNEL_BY_NAME`.
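For example, names could be attached to the two PWM entries from the example
above (the names here are illustrative):

.. code-block:: DTS

   my-node {
           pwms = <&pwm1 1 2000>, <&pwm2 3000>;
           pwm-names = "heartbeat", "backlight";
   };

C code could then retrieve the first entry's ``channel`` cell with
``DT_PWMS_CHANNEL_BY_NAME(DT_PATH(my_node), heartbeat)``.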
If the specifier is empty (e.g. ``#clock-cells = <0>``), then ``*-cells`` can
either be omitted (recommended) or set to an empty array. Note that an empty
array is specified as e.g. ``clock-cells: []`` in YAML.
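For example, a fixed-rate clock typically needs no per-reference metadata, so
its specifier can be empty (a sketch; the labels are illustrative):

.. code-block:: DTS

   clk: fixed-clock {
           compatible = "fixed-clock";
           #clock-cells = <0>;
   };

   consumer {
           clocks = <&clk>;
   };

Each ``clocks`` entry is then just a phandle with no trailing cells.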
.. _dt-bindings-include:
Include
*******
Bindings can include other files, which can be used to share common property
definitions between bindings. Use the ``include:`` key for this. Its value is
either a string or a list.
In the simplest case, you can include another file by giving its name as a
string, like this:
.. code-block:: YAML
include: foo.yaml
If any file named :file:`foo.yaml` is found (see
:ref:`dt-where-bindings-are-located` for the search process), it will be
included into this binding.
Included files are merged into bindings with a simple recursive dictionary
merge. The build system will check that the resulting merged binding is
well-formed. It is allowed to include at any level, including ``child-binding``,
like this:
.. code-block:: YAML
# foo.yaml will be merged with content at this level
include: foo.yaml
child-binding:
# bar.yaml will be merged with content at this level
include: bar.yaml
It is an error if a key appears with a different value in a binding and in a
file it includes, with one exception: a binding can have ``required: true`` for
a :ref:`property definition <dt-bindings-properties>` for which the included
file has ``required: false``. The ``required: true`` takes precedence, allowing
bindings to strengthen requirements from included files.
Note that weakening requirements by having ``required: false`` where the
included file has ``required: true`` is an error. This is meant to keep the
organization clean.
The file :zephyr_file:`base.yaml <dts/bindings/base/base.yaml>` contains
definitions for many common properties. When writing a new binding, it is a
good idea to check if :file:`base.yaml` already defines some of the needed
properties, and include it if it does.
Note that you can make a property defined in base.yaml required like this,
taking :ref:`reg <dt-important-props>` as an example:
.. code-block:: YAML
reg:
required: true
This relies on the dictionary merge to fill in the other keys for ``reg``, like
``type``.
To include multiple files, you can use a list of strings:
.. code-block:: YAML
include:
- foo.yaml
- bar.yaml
This includes the files :file:`foo.yaml` and :file:`bar.yaml`. (You can
write this list in a single line of YAML as ``include: [foo.yaml, bar.yaml]``.)
When including multiple files, any overlapping ``required`` keys on properties
in the included files are ORed together. This makes sure that a ``required:
true`` is always respected.
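For example, assuming hypothetical files where :file:`foo.yaml` defines some
property with ``required: false`` and :file:`bar.yaml` defines the same
property with ``required: true``:

.. code-block:: YAML

   include: [foo.yaml, bar.yaml]

the merged binding ends up with ``required: true`` for that property.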
In some cases, you may want to include some property definitions from a file,
but not all of them. In this case, ``include:`` should be a list, and you can
filter out just the definitions you want by putting a mapping in the list, like
this:
.. code-block:: YAML
include:
- name: foo.yaml
property-allowlist:
- i-want-this-one
- and-this-one
- name: bar.yaml
property-blocklist:
- do-not-include-this-one
- or-this-one
Each map element must have a ``name`` key which is the filename to include, and
may have ``property-allowlist`` and ``property-blocklist`` keys that filter
which properties are included.
You cannot have a single map element with both ``property-allowlist`` and
``property-blocklist`` keys. A map element with neither ``property-allowlist``
nor ``property-blocklist`` is valid; no additional filtering is done.
You can freely intermix strings and mappings in a single ``include:`` list:
.. code-block:: YAML
include:
- foo.yaml
- name: bar.yaml
property-blocklist:
- do-not-include-this-one
- or-this-one
Finally, you can filter from a child binding like this:
.. code-block:: YAML
include:
- name: bar.yaml
child-binding:
property-allowlist:
- child-prop-to-allow
Nexus nodes and maps
********************
All ``phandle-array`` type properties support mapping through ``*-map``
properties, e.g. ``gpio-map``, as defined by the Devicetree specification.
This is used, for example, to define connector nodes for common breakout
headers, such as the ``arduino_header`` nodes that are conventionally defined
in the devicetrees for boards with Arduino-compatible expansion headers.
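As a sketch, a connector nexus node could remap its own pin numbering onto an
SoC GPIO controller like this (labels and pin numbers are illustrative, and a
real node would typically also set ``gpio-map-mask`` and
``gpio-map-pass-thru`` so that flag cells pass through the map):

.. code-block:: DTS

   arduino_header: connector {
           compatible = "arduino-header-r3";
           #gpio-cells = <2>;
           gpio-map = <0 0 &gpio0 4 0>,    /* D0 -> gpio0 pin 4 */
                      <1 0 &gpio0 5 0>;    /* D1 -> gpio0 pin 5 */
   };

An entry like ``<&arduino_header 1 0>`` in a ``*-gpios`` property then
resolves through the map to pin 5 on ``gpio0``.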
``` | /content/code_sandbox/doc/build/dts/bindings-syntax.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 5,671 |
```restructuredtext
.. _dt-binding-compat:
Introduction to Devicetree Bindings
###################################
.. note::
For a detailed syntax reference, see :ref:`dt-bindings-file-syntax`.
Devicetree nodes are matched to bindings using their :ref:`compatible
properties <dt-important-props>`.
During the :ref:`build_configuration_phase`, the build system tries to match
each node in the devicetree to a binding file. When this succeeds, the build
system uses the information in the binding file both when validating the node's
contents and when generating macros for the node.
.. _dt-bindings-simple-example:
A simple example
****************
Here is an example devicetree node:
.. code-block:: devicetree
/* Node in a DTS file */
bar-device {
compatible = "foo-company,bar-device";
num-foos = <3>;
};
Here is a minimal binding file which matches the node:
.. code-block:: yaml
# A YAML binding matching the node
compatible: "foo-company,bar-device"
properties:
num-foos:
type: int
required: true
The build system matches the ``bar-device`` node to its YAML binding because
the node's ``compatible`` property matches the binding's ``compatible:`` line.
What the build system does with bindings
****************************************
The build system uses bindings both to validate devicetree nodes and to convert
the devicetree's contents into the generated :ref:`devicetree_generated.h
<dt-outputs>` header file.
For example, the build system would use the above binding to check that the
required ``num-foos`` property is present in the ``bar-device`` node, and that
its value, ``<3>``, has the correct type.
The build system will then generate a macro for the ``bar-device`` node's
``num-foos`` property, which will expand to the integer literal ``3``. This
macro lets you get the value of the property in C code using the API which is
discussed later in this guide in :ref:`dt-from-c`.
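For instance, the value could be read along these lines (a sketch using the
``DT_PATH`` and ``DT_PROP`` macros, which are covered later):

.. code-block:: C

   #include <zephyr/devicetree.h>

   /* Node at devicetree path /bar-device */
   #define BAR_DEVICE DT_PATH(bar_device)

   int num = DT_PROP(BAR_DEVICE, num_foos); /* expands to 3 */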
For another example, the following node would cause a build error, because it
has no ``num-foos`` property, and this property is marked required in the
binding:
.. code-block:: devicetree
bad-node {
compatible = "foo-company,bar-device";
};
Other ways nodes are matched to bindings
****************************************
If a node has more than one string in its ``compatible`` property, the build
system looks for compatible bindings in the listed order and uses the first
match.
Take this node as an example:
.. code-block:: devicetree
baz-device {
compatible = "foo-company,baz-device", "generic-baz-device";
};
The ``baz-device`` node would get matched to a binding with a ``compatible:
"generic-baz-device"`` line if the build system can't find a binding with a
``compatible: "foo-company,baz-device"`` line.
Nodes without compatible properties can be matched to bindings associated with
their parent nodes. These are called "child bindings". If a node describes
hardware on a bus, like I2C or SPI, then the bus type is also taken into
account when matching nodes to bindings. (See :ref:`dt-bindings-on-bus` for
details).
See :ref:`dt-zephyr-user` for information about a special node that doesn't
require any binding.
.. _dt-where-bindings-are-located:
Where bindings are located
**************************
Binding file names usually match their ``compatible:`` lines. For example, the
above example binding would be named :file:`foo-company,bar-device.yaml` by
convention.
The build system looks for bindings in :file:`dts/bindings`
subdirectories of the following places:
- the zephyr repository
- your :ref:`application source directory <application>`
- your :ref:`board directory <board_porting_guide>`
- any :ref:`shield directories <shields>`
- any directories manually included in the :ref:`DTS_ROOT <dts_root>`
CMake variable
- any :ref:`module <modules>` that defines a ``dts_root`` in its
:ref:`modules_build_settings`
The build system will consider any YAML file in any of these, including in any
subdirectories, when matching nodes to bindings. A file is considered YAML if
its name ends with ``.yaml`` or ``.yml``.
.. warning::
The binding files must be located somewhere inside the :file:`dts/bindings`
subdirectory of the above places.
For example, if :file:`my-app` is your application directory, then you must
place application-specific bindings inside :file:`my-app/dts/bindings`. So
:file:`my-app/dts/bindings/serial/my-company,my-serial-port.yaml` would be
found, but :file:`my-app/my-company,my-serial-port.yaml` would be ignored.
``` | /content/code_sandbox/doc/build/dts/bindings-intro.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 1,079 |
```restructuredtext
.. _dt-inferred-bindings:
.. _dt-zephyr-user:
The ``/zephyr,user`` node
#########################
Zephyr's devicetree scripts handle the ``/zephyr,user`` node as a special case:
you can put essentially arbitrary properties inside it and retrieve their
values without having to write a binding. It is meant as a convenient container
when only a few simple properties are needed.
.. note::
This node is meant for sample code and user applications. It should not be
used in the upstream Zephyr source code for device drivers, subsystems, etc.
Simple values
*************
You can store numeric or array values in ``/zephyr,user`` if you want them to
be configurable at build time via devicetree.
For example, with this devicetree overlay:
.. code-block:: devicetree
/ {
zephyr,user {
boolean;
bytes = [81 82 83];
number = <23>;
numbers = <1>, <2>, <3>;
string = "text";
strings = "a", "b", "c";
};
};
You can get the above property values in C/C++ code like this:
.. code-block:: C
#define ZEPHYR_USER_NODE DT_PATH(zephyr_user)
DT_PROP(ZEPHYR_USER_NODE, boolean) // 1
DT_PROP(ZEPHYR_USER_NODE, bytes) // {0x81, 0x82, 0x83}
DT_PROP(ZEPHYR_USER_NODE, number) // 23
DT_PROP(ZEPHYR_USER_NODE, numbers) // {1, 2, 3}
DT_PROP(ZEPHYR_USER_NODE, string) // "text"
DT_PROP(ZEPHYR_USER_NODE, strings) // {"a", "b", "c"}
Devices
*******
You can store :ref:`phandles <dt-phandles>` in ``/zephyr,user`` if you want to
be able to reconfigure which devices your application uses in simple cases
using devicetree overlays.
For example, with this devicetree overlay:
.. code-block:: devicetree
/ {
zephyr,user {
handle = <&gpio0>;
handles = <&gpio0>, <&gpio1>;
};
};
You can convert the phandles in the ``handle`` and ``handles`` properties to
device pointers like this:
.. code-block:: C
/*
* Same thing as:
*
* ... my_dev = DEVICE_DT_GET(DT_NODELABEL(gpio0));
*/
const struct device *my_device =
DEVICE_DT_GET(DT_PROP(ZEPHYR_USER_NODE, handle));
#define PHANDLE_TO_DEVICE(node_id, prop, idx) \
DEVICE_DT_GET(DT_PHANDLE_BY_IDX(node_id, prop, idx)),
/*
* Same thing as:
*
* ... *my_devices[] = {
* DEVICE_DT_GET(DT_NODELABEL(gpio0)),
* DEVICE_DT_GET(DT_NODELABEL(gpio1)),
* };
*/
const struct device *my_devices[] = {
DT_FOREACH_PROP_ELEM(ZEPHYR_USER_NODE, handles, PHANDLE_TO_DEVICE)
};
GPIOs
*****
The ``/zephyr,user`` node is a convenient place to store application-specific
GPIOs that you want to be able to reconfigure with a devicetree overlay.
For example, with this devicetree overlay:
.. code-block:: devicetree
#include <zephyr/dt-bindings/gpio/gpio.h>
/ {
zephyr,user {
signal-gpios = <&gpio0 1 GPIO_ACTIVE_HIGH>;
};
};
You can convert the pin defined in ``signal-gpios`` to a ``struct
gpio_dt_spec`` in your source code, then use it like this:
.. code-block:: C
#include <zephyr/drivers/gpio.h>
#define ZEPHYR_USER_NODE DT_PATH(zephyr_user)
const struct gpio_dt_spec signal =
GPIO_DT_SPEC_GET(ZEPHYR_USER_NODE, signal_gpios);
/* Configure the pin */
gpio_pin_configure_dt(&signal, GPIO_OUTPUT_INACTIVE);
/* Set the pin to its active level */
gpio_pin_set_dt(&signal, 1);
(See :c:struct:`gpio_dt_spec`, :c:macro:`GPIO_DT_SPEC_GET`, and
:c:func:`gpio_pin_configure_dt` for details on these APIs.)
``` | /content/code_sandbox/doc/build/dts/zephyr-user-node.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 984 |
```restructuredtext
.. _dt-bindings:
Devicetree bindings
###################
A devicetree on its own is only half the story for describing hardware, as it
is a relatively unstructured format. *Devicetree bindings* provide the other
half.
A devicetree binding declares requirements on the contents of nodes, and
provides semantic information about the contents of valid nodes. Zephyr
devicetree bindings are YAML files in a custom format (Zephyr does not use the
dt-schema tools used by the Linux kernel).
These pages introduce bindings, describe what they do, note where they are
found, and explain their data format.
.. note::
See the :ref:`devicetree_binding_index` for reference information on
bindings built in to Zephyr.
.. toctree::
:maxdepth: 2
bindings-intro.rst
bindings-syntax.rst
bindings-upstream.rst
``` | /content/code_sandbox/doc/build/dts/bindings.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 202 |
```restructuredtext
.. _devicetree-scope-purpose:
Scope and purpose
*****************
A *devicetree* is primarily a hierarchical data structure that describes
hardware. The `Devicetree specification`_ defines its source and binary
representations.
.. _Devicetree specification: path_to_url
Zephyr uses devicetree to describe:
- the hardware available on its :ref:`boards`
- that hardware's initial configuration
As such, devicetree is both a hardware description language and a configuration
language for Zephyr. See :ref:`dt_vs_kconfig` for some comparisons between
devicetree and Zephyr's other main configuration language, Kconfig.
There are two types of devicetree input files: *devicetree sources* and
*devicetree bindings*. The sources contain the devicetree itself. The bindings
describe its contents, including data types. The :ref:`build system
<build_overview>` uses devicetree sources and bindings to produce a generated C
header. The generated header's contents are abstracted by the ``devicetree.h``
API, which you can use to get information from your devicetree.
Here is a simplified view of the process:
.. figure:: zephyr_dt_build_flow.png
:figclass: align-center
Devicetree build flow
All Zephyr and application source code files can include and use
``devicetree.h``. This includes :ref:`device drivers <device_model_api>`,
:ref:`applications <application>`, :ref:`tests <testing>`, the kernel, etc.
The API itself is based on C macros. The macro names all start with ``DT_``. In
general, if you see a macro that starts with ``DT_`` in a Zephyr source file,
it's probably a ``devicetree.h`` macro. The generated C header contains macros
that start with ``DT_`` as well; you might see those in compiler error
messages. You can always tell a generated macro from a non-generated one:
generated macros have some lowercase letters, while the ``devicetree.h`` macro
names are all uppercase.
``` | /content/code_sandbox/doc/build/dts/intro-scope-purpose.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 472 |
```restructuredtext
.. _dt-phandles:
Phandles
########
The devicetree concept of a *phandle* is very similar to pointers in
C. You can use phandles to refer to nodes in devicetree similarly to the way
you can use pointers to refer to structures in C.
.. contents:: Contents
:local:
Getting phandles
****************
The usual way to get a phandle for a devicetree node is from one of its node
labels. For example, with this devicetree:
.. code-block:: DTS
/ {
lbl_a: node-1 {};
lbl_b: lbl_c: node-2 {};
};
You can write the phandle for:
- ``/node-1`` as ``&lbl_a``
- ``/node-2`` as either ``&lbl_b`` or ``&lbl_c``
Notice how the ``&nodelabel`` devicetree syntax is similar to the "address of"
C syntax.
Using phandles
**************
.. note::
"Type" in this section refers to one of the type names documented in
:ref:`dt-bindings-properties` in the devicetree bindings documentation.
Here are the main ways you will use phandles.
One node: phandle type
======================
You can use phandles to refer to ``node-b`` from ``node-a``, where ``node-b``
is related to ``node-a`` in some way.
One common example is when ``node-a`` represents some hardware that
generates an interrupt, and ``node-b`` represents the interrupt
controller that receives the asserted interrupt. In this case, you could
write:
.. code-block:: DTS
node_b: node-b {
interrupt-controller;
};
node-a {
interrupt-parent = <&node_b>;
};
This uses the standard ``interrupt-parent`` property defined in the
devicetree specification to capture the relationship between the two nodes.
These properties have type ``phandle``.
Zero or more nodes: phandles type
=================================
You can use phandles to make an array of references to other nodes.
One common example occurs in :ref:`pin control <pinctrl-guide>`. Pin control
properties like ``pinctrl-0``, ``pinctrl-1`` etc. may contain multiple
phandles, each of which "points" to a node containing information related to
pin configuration for that hardware peripheral. Here's an example of six
phandles in a single property:
.. code-block:: DTS
pinctrl-0 = <&quadspi_clk_pe10 &quadspi_ncs_pe11
&quadspi_bk1_io0_pe12 &quadspi_bk1_io1_pe13
&quadspi_bk1_io2_pe14 &quadspi_bk1_io3_pe15>;
These properties have type ``phandles``.
Zero or more nodes with metadata: phandle-array type
====================================================
You can use phandles to refer to and configure one or more resources that are
"owned" by some other node.
This is the most complex case. There are examples and more details in the
next section.
These properties have type ``phandle-array``.
.. _dt-phandle-arrays:
phandle-array properties
************************
These properties are commonly used to specify a resource that is owned by
another node along with additional metadata about the resource.
High level description
======================
Usually, properties with this type are written like ``phandle-array-prop`` in
this example:
.. code-block:: dts
node {
phandle-array-prop = <&foo 1 2>, <&bar 3>, <&baz 4 5>;
};
That is, the property's value is written as a comma-separated sequence of
"groups", where each "group" is written inside of angle brackets (``< ... >``).
Each "group" starts with a phandle (``&foo``, ``&bar``, ``&baz``). The values
that follow the phandle in each "group" are called *specifiers*. There are
three specifiers in the above example:
#. ``1 2``
#. ``3``
#. ``4 5``
The phandle in each "group" is used to "point" to the hardware that controls
the resource you are interested in. The specifier describes the resource
itself, along with any additional necessary metadata.
The rest of this section describes a common example. Subsequent sections
document more rules about how to use phandle-array properties in practice.
Example phandle-arrays: GPIOs
=============================
Perhaps the most common use case for phandle-array properties is specifying one
or more GPIOs on your SoC that another chip on your board connects to. For that
reason, we'll focus on that use case here. However, there are **many other use
cases** that are handled in devicetree with phandle-array properties.
For example, consider an external chip with an interrupt pin that is connected
to a GPIO on your SoC. You will typically need to provide that GPIO's
information (GPIO controller and pin number) to the :ref:`device driver
<device_model_api>` for that chip. You usually also need to provide other
metadata about the GPIO, like whether it is active low or high, what kind of
internal pull resistor within the SoC should be enabled in order to communicate
with the device, etc., to the driver.
In the devicetree, there will be a node that represents the GPIO controller
that controls a group of pins. This reflects the way GPIO IP blocks are usually
developed in hardware. Therefore, there is no single node in the devicetree
that represents a GPIO pin, and you can't use a single phandle to represent it.
Instead, you would use a phandle-array property, like this:
.. code-block:: DTS
my-external-ic {
irq-gpios = <&gpioX pin flags>;
};
In this example, ``irq-gpios`` is a phandle-array property with just one
"group" in its value. ``&gpioX`` is the phandle for the GPIO controller node
that controls the pin. ``pin`` is the pin number (0, 1, 2, ...). ``flags`` is a
bit mask describing pin metadata (for example ``(GPIO_ACTIVE_LOW |
GPIO_PULL_UP)``); see :zephyr_file:`include/zephyr/dt-bindings/gpio/gpio.h` for
more details.
The device driver handling the ``my-external-ic`` node can then use the
``irq-gpios`` property's value to set up interrupt handling for the chip as it
is used on your board. This lets you configure the device driver in devicetree,
without changing the driver's source code.
Such properties can contain multiple values as well:
.. code-block:: DTS
my-other-external-ic {
handshake-gpios = <&gpioX pinX flagsX>, <&gpioY pinY flagsY>;
};
The above example specifies two pins:
- ``pinX`` on the GPIO controller with phandle ``&gpioX``, flags ``flagsX``
- ``pinY`` on ``&gpioY``, flags ``flagsY``
You may be wondering how the "pin and flags" convention is established and
enforced. To answer this question, we'll need to introduce a concept called
specifier spaces before moving on to some information about devicetree
bindings.
.. _dt-specifier-spaces:
Specifier spaces
****************
*Specifier spaces* are a way to allow nodes to describe how you should
use them in phandle-array properties.
We'll start with an abstract, high level description of how specifier spaces
work in DTS files, before moving on to a concrete example and providing
references to further reading for how this all works in practice using DTS
files and bindings files.
High level description
======================
As described above, a phandle-array property is a sequence of "groups" of
phandles followed by some number of cells:
.. code-block:: dts
node {
phandle-array-prop = <&foo 1 2>, <&bar 3>;
};
The cells that follow each phandle are called a *specifier*. In this example,
there are two specifiers:
#. ``1 2``: two cells
#. ``3``: one cell
Every phandle-array property has an associated *specifier space*. This sounds
complex, but it's really just a way to assign a meaning to the cells that
follow each phandle in a hardware specific way. Every specifier space has a
unique name. There are a few "standard" names for commonly used hardware, but
you can create your own as well.
Devicetree nodes encode the number of cells that must appear in a specifier, by
name, using the ``#SPACE_NAME-cells`` property. For example, let's assume that
``phandle-array-prop``\ 's specifier space is named ``baz``. Then we would need
the ``foo`` and ``bar`` nodes to have the following ``#baz-cells`` properties:
.. code-block:: DTS
foo: node@1000 {
#baz-cells = <2>;
};
bar: node@2000 {
#baz-cells = <1>;
};
Without the ``#baz-cells`` property, the devicetree tooling would not be able
to validate the number of cells in each specifier in ``phandle-array-prop``.
This flexibility allows you to write down an array of hardware resources in a
single devicetree property, even though the amount of metadata you need to
describe each resource might be different for different nodes.
A single node can also have different numbers of cells in different specifier
spaces. For example, we might have:
.. code-block:: DTS
foo: node@1000 {
#baz-cells = <2>;
#bob-cells = <1>;
};
With that, if ``phandle-array-prop-2`` has specifier space ``bob``, we could
write:
.. code-block:: DTS
node {
phandle-array-prop = <&foo 1 2>, <&bar 3>;
phandle-array-prop-2 = <&foo 4>;
};
This flexibility allows you to have a node that manages multiple different
kinds of resources at the same time. The node describes the amount of metadata
needed to describe each kind of resource (how many cells are needed in each
case) using different ``#SPACE_NAME-cells`` properties.
Example specifier space: gpio
=============================
From the above example, you're already familiar with how one specifier space
works: in the "gpio" space, specifiers almost always have two cells:
#. a pin number
#. a bit mask of flags related to the pin
Therefore, almost all GPIO controller nodes you will see in practice will look
like this:
.. code-block:: dts
gpioX: gpio-controller@deadbeef {
gpio-controller;
#gpio-cells = <2>;
};
Associating properties with specifier spaces
********************************************
Above, we have described that:
- each phandle-array property has an associated specifier space
- specifier spaces are identified by name
- devicetree nodes use ``#SPACE_NAME-cells`` properties to
  configure the number of cells which must appear in a specifier
In this section, we explain how phandle-array properties get their specifier
spaces.
High level description
======================
In general, a ``phandle-array`` property named ``foos`` implicitly has
specifier space ``foo``. For example:
.. code-block:: yaml
properties:
dmas:
type: phandle-array
pwms:
type: phandle-array
The ``dmas`` property's specifier space is ``dma``. The ``pwms`` property's
specifier space is ``pwm``.
Special case: GPIO
==================
``*-gpios`` properties are special-cased so that e.g. ``foo-gpios`` resolves to
``#gpio-cells`` rather than ``#foo-gpio-cells``.
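
For example, a node with a hypothetical ``foo-gpios`` property referring to
the ``gpioX`` controller shown earlier (pin 1, no flags) might look like:

.. code-block:: dts

   node {
           foo-gpios = <&gpioX 1 0>;
   };

Because of this special case, the tooling validates the specifier against the
``#gpio-cells`` property of the ``gpioX`` node.
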
Manually specifying a space
===========================
You can manually specify the specifier space for any ``phandle-array``
property. See :ref:`dt-bindings-specifier-space`.
Naming the cells in a specifier
*******************************
You should name the cells in each specifier space your hardware supports when
writing bindings. For details on how to do this, see :ref:`dt-bindings-cells`.
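
As a sketch, a binding for a controller in the hypothetical ``baz`` specifier
space above could name its two cells like this (the compatible and cell names
are made up):

.. code-block:: yaml

   compatible: "vnd,baz-controller"

   baz-cells:
     - channel
     - priority
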
This allows C code to query information about and retrieve the values of cells
in a specifier by name using devicetree APIs like these:
- :c:macro:`DT_PHA_BY_IDX`
- :c:macro:`DT_PHA_BY_NAME`
This feature and these macros are used internally by numerous hardware-specific
APIs. Here are a few examples:
- :c:macro:`DT_GPIO_PIN_BY_IDX`
- :c:macro:`DT_PWMS_CHANNEL_BY_IDX`
- :c:macro:`DT_DMAS_CELL_BY_NAME`
- :c:macro:`DT_IO_CHANNELS_INPUT_BY_IDX`
- :c:macro:`DT_CLOCKS_CELL_BY_NAME`
See also
********
- :ref:`dt-writing-property-values`: how to write phandles in devicetree
properties
- :ref:`dt-bindings-properties`: how to write bindings for properties with
phandle types (``phandle``, ``phandles``, ``phandle-array``)
- :ref:`dt-bindings-specifier-space`: how to manually specify a phandle-array
property's specifier space
``` | /content/code_sandbox/doc/build/dts/phandles.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 2,895 |
```restructuredtext
.. _dt-syntax:
Syntax and structure
####################
As the name indicates, a devicetree is a tree. The human-readable text format
for this tree is called DTS (for devicetree source), and is defined in the
`Devicetree specification`_.
.. _Devicetree specification: path_to_url
This page's purpose is to introduce devicetree in a more gradual way than the
specification. However, you may still need to refer to the specification to
understand some detailed cases.
.. contents:: Contents
:local:
Example
*******
Here is an example DTS file:
.. code-block:: devicetree
/dts-v1/;
/ {
a-node {
subnode_nodelabel: a-sub-node {
foo = <3>;
};
};
};
The ``/dts-v1/;`` line means the file's contents are in version 1 of the DTS
syntax, which has replaced a now-obsolete "version 0".
Nodes
*****
Like any tree data structure, a devicetree has a hierarchy of *nodes*.
The above tree has three nodes:
#. A root node: ``/``
#. A node named ``a-node``, which is a child of the root node
#. A node named ``a-sub-node``, which is a child of ``a-node``
.. _dt-node-labels:
Nodes can be assigned *node labels*, which are unique shorthands that refer to
the labeled node. Above, ``a-sub-node`` has the node label
``subnode_nodelabel``. A node can have zero, one, or multiple node labels. You
can use node labels to refer to the node elsewhere in the devicetree.
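
For example, DTS allows attaching more than one label to the same node (the
label names here are hypothetical):

.. code-block:: devicetree

   first_label: second_label: a-node {
   };

Either ``&first_label`` or ``&second_label`` can then be used to refer to
this node.
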
Devicetree nodes have *paths* identifying their locations in the tree. Like
Unix file system paths, devicetree paths are strings separated by slashes
(``/``), and the root node's path is a single slash: ``/``. Otherwise, each
node's path is formed by concatenating the node's ancestors' names with the
node's own name, separated by slashes. For example, the full path to
``a-sub-node`` is ``/a-node/a-sub-node``.
Properties
**********
Devicetree nodes can also have *properties*. Properties are name/value pairs.
Property values can be any sequence of bytes. In some cases, the values are an
array of what are called *cells*. A cell is just a 32-bit unsigned integer.
Node ``a-sub-node`` has a property named ``foo``, whose value is a cell with
value 3. The size and type of ``foo``\ 's value are implied by the enclosing
angle brackets (``<`` and ``>``) in the DTS.
See :ref:`dt-writing-property-values` below for more example property values.
Devicetrees reflect hardware
****************************
In practice, devicetree nodes usually correspond to some hardware, and the node
hierarchy reflects the hardware's physical layout. For example, let's consider
a board with three I2C peripherals connected to an I2C bus controller on an SoC,
like this:
.. figure:: zephyr_dt_i2c_high_level.png
:alt: representation of a board with three I2C peripherals
:figclass: align-center
Nodes corresponding to the I2C bus controller and each I2C peripheral would be
present in the devicetree. Reflecting the hardware layout, the
I2C peripheral nodes would be children of the bus controller node.
Similar conventions exist for representing other types of hardware.
The DTS would look something like this:
.. code-block:: devicetree
/dts-v1/;
/ {
soc {
i2c-bus-controller {
i2c-peripheral-1 {
};
i2c-peripheral-2 {
};
i2c-peripheral-3 {
};
};
};
};
Properties in practice
**********************
In practice, properties usually describe or configure the hardware the node
represents. For example, an I2C peripheral's node has a property whose value is
the peripheral's address on the bus.
Here's a tree representing the same example, but with real-world node
names and properties you might see when working with I2C devices.
.. figure:: zephyr_dt_i2c_example.png
:figclass: align-center
I2C devicetree example with real-world names and properties.
Node names are at the top of each node with a gray background.
Properties are shown as "name=value" lines.
This is the corresponding DTS:
.. code-block:: devicetree
/dts-v1/;
/ {
soc {
i2c@40003000 {
compatible = "nordic,nrf-twim";
reg = <0x40003000 0x1000>;
apds9960@39 {
compatible = "avago,apds9960";
reg = <0x39>;
};
ti_hdc@43 {
compatible = "ti,hdc", "ti,hdc1010";
reg = <0x43>;
};
mma8652fc@1d {
compatible = "nxp,fxos8700", "nxp,mma8652fc";
reg = <0x1d>;
};
};
};
};
.. _dt-unit-address:
Unit addresses
**************
In addition to showing more real-world names and properties, the above example
introduces a new devicetree concept: unit addresses. Unit addresses are the
parts of node names after an "at" sign (``@``), like ``40003000`` in
``i2c@40003000``, or ``39`` in ``apds9960@39``. Unit addresses are optional:
the ``soc`` node does not have one.
In devicetree, unit addresses give a node's address in the
address space of its parent node. Here are some example unit addresses for
different types of hardware.
Memory-mapped peripherals
The peripheral's register map base address.
For example, the node named ``i2c@40003000`` represents an I2C controller
whose register map base address is 0x40003000.
I2C peripherals
The peripheral's address on the I2C bus.
For example, the child node ``apds9960@39`` of the I2C controller
in the previous section has I2C address 0x39.
SPI peripherals
An index representing the peripheral's chip select line number.
(If there is no chip select line, 0 is used.)
Memory
The physical start address.
For example, a node named ``memory@2000000`` represents RAM starting at
physical address 0x2000000.
Memory-mapped flash
Like RAM, the physical start address.
For example, a node named ``flash@8000000`` represents a flash device
whose physical start address is 0x8000000.
Fixed flash partitions
This applies when the devicetree is used to store a flash partition table.
The unit address is the partition's start offset within the flash memory.
For example, take this flash device and its partitions:
.. code-block:: devicetree
flash@8000000 {
/* ... */
partitions {
partition@0 { /* ... */ };
partition@20000 { /* ... */ };
/* ... */
};
};
The node named ``partition@0`` has offset 0 from the start of its flash
device, so its base address is 0x8000000. Similarly, the base address of
the node named ``partition@20000`` is 0x8020000.
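
Putting a few of these cases together, here is a hypothetical fragment in
which each unit address gives the node's address in its parent's address
space:

.. code-block:: devicetree

   / {
           soc {
                   spi@40023000 {
                           /* memory-mapped SPI controller at 0x40023000 */
                           sensor@0 {
                                   /* SPI peripheral on chip select 0 */
                           };
                   };
                   memory@20000000 {
                           /* RAM starting at physical address 0x20000000 */
                   };
           };
   };
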
.. _dt-important-props:
Important properties
********************
.. Documentation maintainers: If you add a property to this list,
make sure it gets linked to from gen_devicetree_rest.py too.
The devicetree specification defines several standard properties.
Some of the most important ones are:
compatible
The name of the hardware device the node represents.
The recommended format is ``"vendor,device"``, like ``"avago,apds9960"``,
or a sequence of these, like ``"ti,hdc", "ti,hdc1010"``. The ``vendor``
part is an abbreviated name of the vendor. The file
:zephyr_file:`dts/bindings/vendor-prefixes.txt` contains a list of commonly
accepted ``vendor`` names. The ``device`` part is usually taken from the
datasheet.
It is also sometimes a value like ``gpio-keys``, ``mmio-sram``, or
``fixed-clock`` when the hardware's behavior is generic.
The build system uses the compatible property to find the right
:ref:`bindings <dt-bindings>` for the node. Device drivers use
``devicetree.h`` to find nodes with relevant compatibles, in order to
determine the available hardware to manage.
The ``compatible`` property can have multiple values. Additional values are
useful when the device is a specific instance of a more general family, to
allow the system to match from most- to least-specific device drivers.
Within Zephyr's bindings syntax, this property has type ``string-array``.
reg
Information used to address the device. The value is specific to the device
(i.e. is different depending on the compatible property).
The ``reg`` property is a sequence of ``(address, length)`` pairs. Each
pair is called a "register block". Values are conventionally written
in hex.
Here are some common patterns:
- Devices accessed via memory-mapped I/O registers (like ``i2c@40003000``):
``address`` is usually the base address of the I/O register space, and
``length`` is the number of bytes occupied by the registers.
- I2C devices (like ``apds9960@39`` and its siblings):
``address`` is a slave address on the I2C bus. There is no ``length``
value.
- SPI devices: ``address`` is a chip select line number; there is no
``length``.
You may notice some similarities between the ``reg`` property and common
unit addresses described above. This is not a coincidence. The ``reg``
property can be seen as a more detailed view of the addressable resources
within a device than its unit address.
status
A string which describes whether the node is enabled.
The devicetree specification allows this property to have values
``"okay"``, ``"disabled"``, ``"reserved"``, ``"fail"``, and ``"fail-sss"``.
Only the values ``"okay"`` and ``"disabled"`` are currently relevant to
Zephyr; use of other values currently results in undefined behavior.
A node is considered enabled if its status property is either ``"okay"`` or
not defined (i.e. does not exist in the devicetree source). Nodes with
status ``"disabled"`` are explicitly disabled. (For backwards
compatibility, the value ``"ok"`` is treated the same as ``"okay"``, but
this usage is deprecated.) Devicetree nodes which correspond to physical
devices must be enabled for the corresponding ``struct device`` in the
Zephyr driver model to be allocated and initialized.
interrupts
Information about interrupts generated by the device, encoded as an array
of one or more *interrupt specifiers*. Each interrupt specifier has some
number of cells. See section 2.4, *Interrupts and Interrupt Mapping*, in the
`Devicetree Specification release v0.3`_ for more details.
Zephyr's devicetree bindings language lets you give a name to each cell in
an interrupt specifier.
.. _Devicetree Specification release v0.3:
path_to_url
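
Combining the properties described above, a hypothetical UART node (the
``vnd,uart`` compatible and the two-cell interrupt specifier are made up)
might look like:

.. code-block:: devicetree

   uart1: serial@40002000 {
           compatible = "vnd,uart";
           reg = <0x40002000 0x1000>;
           interrupts = <2 1>;
           status = "okay";
   };
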
.. highlight:: none
.. note::
Earlier versions of Zephyr made frequent use of the ``label`` property,
which is distinct from the standard :ref:`node label <dt-node-labels>`.
Use of the label *property* in new devicetree bindings, as well as use of
the :c:macro:`DT_LABEL` macro in new code, are actively discouraged. Label
properties continue to persist for historical reasons in some existing
bindings and overlays, but should not be used in new bindings or device
implementations.
.. _dt-writing-property-values:
Writing property values
***********************
This section describes how to write property values in DTS format. The property
types in the table below are described in detail in :ref:`dt-bindings`.
Some specifics are skipped in the interest of keeping things simple; if you're
curious about details, see the devicetree specification.
.. list-table::
:header-rows: 1
:widths: 1 4 4
* - Property type
- How to write
- Example
* - string
- Double quoted
- ``a-string = "hello, world!";``
* - int
- between angle brackets (``<`` and ``>``)
- ``an-int = <1>;``
* - boolean
- for ``true``, with no value (for ``false``, use ``/delete-property/``)
- ``my-true-boolean;``
* - array
- between angle brackets (``<`` and ``>``), separated by spaces
- ``foo = <0xdeadbeef 1234 0>;``
* - uint8-array
- in hexadecimal *without* leading ``0x``, between square brackets (``[`` and ``]``).
- ``a-byte-array = [00 01 ab];``
* - string-array
- separated by commas
- ``a-string-array = "string one", "string two", "string three";``
* - phandle
- between angle brackets (``<`` and ``>``)
- ``a-phandle = <&mynode>;``
* - phandles
- between angle brackets (``<`` and ``>``), separated by spaces
- ``some-phandles = <&mynode0 &mynode1 &mynode2>;``
* - phandle-array
- between angle brackets (``<`` and ``>``), separated by spaces
- ``a-phandle-array = <&mynode0 1 2>, <&mynode1 3 4>;``
Additional notes on the above:
- The values in the ``phandle``, ``phandles``, and ``phandle-array`` types are
described further in :ref:`dt-phandles`
- Boolean properties are true if present. They should not have a value.
A boolean property is only false if it is completely missing in the DTS.
- The ``foo`` property value above has three *cells* with values 0xdeadbeef, 1234,
and 0, in that order. Note that hexadecimal and decimal numbers are allowed and
can be intermixed. Since Zephyr transforms DTS to C sources, it is not
necessary to specify the endianness of an individual cell here.
- 64-bit integers are written as two 32-bit cells in big-endian order. The value
0xaaaa0000bbbb1111 would be written ``<0xaaaa0000 0xbbbb1111>``.
- The ``a-byte-array`` property value is the three bytes 0x00, 0x01, and 0xab, in
that order.
- Parentheses, arithmetic operators, and bitwise operators are allowed. The
``bar`` property contains a single cell with value 64:
.. code-block:: devicetree
bar = <(2 * (1 << 5))>;
Note that the entire expression must be parenthesized.
- Property values refer to other nodes in the devicetree by their *phandles*.
You can write a phandle using ``&foo``, where ``foo`` is a :ref:`node label
<dt-node-labels>`. Here is an example devicetree fragment:
.. code-block:: devicetree
foo: device@0 { };
device@1 {
sibling = <&foo 1 2>;
};
The ``sibling`` property of node ``device@1`` contains three cells, in this order:
#. The ``device@0`` node's phandle, which is written here as ``&foo`` since
the ``device@0`` node has a node label ``foo``
#. The value 1
#. The value 2
In the devicetree, a phandle value is a cell -- which again is just a 32-bit
unsigned int. However, the Zephyr devicetree API generally exposes these
values as *node identifiers*. Node identifiers are covered in more detail in
:ref:`dt-from-c`.
- Array and similar type property values can be split into several ``<>``
blocks, like this:
.. code-block:: devicetree
foo = <1 2>, <3 4>; // Okay for 'type: array'
foo = <&label1 &label2>, <&label3 &label4>; // Okay for 'type: phandles'
foo = <&label1 1 2>, <&label2 3 4>; // Okay for 'type: phandle-array'
This is recommended for readability when the value can be logically grouped
into blocks of sub-values.
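
As an example of the boolean rule above, an overlay can turn an existing true
boolean back to false by deleting the property (``mynode`` is a hypothetical
node label):

.. code-block:: devicetree

   &mynode {
           /delete-property/ my-true-boolean;
   };
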
.. _dt-alias-chosen:
Aliases and chosen nodes
************************
There are two additional ways beyond :ref:`node labels <dt-node-labels>` to
refer to a particular node without specifying its entire path: by alias, or by
chosen node.
Here is an example devicetree which uses both:
.. code-block:: devicetree
/dts-v1/;
/ {
chosen {
zephyr,console = &uart0;
};
aliases {
my-uart = &uart0;
};
soc {
uart0: serial@12340000 {
...
};
};
};
The ``/aliases`` and ``/chosen`` nodes do not refer to an actual hardware
device. Their purpose is to specify other nodes in the devicetree.
Above, ``my-uart`` is an alias for the node with path ``/soc/serial@12340000``.
Using its node label ``uart0``, the same node is set as the value of the chosen
``zephyr,console`` node.
Zephyr sample applications sometimes use aliases to allow overriding the
particular hardware device used by the application in a generic way. For
example, :zephyr:code-sample:`blinky` uses this to abstract the LED to blink via the
``led0`` alias.
The ``/chosen`` node's properties are used to configure system- or
subsystem-wide values. See :ref:`devicetree-chosen-nodes` for more information.
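
For example, an application overlay could redirect the console to a different
UART by overriding the chosen property (this assumes a node labeled ``uart1``
exists elsewhere in the tree):

.. code-block:: devicetree

   / {
           chosen {
                   zephyr,console = &uart1;
           };
   };
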
``` | /content/code_sandbox/doc/build/dts/intro-syntax-structure.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 4,220 |
```abnf
; An RFC 7405 ABNF grammar for devicetree macros.
;
; This does *not* cover macros pulled out of DT via Kconfig,
; like CONFIG_SRAM_BASE_ADDRESS, etc. It only describes the
; ones that start with DT_ and are directly generated.
; --------------------------------------------------------------------
; dt-macro: the top level nonterminal for a devicetree macro
;
; A dt-macro starts with uppercase "DT_", and is one of:
;
; - a <node-macro>, generated for a particular node
; - some <other-macro>, a catch-all for other types of macros
dt-macro = node-macro / other-macro
; --------------------------------------------------------------------
; node-macro: a macro related to a node
; A macro about a property value
node-macro = property-macro
; A macro about the pinctrl properties in a node.
node-macro =/ pinctrl-macro
; A macro about the GPIO hog properties in a node.
node-macro =/ gpiohogs-macro
; EXISTS macro: node exists in the devicetree
node-macro =/ %s"DT_N" path-id %s"_EXISTS"
; Bus macros: the plain BUS is a way to access a node's bus controller.
; The additional dt-name suffix is added to match that node's bus type;
; the dt-name in this case is something like "spi" or "i2c".
node-macro =/ %s"DT_N" path-id %s"_BUS" ["_" dt-name]
; The reg property is special and has its own macros.
node-macro =/ %s"DT_N" path-id %s"_REG_NUM"
node-macro =/ %s"DT_N" path-id %s"_REG_IDX_" DIGIT "_EXISTS"
node-macro =/ %s"DT_N" path-id %s"_REG_IDX_" DIGIT
%s"_VAL_" ( %s"ADDRESS" / %s"SIZE")
node-macro =/ %s"DT_N" path-id %s"_REG_NAME_" dt-name
%s"_VAL_" ( %s"ADDRESS" / %s"SIZE")
node-macro =/ %s"DT_N" path-id %s"_REG_NAME_" dt-name "_EXISTS"
; The interrupts property is also special.
node-macro =/ %s"DT_N" path-id %s"_IRQ_NUM"
node-macro =/ %s"DT_N" path-id %s"_IRQ_LEVEL"
node-macro =/ %s"DT_N" path-id %s"_IRQ_IDX_" DIGIT "_EXISTS"
node-macro =/ %s"DT_N" path-id %s"_IRQ_IDX_" DIGIT
%s"_VAL_" dt-name [ %s"_EXISTS" ]
node-macro =/ %s"DT_N" path-id %s"_IRQ_IDX_" DIGIT %s"_CONTROLLER"
node-macro =/ %s"DT_N" path-id %s"_IRQ_NAME_" dt-name
%s"_VAL_" dt-name [ %s"_EXISTS" ]
node-macro =/ %s"DT_N" path-id %s"_IRQ_NAME_" dt-name "_CONTROLLER"
; The ranges property is also special.
node-macro =/ %s"DT_N" path-id %s"_RANGES_NUM"
node-macro =/ %s"DT_N" path-id %s"_RANGES_IDX_" DIGIT "_EXISTS"
node-macro =/ %s"DT_N" path-id %s"_RANGES_IDX_" DIGIT
%s"_VAL_" ( %s"CHILD_BUS_FLAGS" / %s"CHILD_BUS_ADDRESS" /
%s"PARENT_BUS_ADDRESS" / %s"LENGTH")
node-macro =/ %s"DT_N" path-id %s"_RANGES_IDX_" DIGIT
%s"_VAL_CHILD_BUS_FLAGS_EXISTS"
node-macro =/ %s"DT_N" path-id %s"_FOREACH_RANGE"
; Subnodes of the fixed-partitions compatible get macros which contain
; a unique ordinal value for each partition
node-macro =/ %s"DT_N" path-id %s"_PARTITION_ID" DIGIT
; Macros are generated for each of a node's compatibles;
; dt-name in this case is something like "vnd_device".
node-macro =/ %s"DT_N" path-id %s"_COMPAT_MATCHES_" dt-name
node-macro =/ %s"DT_N" path-id %s"_COMPAT_VENDOR_IDX_" DIGIT "_EXISTS"
node-macro =/ %s"DT_N" path-id %s"_COMPAT_VENDOR_IDX_" DIGIT
node-macro =/ %s"DT_N" path-id %s"_COMPAT_MODEL_IDX_" DIGIT "_EXISTS"
node-macro =/ %s"DT_N" path-id %s"_COMPAT_MODEL_IDX_" DIGIT
; Every non-root node gets one of these macros, which expands to the node
; identifier for that node's parent in the devicetree.
node-macro =/ %s"DT_N" path-id %s"_PARENT"
; These are used internally by DT_FOREACH_PROP_ELEM(_SEP)(_VARGS), which
; iterates over each property element.
node-macro =/ %s"DT_N" path-id %s"_P_" prop-id %s"_FOREACH_PROP_ELEM"
node-macro =/ %s"DT_N" path-id %s"_P_" prop-id %s"_FOREACH_PROP_ELEM_SEP"
node-macro =/ %s"DT_N" path-id %s"_P_" prop-id %s"_FOREACH_PROP_ELEM_VARGS"
node-macro =/ %s"DT_N" path-id %s"_P_" prop-id %s"_FOREACH_PROP_ELEM_SEP_VARGS"
; These are used by DT_CHILD_NUM and DT_CHILD_NUM_STATUS_OKAY macros
node-macro =/ %s"DT_N" path-id %s"_CHILD_NUM"
node-macro =/ %s"DT_N" path-id %s"_CHILD_NUM_STATUS_OKAY"
; These are used internally by DT_FOREACH_CHILD, which iterates over
; each child node.
node-macro =/ %s"DT_N" path-id %s"_FOREACH_CHILD"
node-macro =/ %s"DT_N" path-id %s"_FOREACH_CHILD_SEP"
node-macro =/ %s"DT_N" path-id %s"_FOREACH_CHILD_VARGS"
node-macro =/ %s"DT_N" path-id %s"_FOREACH_CHILD_SEP_VARGS"
; These are used internally by DT_FOREACH_CHILD_STATUS_OKAY, which iterates
; over each child node with status "okay".
node-macro =/ %s"DT_N" path-id %s"_FOREACH_CHILD_STATUS_OKAY"
node-macro =/ %s"DT_N" path-id %s"_FOREACH_CHILD_STATUS_OKAY_SEP"
node-macro =/ %s"DT_N" path-id %s"_FOREACH_CHILD_STATUS_OKAY_VARGS"
node-macro =/ %s"DT_N" path-id %s"_FOREACH_CHILD_STATUS_OKAY_SEP_VARGS"
; These are used internally by DT_FOREACH_NODELABEL and
; DT_FOREACH_NODELABEL_VARGS, which iterate over a node's node labels.
node-macro =/ %s"DT_N" path-id %s"_FOREACH_NODELABEL" [ %s"_VARGS" ]
; These are used internally by DT_NUM_NODELABELS
node-macro =/ %s"DT_N" path-id %s"_NODELABEL_NUM"
; The node's zero-based index in the list of its parent's child nodes.
node-macro =/ %s"DT_N" path-id %s"_CHILD_IDX"
; The node's status macro; dt-name in this case is something like "okay"
; or "disabled".
node-macro =/ %s"DT_N" path-id %s"_STATUS_" dt-name
; The node's dependency ordinal. This is a non-negative integer
; value that is used to represent dependency information.
node-macro =/ %s"DT_N" path-id %s"_ORD"
; The node's path, as a string literal
node-macro =/ %s"DT_N" path-id %s"_PATH"
; The node's name@unit-addr, as a string literal
node-macro =/ %s"DT_N" path-id %s"_FULL_NAME"
; The dependency ordinals of a node's requirements (direct dependencies).
node-macro =/ %s"DT_N" path-id %s"_REQUIRES_ORDS"
; The dependency ordinals of a node's supports (reverse direct dependencies).
node-macro =/ %s"DT_N" path-id %s"_SUPPORTS_ORDS"
; --------------------------------------------------------------------
; pinctrl-macro: a macro related to the pinctrl properties in a node
;
; These are a bit of a special case because they kind of form an array,
; but the array indexes correspond to pinctrl-DIGIT properties in a node.
;
; So they're related to a node, but not just one property within the node.
;
; The following examples assume something like this:
;
; foo {
; pinctrl-0 = <&bar>;
; pinctrl-1 = <&baz>;
; pinctrl-names = "default", "sleep";
; };
; Total number of pinctrl-DIGIT properties in the node. May be zero.
;
; #define DT_N_<node path>_PINCTRL_NUM 2
pinctrl-macro = %s"DT_N" path-id %s"_PINCTRL_NUM"
; A given pinctrl-DIGIT property exists.
;
; #define DT_N_<node path>_PINCTRL_IDX_0_EXISTS 1
; #define DT_N_<node path>_PINCTRL_IDX_1_EXISTS 1
pinctrl-macro =/ %s"DT_N" path-id %s"_PINCTRL_IDX_" DIGIT %s"_EXISTS"
; A given pinctrl property name exists.
;
; #define DT_N_<node path>_PINCTRL_NAME_default_EXISTS 1
; #define DT_N_<node path>_PINCTRL_NAME_sleep_EXISTS 1
pinctrl-macro =/ %s"DT_N" path-id %s"_PINCTRL_NAME_" dt-name %s"_EXISTS"
; The corresponding index number of a named pinctrl property.
;
; #define DT_N_<node path>_PINCTRL_NAME_default_IDX 0
; #define DT_N_<node path>_PINCTRL_NAME_sleep_IDX 1
pinctrl-macro =/ %s"DT_N" path-id %s"_PINCTRL_NAME_" dt-name %s"_IDX"
; The node identifier for the phandle in a named pinctrl property.
;
; #define DT_N_<node path>_PINCTRL_NAME_default_IDX_0_PH <node id for 'bar'>
;
; There's no need for a separate macro for access by index: that's
; covered by property-macro. We only need this because the map from
; names to properties is implicit in the structure of the DT.
pinctrl-macro =/ %s"DT_N" path-id %s"_PINCTRL_NAME_" dt-name %s"_IDX_" DIGIT %s"_PH"
; --------------------------------------------------------------------
; gpiohogs-macro: a macro related to GPIO hog nodes
;
; The following examples assume something like this:
;
; gpio1: gpio@... {
; compatible = "vnd,gpio";
; #gpio-cells = <2>;
;
; node-1 {
; gpio-hog;
; gpios = <0x0 0x10>, <0x1 0x20>;
; output-high;
; };
;
; node-2 {
; gpio-hog;
; gpios = <0x2 0x30>;
; output-low;
; };
; };
;
; Bindings fragment for the vnd,gpio compatible:
;
; gpio-cells:
; - pin
; - flags
; The node contains GPIO hogs.
;
; #define DT_N_<node-1 path>_GPIO_HOGS_EXISTS 1
; #define DT_N_<node-2 path>_GPIO_HOGS_EXISTS 1
gpiohogs-macro = %s"DT_N" path-id %s"_GPIO_HOGS_EXISTS"
; Number of hogged GPIOs in a node.
;
; #define DT_N_<node-1 path>_GPIO_HOGS_NUM 2
; #define DT_N_<node-2 path>_GPIO_HOGS_NUM 1
gpiohogs-macro =/ %s"DT_N" path-id %s"_GPIO_HOGS_NUM"
; A given logical GPIO hog array index exists.
;
; #define DT_N_<node-1 path>_GPIO_HOGS_IDX_0_EXISTS 1
; #define DT_N_<node-1 path>_GPIO_HOGS_IDX_1_EXISTS 1
; #define DT_N_<node-2 path>_GPIO_HOGS_IDX_0_EXISTS 1
gpiohogs-macro =/ %s"DT_N" path-id %s"_GPIO_HOGS_IDX_" DIGIT %s"_EXISTS"
; The node identifier for the phandle of a logical index in the GPIO hogs array.
; These macros are currently unused by Zephyr.
;
; #define DT_N_<node-1 path>_GPIO_HOGS_IDX_0_PH <node id for 'gpio1'>
; #define DT_N_<node-1 path>_GPIO_HOGS_IDX_1_PH <node id for 'gpio1'>
; #define DT_N_<node-2 path>_GPIO_HOGS_IDX_0_PH <node id for 'gpio1'>
gpiohogs-macro =/ %s"DT_N" path-id %s"_GPIO_HOGS_IDX_" DIGIT %s"_PH"
; The pin cell of a logical index in the GPIO hogs array exists.
;
; #define DT_N_<node-1 path>_GPIO_HOGS_IDX_0_VAL_pin_EXISTS 1
; #define DT_N_<node-1 path>_GPIO_HOGS_IDX_1_VAL_pin_EXISTS 1
; #define DT_N_<node-2 path>_GPIO_HOGS_IDX_0_VAL_pin_EXISTS 1
gpiohogs-macro =/ %s"DT_N" path-id %s"_GPIO_HOGS_IDX_" DIGIT %s"_VAL_pin_EXISTS"
; The value of the pin cell of a logical index in the GPIO hogs array.
;
; #define DT_N_<node-1 path>_GPIO_HOGS_IDX_0_VAL_pin 0
; #define DT_N_<node-1 path>_GPIO_HOGS_IDX_1_VAL_pin 1
; #define DT_N_<node-2 path>_GPIO_HOGS_IDX_0_VAL_pin 2
gpiohogs-macro =/ %s"DT_N" path-id %s"_GPIO_HOGS_IDX_" DIGIT %s"_VAL_pin"
; The flags cell of a logical index in the GPIO hogs array exists.
;
; #define DT_N_<node-1 path>_GPIO_HOGS_IDX_0_VAL_flags_EXISTS 1
; #define DT_N_<node-1 path>_GPIO_HOGS_IDX_1_VAL_flags_EXISTS 1
; #define DT_N_<node-2 path>_GPIO_HOGS_IDX_0_VAL_flags_EXISTS 1
gpiohogs-macro =/ %s"DT_N" path-id %s"_GPIO_HOGS_IDX_" DIGIT %s"_VAL_flags_EXISTS"
; The value of the flags cell of a logical index in the GPIO hogs array.
;
; #define DT_N_<node-1 path>_GPIO_HOGS_IDX_0_VAL_flags 0x10
; #define DT_N_<node-1 path>_GPIO_HOGS_IDX_1_VAL_flags 0x20
; #define DT_N_<node-2 path>_GPIO_HOGS_IDX_0_VAL_flags 0x30
gpiohogs-macro =/ %s"DT_N" path-id %s"_GPIO_HOGS_IDX_" DIGIT %s"_VAL_flags"
; --------------------------------------------------------------------
; property-macro: a macro related to a node property
;
; These combine a node identifier with a "lowercase-and-underscores form"
; property name. The value expands to something related to the property's
; value.
;
; The optional prop-suf suffix is when there's some specialized
; subvalue that deserves its own macro, like the macros for an array
; property's individual elements
;
; The "plain vanilla" macro for a property's value, with no prop-suf,
; looks like this:
;
; DT_N_<node path>_P_<property name>
;
; Components:
;
; - path-id: node's devicetree path converted to a C token
; - prop-id: node's property name converted to a C token
; - prop-suf: an optional property-specific suffix
property-macro = %s"DT_N" path-id %s"_P_" prop-id [prop-suf]
; --------------------------------------------------------------------
; path-id: a node's path-based macro identifier
;
; This in "lowercase-and-underscores" form. I.e. it is
; the node's devicetree path converted to a C token by changing:
;
; - each slash (/) to _S_
; - all letters to lowercase
; - non-alphanumerics characters to underscores
;
; For example, the leaf node "bar-BAZ" in this devicetree:
;
; / {
; foo@123 {
; bar-BAZ {};
; };
; };
;
; has path-id "_S_foo_123_S_bar_baz".
path-id = 1*( %s"_S_" dt-name )
; --------------------------------------------------------------------
; prop-id: a property identifier
;
; A property name converted to a C token by changing:
;
; - all letters to lowercase
; - non-alphanumeric characters to underscores
;
; Example node:
;
; chosen {
; zephyr,console = &uart1;
; WHY,AM_I_SHOUTING = "unclear";
; };
;
; The 'zephyr,console' property has prop-id 'zephyr_console'.
; 'WHY,AM_I_SHOUTING' has prop-id 'why_am_i_shouting'.
prop-id = dt-name
; --------------------------------------------------------------------
; prop-suf: a property-specific macro suffix
;
; Extra macros are generated for properties:
;
; - that are special to the specification ("reg", "interrupts", etc.)
; - with array types (uint8-array, phandle-array, etc.)
; - with "enum:" in their bindings
; - that have zephyr device API specific macros for phandle-arrays
; - related to phandle specifier names ("foo-names")
;
; Here are some examples:
;
; - _EXISTS: property, index or name existence flag
; - _SIZE: logical property length
; - _IDX_<i>: values of individual array elements
; - _IDX_<DIGIT>_VAL_<dt-name>: values of individual specifier
; cells within a phandle array
; - _ADDR_<i>: for reg properties, the i-th register block address
; - _LEN_<i>: for reg properties, the i-th register block length
;
; The different cases are not exhaustively documented here to avoid
; this file going stale. Please see devicetree.h if you need to know
; the details.
prop-suf = 1*( "_" gen-name ["_" dt-name] )
; --------------------------------------------------------------------
; other-macro: grab bag for everything that isn't a node-macro.
; See examples below.
other-macro = %s"DT_N_" alternate-id
; Total count of enabled instances of a compatible.
other-macro =/ %s"DT_N_INST_" dt-name %s"_NUM_OKAY"
; These are used internally by DT_FOREACH_NODE and
; DT_FOREACH_STATUS_OKAY_NODE respectively.
other-macro =/ %s"DT_FOREACH_HELPER"
other-macro =/ %s"DT_FOREACH_OKAY_HELPER"
; These are used internally by DT_FOREACH_STATUS_OKAY,
; which iterates over each enabled node of a compatible.
other-macro =/ %s"DT_FOREACH_OKAY_" dt-name
other-macro =/ %s"DT_FOREACH_OKAY_VARGS_" dt-name
; These are used internally by DT_INST_FOREACH_STATUS_OKAY,
; which iterates over each enabled instance of a compatible.
other-macro =/ %s"DT_FOREACH_OKAY_INST_" dt-name
other-macro =/ %s"DT_FOREACH_OKAY_INST_VARGS_" dt-name
; E.g.: #define DT_CHOSEN_zephyr_flash
other-macro =/ %s"DT_CHOSEN_" dt-name
; Declares that a compatible has at least one node on a bus.
; Example:
;
; #define DT_COMPAT_vnd_dev_BUS_spi 1
other-macro =/ %s"DT_COMPAT_" dt-name %s"_BUS_" dt-name
; Declares that a compatible has at least one status "okay" node.
; Example:
;
; #define DT_COMPAT_HAS_OKAY_vnd_dev 1
other-macro =/ %s"DT_COMPAT_HAS_OKAY_" dt-name
; Currently used to allow mapping a lowercase-and-underscores "label"
; property to a fixed-partitions node. See the flash map API docs
; for an example.
other-macro =/ %s"DT_COMPAT_" dt-name %s"_LABEL_" dt-name
; --------------------------------------------------------------------
; alternate-id: another way to specify a node besides a path-id
;
; Example devicetree:
;
; / {
; aliases {
; dev = &dev_1;
; };
;
; soc {
; dev_1: device@123 {
; compatible = "vnd,device";
; };
; };
; };
;
; Node device@123 has these alternate-id values:
;
; - ALIAS_dev
; - NODELABEL_dev_1
; - INST_0_vnd_device
;
; The full alternate-id macros are:
;
; #define DT_N_INST_0_vnd_device DT_N_S_soc_S_device_123
; #define DT_N_ALIAS_dev DT_N_S_soc_S_device_123
; #define DT_N_NODELABEL_dev_1 DT_N_S_soc_S_device_123
;
; These mainly exist to allow pasting an alternate-id macro onto a
; "_P_<prop-id>" to access node properties given a node's alias, etc.
;
; Notice that "inst"-type IDs have a leading instance identifier,
; which is generated by the devicetree scripts. The other types of
; alternate-id begin immediately with names taken from the devicetree.
alternate-id = ( %s"ALIAS" / %s"NODELABEL" ) dt-name
alternate-id =/ %s"INST_" 1*DIGIT "_" dt-name
; --------------------------------------------------------------------
; miscellaneous helper definitions
; A dt-name is one or more:
; - lowercase ASCII letters (a-z)
; - numbers (0-9)
; - underscores ("_")
;
; They are the result of converting names or combinations of names
; from devicetree to a valid component of a C identifier by
; lowercasing letters (in practice, this is a no-op) and converting
; non-alphanumeric characters to underscores.
;
; You'll see these referred to as "lowercase-and-underscores" forms of
; various devicetree identifiers throughout the documentation.
dt-name = 1*( lower / DIGIT / "_" )
; gen-name is used as a stand-in for a component of a generated macro
; name which does not come from devicetree (dt-name covers that case).
;
; - uppercase ASCII letters (A-Z)
; - numbers (0-9)
; - underscores ("_")
gen-name = upper 1*( upper / DIGIT / "_" )
; "lowercase ASCII letter" turns out to be pretty annoying to specify
; in RFC-7405 syntax.
;
; This is just ASCII letters a (0x61) through z (0x7a).
lower = %x61-7A
; "uppercase ASCII letter" in RFC-7405 syntax
upper = %x41-5A
``` | /content/code_sandbox/doc/build/dts/macros.bnf | unknown | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 5,311 |
```restructuredtext
.. _devicetree-in-out-files:
Input and output files
######################
This section describes the input and output files shown in the figure in
:ref:`devicetree-scope-purpose` in more detail.
.. figure:: zephyr_dt_inputs_outputs.svg
:figclass: align-center
Devicetree input (green) and output (yellow) files
.. _dt-input-files:
Input files
***********
There are four types of devicetree input files:
- sources (``.dts``)
- includes (``.dtsi``)
- overlays (``.overlay``)
- bindings (``.yaml``)
The devicetree files inside the :file:`zephyr` directory look like this:
.. code-block:: none
boards/<ARCH>/<BOARD>/<BOARD>.dts
dts/common/skeleton.dtsi
dts/<ARCH>/.../<SOC>.dtsi
dts/bindings/.../binding.yaml
Generally speaking, every supported board has a :file:`BOARD.dts` file
describing its hardware. For example, the ``reel_board`` has
:zephyr_file:`boards/phytec/reel_board/reel_board.dts`.
:file:`BOARD.dts` includes one or more ``.dtsi`` files. These ``.dtsi`` files
describe the CPU or system-on-chip Zephyr runs on, perhaps by including other
``.dtsi`` files. They can also describe other common hardware features shared by
multiple boards. In addition to these includes, :file:`BOARD.dts` also describes
the board's specific hardware.
The :file:`dts/common` directory contains :file:`skeleton.dtsi`, a minimal
include file for defining a complete devicetree. Architecture-specific
subdirectories (:file:`dts/<ARCH>`) contain ``.dtsi`` files for CPUs or SoCs
which extend :file:`skeleton.dtsi`.
The C preprocessor is run on all devicetree files to expand macro references,
and includes are generally done with ``#include <filename>`` directives, even
though DTS has a ``/include/ "<filename>"`` syntax.
:file:`BOARD.dts` can be extended or modified using *overlays*. Overlays are
also DTS files; the :file:`.overlay` extension is just a convention which makes
their purpose clear. Overlays adapt the base devicetree for different purposes:
- Zephyr applications can use overlays to enable a peripheral that is disabled
by default, select a sensor on the board for an application specific purpose,
etc. Along with :ref:`kconfig`, this makes it possible to reconfigure the
kernel and device drivers without modifying source code.
- Overlays are also used when defining :ref:`shields`.
The build system automatically picks up :file:`.overlay` files stored in
certain locations. It is also possible to explicitly list the overlays to
include, via the :makevar:`DTC_OVERLAY_FILE` CMake variable. See
:ref:`set-devicetree-overlays` for details.
The build system combines :file:`BOARD.dts` and any :file:`.overlay` files by
concatenating them, with the overlays put last. This relies on DTS syntax which
allows merging overlapping definitions of nodes in the devicetree. See
:ref:`dt_k6x_example` for an example of how this works (in the context of
``.dtsi`` files, but the principle is the same for overlays). Putting the
contents of the :file:`.overlay` files last allows them to override
:file:`BOARD.dts`.
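
For instance, if a hypothetical board file disables a UART node labeled
``uart1``, an application overlay can re-enable it; because the overlay's
contents come last, its setting wins:

.. code-block:: devicetree

   /* app.overlay: the uart1 label is made up for this example */
   &uart1 {
           status = "okay";
   };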
:ref:`dt-bindings` (which are YAML files) are essentially glue. They describe
the contents of devicetree sources, includes, and overlays in a way that allows
the build system to generate C macros usable by device drivers and
applications. The :file:`dts/bindings` directory contains bindings.
.. _dt-scripts:
Scripts and tools
*****************
The following libraries and scripts, located in :zephyr_file:`scripts/dts/`,
create output files from input files. Their sources have extensive
documentation.
:zephyr_file:`dtlib.py <scripts/dts/python-devicetree/src/devicetree/dtlib.py>`
A low-level DTS parsing library.
:zephyr_file:`edtlib.py <scripts/dts/python-devicetree/src/devicetree/edtlib.py>`
A library layered on top of dtlib that uses bindings to interpret
properties and give a higher-level view of the devicetree. Uses dtlib to do
the DTS parsing.
:zephyr_file:`gen_defines.py <scripts/dts/gen_defines.py>`
A script that uses edtlib to generate C preprocessor macros from the
devicetree and bindings.
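
As a rough sketch of the kind of name mangling these scripts perform when
turning devicetree names into components of C identifiers (illustrative
Python only, not the actual edtlib code):

```python
import re

def c_identifier(dt_name: str) -> str:
    """Lowercase a devicetree name and replace every character that
    is not a lowercase letter, digit, or underscore with '_'."""
    return re.sub(r"[^a-z0-9_]", "_", dt_name.lower())

# A compatible and a node name become valid macro components:
print(c_identifier("vnd,serial"))  # vnd_serial
print(c_identifier("device@123"))  # device_123
```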
In addition to these, the standard ``dtc`` (devicetree compiler) tool is run on
the final devicetree if it is installed on your system. This is just to catch
errors or warnings. The output is unused. Boards may need to pass ``dtc``
additional flags, e.g. for warning suppression. Board directories can contain a
file named :file:`pre_dt_board.cmake` which configures these extra flags, like
this:
.. code-block:: cmake
list(APPEND EXTRA_DTC_FLAGS "-Wno-simple_bus_reg")
.. _dt-outputs:
Output files
************
These are created in your application's build directory.
.. warning::
Don't include the header files directly. :ref:`dt-from-c` explains
what to do instead.
:file:`<build>/zephyr/zephyr.dts.pre`
The preprocessed DTS source. This is an intermediate output file, which is
input to :file:`gen_defines.py` and used to create :file:`zephyr.dts` and
:file:`devicetree_generated.h`.
:file:`<build>/zephyr/include/generated/zephyr/devicetree_generated.h`
The generated macros and additional comments describing the devicetree.
Included by ``devicetree.h``.
:file:`<build>/zephyr/zephyr.dts`
The final merged devicetree. This file is output by :file:`gen_defines.py`.
It is useful for debugging any issues. If the devicetree compiler ``dtc`` is
installed, it is also run on this file, to catch any additional warnings or
errors.
``` | /content/code_sandbox/doc/build/dts/intro-input-output.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 1,438 |
```restructuredtext
.. _flashing-soc-board-config:
Flashing configuration
######################
Zephyr supports setting up configuration for flash runners (invoked from
:ref:`west flash<west-flashing>`), which allows customising how commands are used when
programming boards. This configuration is used for :ref:`sysbuild` projects and allows
configuring when commands are run for groups of board targets. As an example, a multi-core SoC
might want to allow the ``--erase`` argument to be used only once for all of the cores in the
SoC; this prevents multiple erase tasks from running in a single ``west flash`` invocation,
which could wrongly clear memory used by the other images being programmed.
Priority
********
Flashing configuration is only read from a single location. This configuration can reside in
the following files, listed from highest priority to lowest:
* ``soc.yml`` (in soc folder)
* ``board.yml`` (in board folder)
Configuration
*************
Configuration is applied in the YAML file using a ``runners`` map with a single ``run_once``
child. This contains a map of commands as they would be provided to the flash runner (e.g.
``--reset``), each followed by a list which specifies the settings for that command (these
are grouped by flash runner, and by qualifiers/boards). Each command declares the runners it
applies to using a ``runners`` list value; this can contain ``all`` if it applies to all
runners, otherwise it must name each runner it applies to.
Groups of board targets can be specified using the ``groups`` key, which has a list of board
target sets. Board targets are regular expression matches. For ``soc.yml`` files, each set of
board targets must be in a ``qualifiers`` key (only regular expression matches for board
qualifiers are allowed; board names must be omitted from these entries). For ``board.yml``
files, each set of board targets must be in a ``boards`` key. These are lists containing the
matches which form a single group. A final parameter ``run`` can be set to ``first``, which
means the command is run once with the first image flashed per set of board targets, or to
``last``, which means it is run once with the final image flashed per set of board targets.
An example flashing configuration for a ``soc.yml`` is shown below, in which the ``--recover``
command will only be used once for any board targets using the nRF5340 SoC application or
network CPU cores, and the network or application core will only be reset after all images for
the respective core have been flashed.
.. code-block:: yaml
runners:
run_once:
'--recover':
- run: first
runners:
- nrfjprog
groups:
- qualifiers:
- nrf5340/cpunet
- nrf5340/cpuapp
- nrf5340/cpuapp/ns
'--reset':
- run: last
runners:
- nrfjprog
- jlink
groups:
- qualifiers:
- nrf5340/cpunet
- qualifiers:
- nrf5340/cpuapp
- nrf5340/cpuapp/ns
# Made up non-real world example to show how to specify different options for different
# flash runners
- run: first
runners:
- some_other_runner
groups:
- qualifiers:
- nrf5340/cpunet
- qualifiers:
- nrf5340/cpuapp
- nrf5340/cpuapp/ns
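
A ``board.yml`` uses the same structure, with ``boards`` keys in place of
``qualifiers``. A made-up sketch (the board name and command choice here are
hypothetical):

.. code-block:: yaml

   runners:
     run_once:
       '--erase':
         - run: first
           runners:
             - all
           groups:
             - boards:
                 - some_board/.*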
Usage
*****
Commands that are supported by flash runners can be used as normal when flashing non-sysbuild
applications; the run-once configuration will not be used. When flashing a sysbuild project
with multiple images, the flash runner run-once configuration will be applied.
For example, building the :zephyr:code-sample:`smp-svr` sample for the nrf5340dk which will
include MCUboot as a secondary image:
.. code-block:: console
cmake -GNinja -Sshare/sysbuild/ -Bbuild -DBOARD=nrf5340dk/nrf5340/cpuapp -DAPP_DIR=samples/subsys/mgmt/mcumgr/smp_svr
cmake --build build
Once built with an nrf5340dk connected, the following command can be used to flash the board with
both applications and will only perform a single device recovery operation when programming the
first image:
.. code-block:: console
west flash --recover
If the above were run without the flashing configuration, the recovery process would run twice
and the device would be left unbootable.
``` | /content/code_sandbox/doc/build/flashing/configuration.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 1,009 |
```restructuredtext
.. _flashing:
Flashing
########
.. toctree::
:maxdepth: 1
configuration.rst
``` | /content/code_sandbox/doc/build/flashing/index.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 29 |
```restructuredtext
.. _devicetree_api:
Devicetree API
##############
This is a reference page for the ``<zephyr/devicetree.h>`` API. The API is macro
based. Use of these macros has no impact on scheduling. They can be used from
any calling context and at file scope.
Some of these -- the ones beginning with ``DT_INST_`` -- require a special
macro named ``DT_DRV_COMPAT`` to be defined before they can be used; these are
discussed individually below. These macros are generally meant for use within
:ref:`device drivers <device_model_api>`, though they can be used outside of
drivers with appropriate care.
.. contents:: Contents
:local:
.. _devicetree-generic-apis:
Generic APIs
************
The APIs in this section can be used anywhere and do not require
``DT_DRV_COMPAT`` to be defined.
Node identifiers and helpers
============================
A *node identifier* is a way to refer to a devicetree node at C preprocessor
time. While node identifiers are not C values, you can use them to access
devicetree data in C rvalue form using, for example, the
:ref:`devicetree-property-access` API.
The root node ``/`` has node identifier ``DT_ROOT``. You can create node
identifiers for other devicetree nodes using :c:func:`DT_PATH`,
:c:func:`DT_NODELABEL`, :c:func:`DT_ALIAS`, and :c:func:`DT_INST`.
There are also :c:func:`DT_PARENT` and :c:func:`DT_CHILD` macros which can be
used to create node identifiers for a given node's parent node or a particular
child node, respectively.
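
For example, in the following made-up fragment, ``DT_PATH(soc, device_123)``,
``DT_NODELABEL(dev_1)``, and ``DT_ALIAS(dev)`` all produce node identifiers
for the same ``device@123`` node:

.. code-block:: devicetree

   / {
           aliases {
                   dev = &dev_1;
           };

           soc {
                   dev_1: device@123 {
                           compatible = "vnd,device";
                   };
           };
   };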
The following macros create or operate on node identifiers.
.. doxygengroup:: devicetree-generic-id
.. _devicetree-property-access:
Property access
===============
The following general-purpose macros can be used to access node properties.
There are special-purpose APIs for accessing the :ref:`devicetree-ranges-property`,
:ref:`devicetree-reg-property` and :ref:`devicetree-interrupts-property`.
Property values can be read using these macros even if the node is disabled,
as long as it has a matching binding.
.. doxygengroup:: devicetree-generic-prop
.. _devicetree-ranges-property:
``ranges`` property
===================
Use these APIs instead of :ref:`devicetree-property-access` to access the
``ranges`` property. Because this property's semantics are defined by the
devicetree specification, these macros can be used even for nodes without
matching bindings. However, they take on special semantics when the node's
binding indicates it is a PCIe bus node, as defined in the
`PCI Bus Binding to: IEEE Std 1275-1994 Standard for Boot (Initialization Configuration) Firmware`_
.. _PCI Bus Binding to\: IEEE Std 1275-1994 Standard for Boot (Initialization Configuration) Firmware:
path_to_url
.. doxygengroup:: devicetree-ranges-prop
.. _devicetree-reg-property:
``reg`` property
================
Use these APIs instead of :ref:`devicetree-property-access` to access the
``reg`` property. Because this property's semantics are defined by the
devicetree specification, these macros can be used even for nodes without
matching bindings.
.. doxygengroup:: devicetree-reg-prop
.. _devicetree-interrupts-property:
``interrupts`` property
=======================
Use these APIs instead of :ref:`devicetree-property-access` to access the
``interrupts`` property.
Because this property's semantics are defined by the devicetree specification,
some of these macros can be used even for nodes without matching bindings. This
does not apply to macros which take cell names as arguments.
.. doxygengroup:: devicetree-interrupts-prop
For-each macros
===============
There are "generic" for-each macros, such as :c:func:`DT_FOREACH_CHILD`,
which allows iterating over the children of a devicetree node.
There are special-purpose for-each macros, like
:c:func:`DT_INST_FOREACH_STATUS_OKAY`, but these require ``DT_DRV_COMPAT`` to
be defined before use.
.. doxygengroup:: devicetree-generic-foreach
Existence checks
================
This section documents miscellaneous macros that can be used to test if a node
exists, how many nodes of a certain type exist, whether a node has certain
properties, etc. Some macros used for special purposes (such as
:c:func:`DT_IRQ_HAS_IDX` and all macros which require ``DT_DRV_COMPAT``) are
documented elsewhere on this page.
.. doxygengroup:: devicetree-generic-exist
.. _devicetree-dep-ord:
Inter-node dependencies
=======================
The ``devicetree.h`` API has some support for tracking dependencies between
nodes. Dependency tracking relies on a binary "depends on" relation between
devicetree nodes, which is defined as the `transitive closure
<path_to_url>`_ of the following "directly
depends on" relation:
- every non-root node directly depends on its parent node
- a node directly depends on any nodes its properties refer to by phandle
- a node directly depends on its ``interrupt-parent`` if it has an
``interrupts`` property
- a parent node inherits all dependencies from its child nodes
A *dependency ordering* of a devicetree is a list of its nodes, where each node
``n`` appears earlier in the list than any nodes that depend on ``n``. A node's
*dependency ordinal* is then its zero-based index in that list. Thus, for two
distinct devicetree nodes ``n1`` and ``n2`` with dependency ordinals ``d1`` and
``d2``, we have:
- ``d1 != d2``
- if ``n1`` depends on ``n2``, then ``d1 > d2``
- ``d1 > d2`` does **not** necessarily imply that ``n1`` depends on ``n2``
The Zephyr build system chooses a dependency ordering of the final devicetree
and assigns a dependency ordinal to each node. Dependency related information
can be accessed using the following macros. The exact dependency ordering
chosen is an implementation detail, but cyclic dependencies are detected and
cause errors, so it's safe to assume there are none when using these macros.
There are instance number-based conveniences as well; see
:c:func:`DT_INST_DEP_ORD` and subsequent documentation.
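
To illustrate the ordering property only (this is not how the build system
actually computes it), a dependency ordering for a toy acyclic graph can be
produced with a simple topological sort:

```python
def dependency_ordering(depends_on):
    """Return a list where each node appears before any node that
    depends on it. depends_on maps a node to the set of nodes it
    directly depends on; the graph must be acyclic (the build
    system rejects devicetrees with dependency cycles)."""
    order, placed = [], set()
    while len(order) < len(depends_on):
        for node in sorted(depends_on):
            if node not in placed and depends_on[node] <= placed:
                order.append(node)
                placed.add(node)
    return order

# Toy tree: both devices depend on their parent "soc", which
# depends on the root; dev1 also refers to dev0 by phandle.
order = dependency_ordering({
    "/": set(),
    "soc": {"/"},
    "dev0": {"soc"},
    "dev1": {"soc", "dev0"},
})
# A node's index in the list is its dependency ordinal.
print(order)  # ['/', 'soc', 'dev0', 'dev1']
```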
.. doxygengroup:: devicetree-dep-ord
Bus helpers
===========
Zephyr's devicetree bindings language supports a ``bus:`` key which allows
bindings to declare that nodes with a given compatible describe system buses.
In this case, child nodes are considered to be on a bus of the given type, and
the following APIs may be used.
.. doxygengroup:: devicetree-generic-bus
.. _devicetree-inst-apis:
Instance-based APIs
*******************
These are recommended for use within device drivers. To use them, define
``DT_DRV_COMPAT`` to the lowercase-and-underscores compatible the device driver
implements support for. Here is an example devicetree fragment:
.. code-block:: devicetree
serial@40001000 {
compatible = "vnd,serial";
status = "okay";
current-speed = <115200>;
};
Example usage, assuming ``serial@40001000`` is the only enabled node
with compatible ``vnd,serial``:
.. code-block:: c
#define DT_DRV_COMPAT vnd_serial
DT_DRV_INST(0) // node identifier for serial@40001000
DT_INST_PROP(0, current_speed) // 115200
.. warning::
Be careful making assumptions about instance numbers. See :c:func:`DT_INST`
for the API guarantees.
As shown above, the ``DT_INST_*`` APIs are conveniences for addressing nodes by
instance number. They are almost all defined in terms of one of the
:ref:`devicetree-generic-apis`. The equivalent generic API can be found by
removing ``INST_`` from the macro name. For example, ``DT_INST_PROP(inst,
prop)`` is equivalent to ``DT_PROP(DT_DRV_INST(inst), prop)``. Similarly,
``DT_INST_REG_ADDR(inst)`` is equivalent to ``DT_REG_ADDR(DT_DRV_INST(inst))``,
and so on. There are some exceptions: :c:func:`DT_ANY_INST_ON_BUS_STATUS_OKAY`
and :c:func:`DT_INST_FOREACH_STATUS_OKAY` are special-purpose helpers without
straightforward generic equivalents.
Since ``DT_DRV_INST()`` requires ``DT_DRV_COMPAT`` to be defined, it's an error
to use any of these without that macro defined.
Note that there are also helpers available for
specific hardware; these are documented in :ref:`devicetree-hw-api`.
.. doxygengroup:: devicetree-inst
.. _devicetree-hw-api:
Hardware specific APIs
**********************
The following APIs can also be used by including ``<zephyr/devicetree.h>``;
no additional include is needed.
.. _devicetree-can-api:
CAN
===
These conveniences may be used for nodes which describe CAN
controllers/transceivers, and properties related to them.
.. doxygengroup:: devicetree-can
Clocks
======
These conveniences may be used for nodes which describe clock sources, and
properties related to them.
.. doxygengroup:: devicetree-clocks
DMA
===
These conveniences may be used for nodes which describe direct memory access
controllers or channels, and properties related to them.
.. doxygengroup:: devicetree-dmas
.. _devicetree-flash-api:
Fixed flash partitions
======================
These conveniences may be used for the special-purpose ``fixed-partitions``
compatible used to encode information about flash memory partitions in the
devicetree. See :dtcompatible:`fixed-partitions` for more details.
.. doxygengroup:: devicetree-fixed-partition
.. _devicetree-gpio-api:
GPIO
====
These conveniences may be used for nodes which describe GPIO controllers/pins,
and properties related to them.
.. doxygengroup:: devicetree-gpio
IO channels
===========
These are commonly used by device drivers which need to use IO
channels (e.g. ADC or DAC channels) for conversion.
.. doxygengroup:: devicetree-io-channels
.. _devicetree-mbox-api:
MBOX
====
These conveniences may be used for nodes which describe MBOX controllers/users,
and properties related to them.
.. doxygengroup:: devicetree-mbox
.. _devicetree-pinctrl-api:
Pinctrl (pin control)
=====================
These are used to access pin control properties by name or index.
Devicetree nodes may have properties which specify pin control (sometimes known
as pin mux) settings. These are expressed using ``pinctrl-<index>`` properties
within the node, where the ``<index>`` values are contiguous integers starting
from 0. These may also be named using the ``pinctrl-names`` property.
Here is an example:
.. code-block:: devicetree
node {
...
pinctrl-0 = <&foo &bar ...>;
pinctrl-1 = <&baz ...>;
pinctrl-names = "default", "sleep";
};
Above, ``pinctrl-0`` has name ``"default"``, and ``pinctrl-1`` has name
``"sleep"``. The ``pinctrl-<index>`` property values contain phandles. The
``&foo``, ``&bar``, etc. phandles within the properties point to nodes whose
contents vary by platform, and which describe a pin configuration for the node.
.. doxygengroup:: devicetree-pinctrl
PWM
===
These conveniences may be used for nodes which describe PWM controllers and
properties related to them.
.. doxygengroup:: devicetree-pwms
Reset Controller
================
These conveniences may be used for nodes which describe reset controllers and
properties related to them.
.. doxygengroup:: devicetree-reset-controller
SPI
===
These conveniences may be used for nodes which describe either SPI controllers
or devices, depending on the case.
.. doxygengroup:: devicetree-spi
.. _devicetree-chosen-nodes:
Chosen nodes
************
The special ``/chosen`` node contains properties whose values describe
system-wide settings. The :c:func:`DT_CHOSEN()` macro can be used to get a node
identifier for a chosen node.
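
For example, given a ``/chosen`` node like the following (the ``&uart0``
label is hypothetical), ``DT_CHOSEN(zephyr_console)`` expands to the node
identifier for the console UART:

.. code-block:: devicetree

   / {
           chosen {
                   zephyr,console = &uart0;
           };
   };

Note that the comma in ``zephyr,console`` becomes an underscore in the
lowercase-and-underscores form passed to the macro.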
.. doxygengroup:: devicetree-generic-chosen
Zephyr-specific chosen nodes
****************************
The following table documents some commonly used Zephyr-specific chosen nodes.
Sometimes, a chosen node's label property will be used to set the default value
of a Kconfig option which in turn configures a hardware-specific device. This
is usually for backwards compatibility in cases when the Kconfig option
predates devicetree support in Zephyr. In other cases, there is no Kconfig
option, and the devicetree node is used directly in the source code to select a
device.
.. Documentation maintainers: please keep this sorted by property name
.. list-table:: Zephyr-specific chosen properties
:header-rows: 1
:widths: 25 75
* - Property
- Purpose
* - zephyr,bt-c2h-uart
- Selects the UART used for host communication in the
:ref:`bluetooth-hci-uart-sample`
* - zephyr,bt-mon-uart
- Sets UART device used for the Bluetooth monitor logging
* - zephyr,bt-hci
- Selects the HCI device used by the Bluetooth host stack
* - zephyr,canbus
- Sets the default CAN controller
* - zephyr,ccm
- Core-Coupled Memory node on some STM32 SoCs
* - zephyr,code-partition
- Flash partition that the Zephyr image's text section should be linked
into
* - zephyr,console
- Sets UART device used by console driver
* - zephyr,display
- Sets the default display controller
* - zephyr,keyboard-scan
- Sets the default keyboard scan controller
* - zephyr,dtcm
- Data Tightly Coupled Memory node on some Arm SoCs
* - zephyr,entropy
- A device which can be used as a system-wide entropy source
* - zephyr,flash
- A node whose ``reg`` is sometimes used to set the defaults for
:kconfig:option:`CONFIG_FLASH_BASE_ADDRESS` and :kconfig:option:`CONFIG_FLASH_SIZE`
* - zephyr,flash-controller
- The node corresponding to the flash controller device for
the ``zephyr,flash`` node
* - zephyr,gdbstub-uart
- Sets UART device used by the :ref:`gdbstub` subsystem
* - zephyr,ieee802154
- Used by the networking subsystem to set the IEEE 802.15.4 device
* - zephyr,ipc
- Used by the OpenAMP subsystem to specify the inter-process communication
(IPC) device
* - zephyr,ipc_shm
- A node whose ``reg`` is used by the OpenAMP subsystem to determine the
base address and size of the shared memory (SHM) usable for
interprocess-communication (IPC)
* - zephyr,itcm
- Instruction Tightly Coupled Memory node on some Arm SoCs
* - zephyr,log-uart
- Sets the UART device(s) used by the logging subsystem's UART backend.
If defined, the UART log backend would output to the devices listed in this node.
* - zephyr,ocm
- On-chip memory node on Xilinx Zynq-7000 and ZynqMP SoCs
* - zephyr,osdp-uart
- Sets UART device used by OSDP subsystem
* - zephyr,ot-uart
- Used by the OpenThread to specify UART device for Spinel protocol
* - zephyr,pcie-controller
- The node corresponding to the PCIe Controller
* - zephyr,ppp-uart
- Sets UART device used by PPP
* - zephyr,settings-partition
- Fixed partition node. If defined this selects the partition used
by the NVS and FCB settings backends.
* - zephyr,shell-uart
- Sets UART device used by serial shell backend
* - zephyr,sram
- A node whose ``reg`` sets the base address and size of SRAM memory
available to the Zephyr image, used during linking
* - zephyr,tracing-uart
- Sets UART device used by tracing subsystem
* - zephyr,uart-mcumgr
- UART used for :ref:`device_mgmt`
* - zephyr,uart-pipe
- Sets UART device used by serial pipe driver
* - zephyr,usb-device
- USB device node. If defined and has a ``vbus-gpios`` property, these
will be used by the USB subsystem to enable/disable VBUS
``` | /content/code_sandbox/doc/build/dts/api/api.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 3,894 |
```restructuredtext
.. _snippets:
Snippets
########
Snippets are a way to save build system settings in one place, and then use
those settings when you build any Zephyr application. This lets you save common
configuration separately when it applies to multiple different applications.
Some example use cases for snippets are:
- changing your board's console backend from a "real" UART to a USB CDC-ACM UART
- enabling frequently-used debugging options
- applying interrelated configuration settings to your "main" CPU and a
co-processor core on an AMP SoC
The following pages document this feature.
.. toctree::
:maxdepth: 1
using.rst
/snippets/index.rst
writing.rst
design.rst
``` | /content/code_sandbox/doc/build/snippets/index.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 162 |
```restructuredtext
Snippets Design
###############
This page documents design goals for the snippets feature.
Further information can be found in `Issue #51834`_.
.. _Issue #51834: path_to_url
- **extensible**: for example, it is possible to add board support for an
existing built-in snippet without modifying the zephyr repository
- **composable**: it is possible to use multiple snippets at once, for example
using:
.. code-block:: console
west build -S <snippet1> -S <snippet2> ...
- **able to combine multiple types of configuration**: snippets make it possible
to store multiple different types of build system settings in one place, and
apply them all together
- **specializable**: for example, it is possible to customize a snippet's
behavior for a particular board, or board revision
- **future-proof and backwards-compatible**: arbitrary future changes to the
snippets feature will be possible without breaking backwards compatibility
for older snippets
- **applicable to purely "software" changes**: unlike the shields feature,
snippets do not assume the presence of a "daughterboard", "shield", "hat", or
any other type of external assembly which is connected to the main board
- **DRY** (don't repeat yourself): snippets allow you to skip unnecessary
repetition; for example, you can apply the same board-specific configuration
to boards ``foo`` and ``bar`` by specifying ``/(foo|bar)/`` as a regular
expression for the settings, which will then apply to both boards
``` | /content/code_sandbox/doc/build/snippets/design.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 334 |
```restructuredtext
.. _using-snippets:
Using Snippets
##############
.. tip::
See :ref:`built-in-snippets` for a list of snippets that are provided by
Zephyr.
Snippets have names. You use snippets by giving their names to the build
system.
With west build
***************
To use a snippet named ``foo`` when building an application ``app``:
.. code-block:: console
west build -S foo app
To use multiple snippets:
.. code-block:: console
west build -S snippet1 -S snippet2 [...] app
With cmake
**********
If you are running CMake directly instead of using ``west build``, use the
``SNIPPET`` variable. This is a whitespace- or semicolon-separated list of
snippet names you want to use. For example:
.. code-block:: console
cmake -Sapp -Bbuild -DSNIPPET="snippet1;snippet2" [...]
cmake --build build
Application required snippets
*****************************
If an application should always be compiled with a given snippet, it
can be added to that application's ``CMakeLists.txt`` file. For example:
.. code-block:: cmake
if(NOT snippet1 IN_LIST SNIPPET)
set(SNIPPET snippet1 ${SNIPPET} CACHE STRING "" FORCE)
endif()
find_package(Zephyr ....)
``` | /content/code_sandbox/doc/build/snippets/using.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 290 |
```restructuredtext
.. _cmake-details:
Build System (CMake)
********************
CMake is used to build your application together with the Zephyr kernel. A
CMake build is done in two stages. The first stage is called
**configuration**. During configuration, the CMakeLists.txt build scripts are
executed. After configuration is finished, CMake has an internal model of the
Zephyr build, and can generate build scripts that are native to the host
platform.
CMake supports generating scripts for several build systems, but only Ninja and
Make are tested and supported by Zephyr. After configuration, you begin the
**build** stage by executing the generated build scripts. These build scripts
can recompile the application without involving CMake following
most code changes. However, after certain changes, the configuration step must
be executed again before building. The build scripts can detect some of these
situations and reconfigure automatically, but there are cases when this must be
done manually.
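If in doubt, the configuration step can be forced to run again. For example,
``west build`` provides the ``-c`` (``--cmake``) option for exactly this:

.. code-block:: console

   west build -c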
Zephyr uses CMake's concept of a 'target' to organize the build. A
target can be an executable, a library, or a generated file. For
application developers, the library target is the most important to
understand. All source code that goes into a Zephyr build does so by
being included in a library target, even application code.
Library targets have source code that is added through :file:`CMakeLists.txt`
build scripts like this:
.. code-block:: cmake
target_sources(app PRIVATE src/main.c)
In the above :file:`CMakeLists.txt`, an existing library target named ``app``
is configured to include the source file :file:`src/main.c`. The ``PRIVATE``
keyword indicates that we are modifying the internals of how the library is
being built. Using the keyword ``PUBLIC`` would modify how other
libraries that link with app are built. In this case, using ``PUBLIC``
would cause libraries that link with ``app`` to also include the
source file :file:`src/main.c`, behavior that we surely do not want. The
``PUBLIC`` keyword could however be useful when modifying the include
paths of a target library.
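As an illustration, suppose a hypothetical library target ``mylib`` keeps its
headers under :file:`include/`. Declaring that path ``PUBLIC`` makes it
visible both to ``mylib`` itself and to every library that links with it:

.. code-block:: cmake

   # PRIVATE would restrict the include path to mylib's own sources;
   # PUBLIC also propagates it to targets that link against mylib.
   target_include_directories(mylib PUBLIC include)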
Build and Configuration Phases
==============================
The Zephyr build process can be divided into two main phases: a configuration
phase (driven by CMake) and a build phase (driven by Make or Ninja).
.. _build_configuration_phase:
Configuration Phase
-------------------
The configuration phase begins when the user invokes *CMake* to generate a
build system, specifying a source application directory and a board target.
.. figure:: build-config-phase.svg
:align: center
:alt: Zephyr's build configuration phase
:figclass: align-center
:width: 80%
CMake begins by processing the :file:`CMakeLists.txt` file in the application
directory, which refers to the :file:`CMakeLists.txt` file in the Zephyr
top-level directory, which in turn refers to :file:`CMakeLists.txt` files
throughout the build tree (directly and indirectly). Its primary output is a
set of Makefiles or Ninja files to drive the build process, but the CMake
scripts also do some processing of their own, which is explained here.
Note that paths beginning with :file:`build/` below refer to the build
directory you create when running CMake.
Devicetree
:file:`*.dts` (*devicetree source*) and :file:`*.dtsi` (*devicetree source
include*) files are collected from the target's architecture, SoC, board,
and application directories.
:file:`*.dtsi` files are included by :file:`*.dts` files via the C
preprocessor (often abbreviated *cpp*, which should not be confused with
C++). The C preprocessor is also used to merge in any devicetree
:file:`*.overlay` files, and to expand macros in :file:`*.dts`,
:file:`*.dtsi`, and :file:`*.overlay` files. The preprocessor output is
placed in :file:`build/zephyr/zephyr.dts.pre`.
The preprocessed devicetree sources are parsed by
:zephyr_file:`gen_defines.py <scripts/dts/gen_defines.py>` to generate a
:file:`build/zephyr/include/generated/zephyr/devicetree_generated.h` header with
preprocessor macros.
Source code should access preprocessor macros generated from devicetree by
including the :zephyr_file:`devicetree.h <include/zephyr/devicetree.h>` header,
which includes :file:`devicetree_generated.h`.
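For example, application code could read a node's register address through
the generated macros (the node label ``my_dev`` below is hypothetical):

.. code-block:: c

   #include <zephyr/devicetree.h>

   /* Resolve a node by its devicetree label, then read its base address */
   #define MY_DEV_NODE DT_NODELABEL(my_dev)
   #define MY_DEV_ADDR DT_REG_ADDR(MY_DEV_NODE)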
:file:`gen_defines.py` also writes the final devicetree to
:file:`build/zephyr/zephyr.dts` in the build directory. This file's contents
may be useful for debugging.
If the devicetree compiler ``dtc`` is installed, it is run on
:file:`build/zephyr/zephyr.dts` to catch any extra warnings and errors
generated by this tool. The output from ``dtc`` is unused otherwise, and
this step is skipped if ``dtc`` is not installed.
The above is just a brief overview. For more information on devicetree, see
:ref:`dt-guide`.
Kconfig
:file:`Kconfig` files define available configuration options for the
target architecture, SoC, board, and application, as well as dependencies
between options.
Kconfig configurations are stored in *configuration files*. The initial
configuration is generated by merging configuration fragments from the board
and application (e.g. :file:`prj.conf`).
The output from Kconfig is an :file:`autoconf.h` header with preprocessor
assignments, and a :file:`.config` file that acts both as a saved
configuration and as configuration output (used by CMake). The definitions in
:file:`autoconf.h` are automatically exposed at compile time, so there is no
need to include this header.
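For example, source code can test an option directly, since every
``CONFIG_*`` macro from :file:`autoconf.h` is already visible at compile time
(the option and function below are hypothetical):

.. code-block:: c

   #ifdef CONFIG_MY_FEATURE
   /* Compiled only when CONFIG_MY_FEATURE=y in the merged configuration */
   my_feature_init();
   #endif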
Information from devicetree is available to Kconfig, through the functions
defined in :zephyr_file:`kconfigfunctions.py
<scripts/kconfig/kconfigfunctions.py>`.
See :ref:`the Kconfig section of the manual <kconfig>` for more information.
Build Phase
-----------
The build phase begins when the user invokes ``make`` or ``ninja``. Its
ultimate output is a complete Zephyr application in a format suitable for
loading/flashing on the desired target board (:file:`zephyr.elf`,
:file:`zephyr.hex`, etc.) The build phase can be broken down, conceptually,
into four stages: the pre-build, first-pass binary, final binary, and
post-processing.
Pre-build
+++++++++
Pre-build occurs before any source files are compiled, because during
this phase header files used by the source files are generated.
Offset generation
  Access to high-level data structures and members is sometimes
  required when the definitions of those structures are not
  immediately accessible (e.g., from assembly language). The generation
  of *offsets.h* (by *gen_offset_header.py*) facilitates this.
System call boilerplate
The *gen_syscall.py* and *parse_syscalls.py* scripts work
together to bind potential system call functions with their
implementations.
.. figure:: build-build-phase-1.svg
:align: center
:alt: Zephyr's build stage I
:figclass: align-center
:width: 80%
Intermediate binaries
+++++++++++++++++++++
Compilation proper begins with the first intermediate binary. Source files (C
and assembly) are collected from various subsystems (which ones is
decided during the configuration phase), and compiled into archives
(with reference to header files in the tree, as well as those
generated during the configuration phase and the pre-build stage(s)).
.. figure:: build-build-phase-2.svg
:align: center
:alt: Zephyr's build stage II
:figclass: align-center
:width: 80%
The exact number of intermediate binaries is decided during the configuration
phase.
If memory protection is enabled, then:
Partition grouping
  The *gen_app_partitions.py* script scans all the
  generated archives and outputs linker scripts to ensure that
  application partitions are properly grouped and aligned for the
  target's memory protection hardware.
Then *cpp* is used to combine linker script fragments from the target's
architecture/SoC, the kernel tree, optionally the partition output if
memory protection is enabled, and any other fragments selected during
the configuration process, into a *linker.cmd* file. The compiled
archives are then linked with *ld* as specified in the
*linker.cmd*.
Unfixed size binary
The unfixed size intermediate binary is produced when :ref:`usermode_api`
is enabled or :ref:`devicetree` is in use.
It produces a binary where sizes are not fixed and thus it may be used
by post-process steps that will impact the size of the final binary.
.. figure:: build-build-phase-3.svg
:align: center
:alt: Zephyr's build stage III
:figclass: align-center
:width: 80%
Fixed size binary
  The fixed size intermediate binary is produced when :ref:`usermode_api`
  is enabled or when generated IRQ tables
  (:kconfig:option:`CONFIG_GEN_ISR_TABLES`) are used.
  It produces a binary where sizes are fixed and thus the size must not
  change between the intermediate binary and the final binary.
.. figure:: build-build-phase-4.svg
:align: center
:alt: Zephyr's build stage IV
:figclass: align-center
:width: 80%
Intermediate binaries post-processing
+++++++++++++++++++++++++++++++++++++
The binaries from the previous stage are incomplete, with empty and/or
placeholder sections that must be filled in by, essentially, reflection.
To complete the build procedure the following scripts are executed on the
intermediate binaries to produce the missing pieces needed for the final
binary.
When :ref:`usermode_api` is enabled:
Partition alignment
The *gen_app_partitions.py* script scans the unfixed size binary and
generates an app shared memory aligned linker script snippet where the
partitions are sorted in descending order.
.. figure:: build-postprocess-1.svg
:align: center
:alt: Zephyr's intermediate binary post-process I
:figclass: align-center
:width: 80%
When :ref:`devicetree` is used:
Device dependencies
The *gen_device_deps.py* script scans the unfixed size binary to determine
relationships between devices that were recorded from devicetree data,
and replaces the encoded relationships with values that are optimized to
locate the devices actually present in the application.
.. figure:: build-postprocess-2.svg
:align: center
:alt: Zephyr's intermediate binary post-process II
:figclass: align-center
:width: 80%
When :kconfig:option:`CONFIG_GEN_ISR_TABLES` is enabled:
The *gen_isr_tables.py* script scans the fixed size binary and creates
an isr_tables.c source file with a hardware vector table and/or software
IRQ table.
.. figure:: build-postprocess-3.svg
:align: center
:alt: Zephyr's intermediate binary post-process III
:figclass: align-center
:width: 80%
When :ref:`usermode_api` is enabled:
Kernel object hashing
  The *gen_kobject_list.py* script scans the *ELF DWARF*
  debug data to find the addresses of all kernel objects. This
  list is passed to *gperf*, which generates a perfect hash function and
  table of those addresses, then that output is optimized by
  *process_gperf.py*, using known properties of our special case.
.. figure:: build-postprocess-4.svg
:align: center
:alt: Zephyr's intermediate binary post-process IV
:figclass: align-center
:width: 80%
When no intermediate binary post-processing is required, the first
intermediate binary is used directly as the final binary.
Final binary
++++++++++++
The link from the previous stage is repeated, this time with the empty and
placeholder sections populated by the pieces generated during
post-processing.
.. figure:: build-build-phase-5.svg
:align: center
:alt: Zephyr's build final stage
:figclass: align-center
:width: 80%
Post processing
+++++++++++++++
Finally, if necessary, the completed kernel is converted from *ELF* to
the format expected by the loader and/or flash tool required by the
target. This is accomplished in a straightforward manner with *objcopy*.
.. figure:: build-build-phase-6.svg
:align: center
:alt: Zephyr's build final stage post-process
:figclass: align-center
:width: 80%
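The conversion performed at this stage is conceptually similar to a manual
binutils invocation; an illustrative sketch (exact flags vary by target and
output format):

.. code-block:: console

   objcopy -O ihex zephyr.elf zephyr.hex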
.. _build_system_scripts:
Supporting Scripts and Tools
============================
The following is a detailed description of the scripts used during the build process.
.. _gen_syscalls.py:
:zephyr_file:`scripts/build/gen_syscalls.py`
--------------------------------------------
.. include:: ../../../scripts/build/gen_syscalls.py
:start-after: """
:end-before: """
.. _gen_device_deps.py:
:zephyr_file:`scripts/build/gen_device_deps.py`
-----------------------------------------------
.. include:: ../../../scripts/build/gen_device_deps.py
:start-after: """
:end-before: """
.. _gen_kobject_list.py:
:zephyr_file:`scripts/build/gen_kobject_list.py`
------------------------------------------------
.. include:: ../../../scripts/build/gen_kobject_list.py
:start-after: """
:end-before: """
.. _gen_offset_header.py:
:zephyr_file:`scripts/build/gen_offset_header.py`
-------------------------------------------------
.. include:: ../../../scripts/build/gen_offset_header.py
:start-after: """
:end-before: """
.. _parse_syscalls.py:
:zephyr_file:`scripts/build/parse_syscalls.py`
----------------------------------------------
.. include:: ../../../scripts/build/parse_syscalls.py
:start-after: """
:end-before: """
.. _gen_idt.py:
:zephyr_file:`arch/x86/gen_idt.py`
----------------------------------
.. include:: ../../../arch/x86/gen_idt.py
:start-after: """
:end-before: """
.. _gen_gdt.py:
:zephyr_file:`arch/x86/gen_gdt.py`
----------------------------------
.. include:: ../../../arch/x86/gen_gdt.py
:start-after: """
:end-before: """
.. _gen_relocate_app.py:
:zephyr_file:`scripts/build/gen_relocate_app.py`
------------------------------------------------
.. include:: ../../../scripts/build/gen_relocate_app.py
:start-after: """
:end-before: """
.. _process_gperf.py:
:zephyr_file:`scripts/build/process_gperf.py`
---------------------------------------------
.. include:: ../../../scripts/build/process_gperf.py
:start-after: """
:end-before: """
:zephyr_file:`scripts/build/gen_app_partitions.py`
--------------------------------------------------
.. include:: ../../../scripts/build/gen_app_partitions.py
:start-after: """
:end-before: """
.. _check_init_priorities.py:
:zephyr_file:`scripts/build/check_init_priorities.py`
-----------------------------------------------------
.. include:: ../../../scripts/build/check_init_priorities.py
:start-after: """
:end-before: """
``` | /content/code_sandbox/doc/build/cmake/index.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 3,406 |
```restructuredtext
Writing Snippets
################
.. contents::
:local:
Basics
******
Snippets are defined using YAML files named :file:`snippet.yml`.
A :file:`snippet.yml` file contains the name of the snippet, along with
additional build system settings, like this:
.. code-block:: yaml
name: snippet-name
# ... build system settings go here ...
Build system settings go in other keys in the file as described later on in
this page.
You can combine settings whenever they appear under the same keys. For example,
you can combine a snippet-specific devicetree overlay and a ``.conf`` file like
this:
.. code-block:: yaml
name: foo
append:
EXTRA_DTC_OVERLAY_FILE: foo.overlay
EXTRA_CONF_FILE: foo.conf
Namespacing
***********
When writing devicetree overlays in a snippet, use ``snippet_<name>`` or
``snippet-<name>`` as a namespace prefix when choosing names for node labels,
node names, etc. This avoids namespace conflicts.
For example, if your snippet is named ``foo-bar``, write your devicetree
overlays like this:
.. code-block:: dts
chosen {
zephyr,baz = &snippet_foo_bar_dev;
};
snippet_foo_bar_dev: device@12345678 {
/* ... */
};
Where snippets are located
**************************
The build system looks for snippets in these places:
#. In directories configured by the :makevar:`SNIPPET_ROOT` CMake variable.
This always includes the zephyr repository (so
:zephyr_file:`snippets/` is always a source of snippets) and the
application source directory (so :file:`<app>/snippets` is also).
Additional directories can be added manually at CMake time.
The variable is a whitespace- or semicolon-separated list of directories
which may contain snippet definitions.
For each directory in the list, the build system looks for
:file:`snippet.yml` files underneath a subdirectory named :file:`snippets/`,
if one exists.
For example, if :makevar:`SNIPPET_ROOT` is set to ``/foo;/bar``, the build
system will look for :file:`snippet.yml` files underneath the following
subdirectories:
- :file:`/foo/snippets/`
- :file:`/bar/snippets/`
The :file:`snippet.yml` files can be nested anywhere underneath these
locations.
#. In any :ref:`module <modules>` whose :file:`module.yml` file provides a
``snippet_root`` setting.
For example, in a zephyr module named ``baz``, you can add this to your
:file:`module.yml` file:
.. code-block:: yaml
settings:
snippet_root: .
And then any :file:`snippet.yml` files in ``baz/snippets`` will
automatically be discovered by the build system, just as if
the path to ``baz`` had appeared in :makevar:`SNIPPET_ROOT`.
Processing order
****************
Snippets are processed in the order they are listed in the :makevar:`SNIPPET`
variable, or in the order of the ``-S`` arguments if using west.
To apply ``bar`` after ``foo``:
.. code-block:: console
cmake -Sapp -Bbuild -DSNIPPET="foo;bar" [...]
cmake --build build
The same can be achieved with west as follows:
.. code-block:: console
west build -S foo -S bar [...] app
When multiple snippets set the same configuration, the configuration value set
by the last processed snippet ends up in the final configurations.
For instance, if ``foo`` sets ``CONFIG_FOO=1`` and ``bar`` sets
``CONFIG_FOO=2`` in the above example, the resulting final configuration will
be ``CONFIG_FOO=2`` because ``bar`` is processed after ``foo``.
This principle applies to both Kconfig fragments (``.conf`` files) and
devicetree overlays (``.overlay`` files).
.. _snippets-devicetree-overlays:
Devicetree overlays (``.overlay``)
**********************************
This :file:`snippet.yml` adds :file:`foo.overlay` to the build:
.. code-block:: yaml
name: foo
append:
EXTRA_DTC_OVERLAY_FILE: foo.overlay
The path to :file:`foo.overlay` is relative to the directory containing
:file:`snippet.yml`.
.. _snippets-conf-files:
``.conf`` files
***************
This :file:`snippet.yml` adds :file:`foo.conf` to the build:
.. code-block:: yaml
name: foo
append:
EXTRA_CONF_FILE: foo.conf
The path to :file:`foo.conf` is relative to the directory containing
:file:`snippet.yml`.
``DTS_EXTRA_CPPFLAGS``
**********************
This :file:`snippet.yml` adds ``DTS_EXTRA_CPPFLAGS`` CMake Cache variables
to the build:
.. code-block:: yaml
name: foo
append:
DTS_EXTRA_CPPFLAGS: -DMY_DTS_CONFIGURE
Adding these flags enables control over the content of a devicetree file.
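For example, a devicetree overlay in the snippet could then use that macro to
enable a node conditionally (the node label is hypothetical):

.. code-block:: dts

   #ifdef MY_DTS_CONFIGURE
   &snippet_foo_dev {
           status = "okay";
   };
   #endif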
Board-specific settings
***********************
You can write settings that only apply to some boards.
The settings described here are applied in **addition** to snippet settings
that apply to all boards. (This is similar, for example, to the way that an
application with both :file:`prj.conf` and :file:`boards/foo.conf` files will
use both ``.conf`` files in the build when building for board ``foo``, instead
of just :file:`boards/foo.conf`.)
By name
=======
.. code-block:: yaml
name: ...
boards:
bar: # settings for board "bar" go here
append:
EXTRA_DTC_OVERLAY_FILE: bar.overlay
baz: # settings for board "baz" go here
append:
EXTRA_DTC_OVERLAY_FILE: baz.overlay
The above example uses :file:`bar.overlay` when building for board ``bar``, and
:file:`baz.overlay` when building for ``baz``.
By regular expression
=====================
You can enclose the board name in slashes (``/``) to match the name against a
regular expression in the `CMake syntax`_. The regular expression must match
the entire board name.
.. _CMake syntax:
path_to_url#regex-specification
For example:
.. code-block:: yaml
name: foo
boards:
/my_vendor_.*/:
append:
EXTRA_DTC_OVERLAY_FILE: my_vendor.overlay
The above example uses devicetree overlay :file:`my_vendor.overlay` when
building for either board ``my_vendor_board1`` or ``my_vendor_board2``. It
would not use the overlay when building for either ``another_vendor_board`` or
``x_my_vendor_board``.
``` | /content/code_sandbox/doc/build/snippets/writing.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 1,501 |
```restructuredtext
.. _app-version-details:
Application version management
******************************
Zephyr provides a version management system for applications, built around
the scheme that Zephyr uses for its own version management. An application
can define a version file, and application (or module) code can include the
auto-generated header to access the version information, just as it can with
the kernel version. This version information is available from multiple
scopes, including:
* Code (C/C++)
* Kconfig
* CMake
which makes it a versatile system for lifecycle management of applications.
In addition, it can be used when building applications that target supported
bootloaders (e.g. MCUboot), allowing images to be signed with the correct
application version automatically; no manual signing steps are required.
VERSION file
============
Application version information is set on a per-application basis in a file named :file:`VERSION`,
which must be placed at the base directory of the application, where the CMakeLists.txt file is
located. This is a simple text file which contains the various version
information fields, one per line. The basic ``VERSION`` file has the
following structure:
.. code-block:: cfg
VERSION_MAJOR =
VERSION_MINOR =
PATCHLEVEL =
VERSION_TWEAK =
EXTRAVERSION =
Each field and the values it supports is described below. Zephyr limits the value of each numeric
field to a single byte (note that there may be further restrictions depending upon what the version
is used for, e.g. bootloaders might only support some of these fields or might place limits on the
maximum values of fields):
+---------------+----------------------------------------+
| Field | Data type |
+---------------+----------------------------------------+
| VERSION_MAJOR | Numerical (0-255) |
+---------------+----------------------------------------+
| VERSION_MINOR | Numerical (0-255) |
+---------------+----------------------------------------+
| PATCHLEVEL | Numerical (0-255) |
+---------------+----------------------------------------+
| VERSION_TWEAK | Numerical (0-255) |
+---------------+----------------------------------------+
| EXTRAVERSION | Alphanumerical (Lowercase a-z and 0-9) |
+---------------+----------------------------------------+
When an application is configured using CMake, the version file is processed
automatically, and it is re-checked each time the version changes, so CMake
does not need to be manually re-run after changes to this file.
For the sections below, examples are provided for the following :file:`VERSION` file:
.. code-block:: cfg
VERSION_MAJOR = 1
VERSION_MINOR = 2
PATCHLEVEL = 3
VERSION_TWEAK = 4
EXTRAVERSION = unstable
Use in code
===========
To use the version information in application code, include
:file:`app_version.h` (no path is needed); the fields can then be used
freely. The following defines are available:
+-----------------------------+-------------------+------------------------------------------------------+-------------------------+
| Define | Type | Field(s) | Example |
+-----------------------------+-------------------+------------------------------------------------------+-------------------------+
| APPVERSION | Numerical | ``VERSION_MAJOR`` (left shifted by 24 bits), |br| | 0x1020304 |
| | | ``VERSION_MINOR`` (left shifted by 16 bits), |br| | |
| | | ``PATCHLEVEL`` (left shifted by 8 bits), |br| | |
| | | ``VERSION_TWEAK`` | |
+-----------------------------+-------------------+------------------------------------------------------+-------------------------+
| APP_VERSION_NUMBER | Numerical | ``VERSION_MAJOR`` (left shifted by 16 bits), |br| | 0x10203 |
| | | ``VERSION_MINOR`` (left shifted by 8 bits), |br| | |
| | | ``PATCHLEVEL`` | |
+-----------------------------+-------------------+------------------------------------------------------+-------------------------+
| APP_VERSION_MAJOR | Numerical | ``VERSION_MAJOR`` | 1 |
+-----------------------------+-------------------+------------------------------------------------------+-------------------------+
| APP_VERSION_MINOR | Numerical | ``VERSION_MINOR`` | 2 |
+-----------------------------+-------------------+------------------------------------------------------+-------------------------+
| APP_PATCHLEVEL | Numerical | ``PATCHLEVEL`` | 3 |
+-----------------------------+-------------------+------------------------------------------------------+-------------------------+
| APP_TWEAK | Numerical | ``VERSION_TWEAK`` | 4 |
+-----------------------------+-------------------+------------------------------------------------------+-------------------------+
| APP_VERSION_STRING | String (quoted) | ``VERSION_MAJOR``, |br| | "1.2.3-unstable" |
| | | ``VERSION_MINOR``, |br| | |
| | | ``PATCHLEVEL``, |br| | |
| | | ``EXTRAVERSION`` |br| | |
+-----------------------------+-------------------+------------------------------------------------------+-------------------------+
| APP_VERSION_EXTENDED_STRING | String (quoted) | ``VERSION_MAJOR``, |br| | "1.2.3-unstable+4" |
| | | ``VERSION_MINOR``, |br| | |
| | | ``PATCHLEVEL``, |br| | |
| | | ``EXTRAVERSION`` |br| | |
| | | ``VERSION_TWEAK`` |br| | |
+-----------------------------+-------------------+------------------------------------------------------+-------------------------+
| APP_VERSION_TWEAK_STRING | String (quoted) | ``VERSION_MAJOR``, |br| | "1.2.3+4" |
| | | ``VERSION_MINOR``, |br| | |
| | | ``PATCHLEVEL``, |br| | |
| | | ``VERSION_TWEAK`` |br| | |
+-----------------------------+-------------------+------------------------------------------------------+-------------------------+
| APP_BUILD_VERSION | String (unquoted) | None (value of ``git describe --abbrev=12 --always`` | v3.3.0-18-g2c85d9224fca |
| | | from application repository) | |
+-----------------------------+-------------------+------------------------------------------------------+-------------------------+
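For example, assuming the :file:`VERSION` file shown earlier, an application
can report its own version at startup:

.. code-block:: c

   #include <zephyr/sys/printk.h>
   #include <app_version.h>

   int main(void)
   {
           printk("Running application version %s\n", APP_VERSION_STRING);
           return 0;
   }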
Use in Kconfig
==============
The following variables are available for use in Kconfig files:
+--------------------------------+-----------+--------------------------+------------------+
| Variable | Type | Field(s) | Example |
+--------------------------------+-----------+--------------------------+------------------+
| $(VERSION_MAJOR) | Numerical | ``VERSION_MAJOR`` | 1 |
+--------------------------------+-----------+--------------------------+------------------+
| $(VERSION_MINOR) | Numerical | ``VERSION_MINOR`` | 2 |
+--------------------------------+-----------+--------------------------+------------------+
| $(PATCHLEVEL) | Numerical | ``PATCHLEVEL`` | 3 |
+--------------------------------+-----------+--------------------------+------------------+
| $(VERSION_TWEAK) | Numerical | ``VERSION_TWEAK`` | 4 |
+--------------------------------+-----------+--------------------------+------------------+
| $(APPVERSION) | String | ``VERSION_MAJOR``, |br| | 1.2.3-unstable |
| | | ``VERSION_MINOR``, |br| | |
| | | ``PATCHLEVEL``, |br| | |
| | | ``EXTRAVERSION`` | |
+--------------------------------+-----------+--------------------------+------------------+
| $(APP_VERSION_EXTENDED_STRING) | String | ``VERSION_MAJOR``, |br| | 1.2.3-unstable+4 |
| | | ``VERSION_MINOR``, |br| | |
| | | ``PATCHLEVEL``, |br| | |
| | | ``EXTRAVERSION``, |br| | |
| | | ``VERSION_TWEAK`` | |
+--------------------------------+-----------+--------------------------+------------------+
| $(APP_VERSION_TWEAK_STRING) | String | ``VERSION_MAJOR``, |br| | 1.2.3+4 |
| | | ``VERSION_MINOR``, |br| | |
| | | ``PATCHLEVEL``, |br| | |
| | | ``VERSION_TWEAK`` | |
+--------------------------------+-----------+--------------------------+------------------+
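For example, a hypothetical Kconfig option could embed the application
version in its default value:

.. code-block:: kconfig

   config MY_APP_BANNER
           string "Application banner"
           default "my-app $(APPVERSION)"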
Use in CMake
============
The following variables are available for use in CMake files:
+-----------------------------+-----------------+---------------------------------------------------+------------------+
| Variable | Type | Field(s) | Example |
+-----------------------------+-----------------+---------------------------------------------------+------------------+
| APPVERSION | Numerical (hex) | ``VERSION_MAJOR`` (left shifted by 24 bits), |br| | 0x1020304 |
| | | ``VERSION_MINOR`` (left shifted by 16 bits), |br| | |
| | | ``PATCHLEVEL`` (left shifted by 8 bits), |br| | |
| | | ``VERSION_TWEAK`` | |
+-----------------------------+-----------------+---------------------------------------------------+------------------+
| APP_VERSION_NUMBER | Numerical (hex) | ``VERSION_MAJOR`` (left shifted by 16 bits), |br| | 0x10203 |
| | | ``VERSION_MINOR`` (left shifted by 8 bits), |br| | |
| | | ``PATCHLEVEL`` | |
+-----------------------------+-----------------+---------------------------------------------------+------------------+
| APP_VERSION_MAJOR | Numerical | ``VERSION_MAJOR`` | 1 |
+-----------------------------+-----------------+---------------------------------------------------+------------------+
| APP_VERSION_MINOR | Numerical | ``VERSION_MINOR`` | 2 |
+-----------------------------+-----------------+---------------------------------------------------+------------------+
| APP_PATCHLEVEL | Numerical | ``PATCHLEVEL`` | 3 |
+-----------------------------+-----------------+---------------------------------------------------+------------------+
| APP_VERSION_TWEAK | Numerical | ``VERSION_TWEAK`` | 4 |
+-----------------------------+-----------------+---------------------------------------------------+------------------+
| APP_VERSION_STRING | String | ``VERSION_MAJOR``, |br| | 1.2.3-unstable |
| | | ``VERSION_MINOR``, |br| | |
| | | ``PATCHLEVEL``, |br| | |
| | | ``EXTRAVERSION`` | |
+-----------------------------+-----------------+---------------------------------------------------+------------------+
| APP_VERSION_EXTENDED_STRING | String | ``VERSION_MAJOR``, |br| | 1.2.3-unstable+4 |
| | | ``VERSION_MINOR``, |br| | |
| | | ``PATCHLEVEL``, |br| | |
| | | ``EXTRAVERSION``, |br| | |
| | | ``VERSION_TWEAK`` | |
+-----------------------------+-----------------+---------------------------------------------------+------------------+
| APP_VERSION_TWEAK_STRING | String | ``VERSION_MAJOR``, |br| | 1.2.3+4 |
| | | ``VERSION_MINOR``, |br| | |
| | | ``PATCHLEVEL``, |br| | |
| | | ``VERSION_TWEAK`` | |
+-----------------------------+-----------------+---------------------------------------------------+------------------+
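For example, the application's :file:`CMakeLists.txt` can read these
variables once ``find_package(Zephyr)`` has processed the :file:`VERSION`
file (the status message is purely illustrative):

.. code-block:: cmake

   message(STATUS "Configuring application version ${APP_VERSION_STRING}")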
Use in MCUboot-supported applications
=====================================
No additional configuration needs to be done to the target application: as
long as it is configured to support MCUboot and a signed image is generated,
the version information will be automatically included in the image data.
``` | /content/code_sandbox/doc/build/version/index.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 2,562 |
```restructuredtext
.. _kconfig_extensions:
Kconfig extensions
##################
Zephyr uses the `Kconfiglib <path_to_url`__
implementation of `Kconfig
<path_to_url`__,
which includes some Kconfig extensions:
- Default values can be applied to existing symbols without
  :ref:`weakening <multiple_symbol_definitions>` the symbol's dependencies
through the use of ``configdefault``.
.. code-block:: none
config FOO
bool "FOO"
depends on BAR
configdefault FOO
default y if FIZZ
The statement above is equivalent to:
  .. code-block:: none

     config FOO
         bool "FOO"
         default y if FIZZ
         depends on BAR
``configdefault`` symbols cannot contain any fields other than ``default``,
however they can be wrapped in ``if`` statements. The two statements below
are equivalent:
.. code-block:: none
configdefault FOO
default y if BAR
if BAR
configdefault FOO
default y
endif # BAR
- Environment variables in ``source`` statements are expanded directly, meaning
no "bounce" symbols with ``option env="ENV_VAR"`` need to be defined.
.. note::
``option env`` has been removed from the C tools as of Linux 4.18 as well.
The recommended syntax for referencing environment variables is ``$(FOO)``
rather than ``$FOO``. This uses the new `Kconfig preprocessor
<path_to_url`__.
The ``$FOO`` syntax for expanding environment variables is only supported for
backwards compatibility.
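  For example, if a hypothetical environment variable ``MY_MODULE_DIR`` is
  set, it can be expanded directly in a ``source`` statement:

  .. code-block:: kconfig

     source "$(MY_MODULE_DIR)/Kconfig"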
- The ``source`` statement supports glob patterns and includes each matching
file. A pattern is required to match at least one file.
Consider the following example:
.. code-block:: kconfig
source "foo/bar/*/Kconfig"
If the pattern ``foo/bar/*/Kconfig`` matches the files
:file:`foo/bar/baz/Kconfig` and :file:`foo/bar/qaz/Kconfig`, the statement
above is equivalent to the following two ``source`` statements:
.. code-block:: kconfig
source "foo/bar/baz/Kconfig"
source "foo/bar/qaz/Kconfig"
If no files match the pattern, an error is generated.
The wildcard patterns accepted are the same as for the Python `glob
  <path_to_url>`__ module.
For cases where it's okay for a pattern to match no files (or for a plain
filename to not exist), a separate ``osource`` (*optional source*) statement
is available. ``osource`` is a no-op if no file matches.
.. note::
``source`` and ``osource`` are analogous to ``include`` and
``-include`` in Make.
- An ``rsource`` statement is available for including files specified with a
relative path. The path is relative to the directory of the :file:`Kconfig`
file that contains the ``rsource`` statement.
As an example, assume that :file:`foo/Kconfig` is the top-level
:file:`Kconfig` file, and that :file:`foo/bar/Kconfig` has the following
statements:
.. code-block:: kconfig
source "qaz/Kconfig1"
rsource "qaz/Kconfig2"
This will include the two files :file:`foo/qaz/Kconfig1` and
:file:`foo/bar/qaz/Kconfig2`.
``rsource`` can be used to create :file:`Kconfig` "subtrees" that can be
moved around freely.
``rsource`` also supports glob patterns.
A drawback of ``rsource`` is that it can make it harder to figure out where a
file gets included, so only use it if you need it.
- An ``orsource`` statement is available that combines ``osource`` and
``rsource``.
For example, the following statement will include :file:`Kconfig1` and
:file:`Kconfig2` from the current directory (if they exist):
.. code-block:: kconfig
orsource "Kconfig[12]"
- ``def_int``, ``def_hex``, and ``def_string`` keywords are available,
analogous to ``def_bool``. These set the type and add a ``default`` at the
same time.
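  As an illustration (the symbol name is hypothetical), the definition

  .. code-block:: kconfig

     config BUF_COUNT
         def_int 4

  is equivalent to:

  .. code-block:: kconfig

     config BUF_COUNT
         int
         default 4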
``` | /content/code_sandbox/doc/build/kconfig/extensions.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 950 |
```restructuredtext
.. _sysbuild:
Sysbuild (System build)
#######################
Sysbuild is a higher-level build system that can be used to combine multiple
other build systems together. It combines one or more Zephyr build systems and
optional additional build systems into a hierarchical build system.
For example, you can use sysbuild to build a Zephyr application together
with the MCUboot bootloader, flash them both onto your device, and
debug the results.
Sysbuild works by configuring and building at least a Zephyr application and, optionally, as many
additional projects as you want. The additional projects can be either Zephyr applications
or other types of builds you want to run.
Like Zephyr's :ref:`build system <build_overview>`, sysbuild is written in
CMake and uses :ref:`Kconfig <kconfig>`.
Definitions
***********
The following are some key concepts used in this document:
Single-image build
When sysbuild is used to create and manage just one Zephyr application's
build system.
Multi-image build
When sysbuild is used to manage multiple build systems.
The word "image" is used because your main goal is usually to generate the binaries of the firmware
application images from each build system.
Domain
Every Zephyr CMake build system managed by sysbuild.
Multi-domain
When more than one Zephyr CMake build system (domain) is managed by sysbuild.
Architectural Overview
**********************
This figure is an overview of sysbuild's inputs, outputs, and user interfaces:
.. figure:: sysbuild.svg
:align: center
:alt: Sysbuild architectural overview
:figclass: align-center
:width: 80%
The following are some key sysbuild features indicated in this figure:
- You can run sysbuild either with :ref:`west build
<west-building>` or directly via ``cmake``.
- You can use sysbuild to generate application images from each build system,
shown above as ELF, BIN, and HEX files.
- You can configure sysbuild or any of the build systems it manages using
various configuration variables. These variables are namespaced so that
sysbuild can direct them to the right build system. In some cases, such as
the ``BOARD`` variable, these are shared among multiple build systems.
- Sysbuild itself is also configured using Kconfig. For example, you can
instruct sysbuild to build the MCUboot bootloader, as well as to build and
link your main Zephyr application as an MCUboot-bootable image, using sysbuild's
Kconfig files.
- Sysbuild integrates with west's :ref:`west-build-flash-debug` commands. It
does this by managing the :ref:`west-runner`, and specifically the
:file:`runners.yaml` files that each Zephyr build system will contain. These
are packaged into a global view of how to flash and debug each build system
in a :file:`domains.yaml` file generated and managed by sysbuild.
- Build names are prefixed with the target name and an underscore, for example
the sysbuild target is prefixed with ``sysbuild_`` and if MCUboot is enabled
as part of sysbuild, it will be prefixed with ``mcuboot_``. This also allows
for running things like menuconfig with the prefix, for example (if using
ninja) ``ninja sysbuild_menuconfig`` to configure sysbuild or (if using make)
``make mcuboot_menuconfig``.
Building with sysbuild
**********************
As mentioned above, you can run sysbuild via ``west build`` or ``cmake``.
.. tabs::
.. group-tab:: ``west build``
      Here is an example. For details, see :ref:`west-multi-domain-builds` in
      the ``west build`` documentation.
.. zephyr-app-commands::
:tool: west
:app: samples/hello_world
:board: reel_board
:goals: build
:west-args: --sysbuild
:compact:
.. tip::
To configure ``west build`` to use ``--sysbuild`` by default from now on,
run:
.. code-block:: shell
west config build.sysbuild True
Since sysbuild supports both single- and multi-image builds, this lets you
use sysbuild all the time, without worrying about what type of build you are
running.
To turn this off, run this before generating your build system:
.. code-block:: shell
west config build.sysbuild False
To turn this off for just one ``west build`` command, run:
.. code-block:: shell
west build --no-sysbuild ...
.. group-tab:: ``cmake``
Here is an example using CMake and Ninja.
.. zephyr-app-commands::
:tool: cmake
:app: share/sysbuild
:board: reel_board
:goals: build
:gen-args: -DAPP_DIR=samples/hello_world
:compact:
To use sysbuild directly with CMake, you must specify the sysbuild
project as the source folder, and give ``-DAPP_DIR=<path-to-sample>`` as
an extra CMake argument. ``APP_DIR`` is the path to the main Zephyr
application managed by sysbuild.
Configuration namespacing
*************************
When building a single Zephyr application without sysbuild, all CMake cache
settings and Kconfig build options given on the command line as
``-D<var>=<value>`` or ``-DCONFIG_<var>=<value>`` are handled by the Zephyr
build system.
However, when sysbuild combines multiple Zephyr build systems, there could be
Kconfig settings exclusive to sysbuild (and not used by any of the applications).
To handle this, sysbuild has namespaces for configuration variables. You can use these
namespaces to direct settings either to sysbuild itself or to a specific Zephyr
application managed by sysbuild using the information in these sections.
The following example shows how to build :ref:`hello_world` with MCUboot enabled,
applying to both images debug optimizations:
.. tabs::
.. group-tab:: ``west build``
.. zephyr-app-commands::
:tool: west
:app: samples/hello_world
:board: reel_board
:goals: build
:west-args: --sysbuild
:gen-args: -DSB_CONFIG_BOOTLOADER_MCUBOOT=y -DCONFIG_DEBUG_OPTIMIZATIONS=y -Dmcuboot_CONFIG_DEBUG_OPTIMIZATIONS=y
:compact:
.. group-tab:: ``cmake``
.. zephyr-app-commands::
:tool: cmake
:app: share/sysbuild
:board: reel_board
:goals: build
:gen-args: -DAPP_DIR=samples/hello_world -DSB_CONFIG_BOOTLOADER_MCUBOOT=y -DCONFIG_DEBUG_OPTIMIZATIONS=y -Dmcuboot_CONFIG_DEBUG_OPTIMIZATIONS=y
:compact:
See the following subsections for more information.
.. _sysbuild_cmake_namespace:
CMake variable namespacing
==========================
CMake variable settings can be passed to CMake using ``-D<var>=<value>`` on the
command line. You can also set Kconfig options via CMake as
``-DCONFIG_<var>=<value>`` or ``-D<namespace>_CONFIG_<var>=<value>``.
Since sysbuild is the entry point for the build system, and sysbuild is written
in CMake, all CMake variables are first processed by sysbuild.
Sysbuild creates a namespace for each domain. The namespace prefix is the
domain's application name. See :ref:`sysbuild_zephyr_application` for more
information.
To set the variable ``<var>`` in the namespace ``<namespace>``, use this syntax::
-D<namespace>_<var>=<value>
For example, to set the CMake variable ``FOO`` in the ``my_sample`` application
build system to the value ``BAR``, run the following commands:
.. tabs::
.. group-tab:: ``west build``
.. code-block:: shell
west build --sysbuild ... -- -Dmy_sample_FOO=BAR
.. group-tab:: ``cmake``
.. code-block:: shell
cmake -Dmy_sample_FOO=BAR ...
.. _sysbuild_kconfig_namespacing:
Kconfig namespacing
===================
To set the sysbuild Kconfig option ``<var>`` to the value ``<value>``, use this syntax::
-DSB_CONFIG_<var>=<value>
In the previous example, ``SB_CONFIG`` is the namespace prefix for sysbuild's Kconfig
options.
To set a Zephyr application's Kconfig option instead, use this syntax::
-D<namespace>_CONFIG_<var>=<value>
In the previous example, ``<namespace>`` is the application name discussed above in
:ref:`sysbuild_cmake_namespace`.
For example, to set the Kconfig option ``FOO`` in the ``my_sample`` application
build system to the value ``BAR``, run the following commands:
.. tabs::
.. group-tab:: ``west build``
.. code-block:: shell
west build --sysbuild ... -- -Dmy_sample_CONFIG_FOO=BAR
.. group-tab:: ``cmake``
.. code-block:: shell
cmake -Dmy_sample_CONFIG_FOO=BAR ...
.. tip::
When no ``<namespace>`` is used, the Kconfig setting is passed to the main
Zephyr application ``my_sample``.
This means that passing ``-DCONFIG_<var>=<value>`` and
``-Dmy_sample_CONFIG_<var>=<value>`` are equivalent.
This allows you to build the same application with or without sysbuild using
the same syntax for setting Kconfig values at CMake time.
For example, the following commands will work in the same way:
.. code-block:: shell
west build -b <board> my_sample -- -DCONFIG_FOO=BAR
.. code-block:: shell
west build -b <board> --sysbuild my_sample -- -DCONFIG_FOO=BAR
Sysbuild flashing using ``west flash``
**************************************
You can use :ref:`west flash <west-flashing>` to flash applications with
sysbuild.
When invoking ``west flash`` on a build consisting of multiple images, each
image is flashed in sequence. Extra arguments such as ``--runner jlink`` are
passed to each invocation.
For more details, see :ref:`west-multi-domain-flashing`.
Sysbuild debugging using ``west debug``
***************************************
You can use ``west debug`` to debug the main application, whether you are using sysbuild or not.
Just follow the existing :ref:`west debug <west-debugging>` guide to debug the main sample.
To debug a different domain (Zephyr application), such as ``mcuboot``, use
the ``--domain`` argument, as follows::
west debug --domain mcuboot
For more details, see :ref:`west-multi-domain-debugging`.
Building a sample with MCUboot
******************************
Sysbuild supports MCUboot natively.
To build a sample like ``hello_world`` with MCUboot,
enable MCUboot and build and flash the sample as follows:
.. tabs::
.. group-tab:: ``west build``
.. zephyr-app-commands::
:tool: west
:app: samples/hello_world
:board: reel_board
:goals: build
:west-args: --sysbuild
:gen-args: -DSB_CONFIG_BOOTLOADER_MCUBOOT=y
:compact:
.. group-tab:: ``cmake``
.. zephyr-app-commands::
:tool: cmake
:app: share/sysbuild
:board: reel_board
:goals: build
:gen-args: -DAPP_DIR=samples/hello_world -DSB_CONFIG_BOOTLOADER_MCUBOOT=y
:compact:
This builds ``hello_world`` and ``mcuboot`` for the ``reel_board``, and then
flashes both the ``mcuboot`` and ``hello_world`` application images to the
board.
More detailed information regarding the use of MCUboot with Zephyr can be found
in the `MCUboot with Zephyr`_ documentation page on the MCUboot website.
.. note::
   The deprecated MCUboot Kconfig option ``CONFIG_ZEPHYR_TRY_MASS_ERASE`` will
   perform a full chip erase when flashed. If this option is enabled, then
   flashing only MCUboot, for example using ``west flash --domain mcuboot``, may
   erase the entire flash, including the main application image.
Sysbuild Kconfig file
*********************
You can set sysbuild's Kconfig options for a single application using
configuration files. By default, sysbuild looks for a configuration file named
``sysbuild.conf`` in the application top-level directory.
In the following example, there is a :file:`sysbuild.conf` file that enables building and flashing with
MCUboot whenever sysbuild is used:
.. code-block:: none
<home>/application
CMakeLists.txt
prj.conf
sysbuild.conf
.. code-block:: cfg
SB_CONFIG_BOOTLOADER_MCUBOOT=y
You can set a configuration file to use with the
``-DSB_CONF_FILE=<sysbuild-conf-file>`` CMake build setting.
For example, you can create ``sysbuild-mcuboot.conf`` and then
specify this file when building with sysbuild, as follows:
.. tabs::
.. group-tab:: ``west build``
.. zephyr-app-commands::
:tool: west
:app: samples/hello_world
:board: reel_board
:goals: build
:west-args: --sysbuild
:gen-args: -DSB_CONF_FILE=sysbuild-mcuboot.conf
:compact:
.. group-tab:: ``cmake``
.. zephyr-app-commands::
:tool: cmake
:app: share/sysbuild
:board: reel_board
:goals: build
:gen-args: -DAPP_DIR=samples/hello_world -DSB_CONF_FILE=sysbuild-mcuboot.conf
:compact:
Sysbuild targets
****************
Sysbuild creates build targets for each image (including sysbuild itself) for
the following modes:
* menuconfig
* hardenconfig
* guiconfig
For the main application, these targets can be run normally without any prefix
(as is the case when sysbuild is not used). For other images (including
sysbuild itself), they are run with a prefix of the image name and an
underscore, e.g. ``sysbuild_`` or ``mcuboot_``, using ninja or make. For
details on how to run image build targets that do not have mapped build
targets in sysbuild, see the
:ref:`sysbuild_dedicated_image_build_targets` section.
.. _sysbuild_dedicated_image_build_targets:
Dedicated image build targets
*****************************
Not all image build targets are given equivalent prefixed build targets when
sysbuild is used; for example, build targets like ``ram_report``,
``rom_report``, ``footprint``, ``puncover`` and ``pahole`` are not exposed.
When using :ref:`Trusted Firmware <tfm_build_system>`, this includes build
targets prefixed with ``tfm_`` and ``bl2_``, for example ``tfm_rom_report``
and ``bl2_ram_report``. To run these build targets, provide the build
directory of the image to west/ninja/make along with the name of the build
target to execute.
.. tabs::
.. group-tab:: ``west``
      Assuming that a project has been configured and built using ``west`` with
      sysbuild and MCUboot enabled, in the default ``build`` folder location,
      the ``rom_report`` build target for ``mcuboot`` can be run with:
.. code-block:: shell
west build -d build/mcuboot -t rom_report
.. group-tab:: ``ninja``
      Assuming that a project has been configured using ``cmake`` and built
      using ``ninja`` with sysbuild and MCUboot enabled, the ``rom_report``
      build target for ``mcuboot`` can be run with:
.. code-block:: shell
ninja -C mcuboot rom_report
.. group-tab:: ``make``
      Assuming that a project has been configured using ``cmake`` and built
      using ``make`` with sysbuild and MCUboot enabled, the ``rom_report``
      build target for ``mcuboot`` can be run with:
.. code-block:: shell
make -C mcuboot rom_report
.. _sysbuild_zephyr_application:
Adding Zephyr applications to sysbuild
**************************************
You can use the ``ExternalZephyrProject_Add()`` function to add Zephyr
applications as sysbuild domains. Call this CMake function from your
application's :file:`sysbuild.cmake` file, or any other CMake file you know
will run as part of the sysbuild CMake invocation.
Targeting the same board
========================
To include ``my_sample`` as another sysbuild domain, targeting the same board
as the main image, use this example:
.. code-block:: cmake
ExternalZephyrProject_Add(
APPLICATION my_sample
SOURCE_DIR <path-to>/my_sample
)
This could be useful, for example, if your board requires you to build and flash an
SoC-specific bootloader along with your main application.
Targeting a different board
===========================
In sysbuild and the Zephyr CMake build system, a board may refer to:
* A physical board with a single core SoC.
* A specific core on a physical board with a multi-core SoC, such as
:ref:`nrf5340dk_nrf5340`.
* A specific SoC on a physical board with multiple SoCs, such as
:ref:`nrf9160dk_nrf9160` and :ref:`nrf9160dk_nrf52840`.
If your main application, for example, is built for ``mps2_an521``, and your
helper application must target the ``mps2_an521_remote`` board (cpu1), add
a CMake function call that is structured as follows:
.. code-block:: cmake
ExternalZephyrProject_Add(
APPLICATION my_sample
SOURCE_DIR <path-to>/my_sample
BOARD mps2_an521_remote
)
This could be useful, for example, if your main application requires another
helper Zephyr application to be built and flashed alongside it, but the helper
runs on another core in your SoC.
Targeting conditionally using Kconfig
=====================================
You can control whether extra applications are included as sysbuild domains
using Kconfig.
If the extra application image is specific to the board or an application,
you can create two additional files: :file:`sysbuild.cmake` and :file:`Kconfig.sysbuild`.
For an application, this would look like this:
.. code-block:: none
<home>/application
CMakeLists.txt
prj.conf
Kconfig.sysbuild
sysbuild.cmake
In the previous example, :file:`sysbuild.cmake` would be structured as follows:
.. code-block:: cmake
if(SB_CONFIG_SECOND_SAMPLE)
ExternalZephyrProject_Add(
APPLICATION second_sample
SOURCE_DIR <path-to>/second_sample
)
endif()
:file:`Kconfig.sysbuild` would be structured as follows:
.. code-block:: kconfig
source "sysbuild/Kconfig"
config SECOND_SAMPLE
bool "Second sample"
default y
This will include ``second_sample`` by default, while still allowing you to
disable it using the Kconfig option ``SECOND_SAMPLE``.
For more information on setting sysbuild Kconfig options,
see :ref:`sysbuild_kconfig_namespacing`.
Building without flashing
=========================
You can mark ``my_sample`` as a build-only application in this manner:
.. code-block:: cmake
ExternalZephyrProject_Add(
APPLICATION my_sample
SOURCE_DIR <path-to>/my_sample
BUILD_ONLY TRUE
)
As a result, ``my_sample`` will be built as part of the sysbuild build invocation,
but it will be excluded from the default image sequence used by ``west flash``.
Instead, you may use the outputs of this domain for other purposes - for example,
to produce a secondary image for DFU, or to merge multiple images together.
You can also replace ``TRUE`` with another boolean constant in CMake, such as
a Kconfig option, which would make ``my_sample`` conditionally build-only.
.. note::
Applications marked as build-only can still be flashed manually, using
``west flash --domain my_sample``. As such, the ``BUILD_ONLY`` option only
controls the default behavior of ``west flash``.
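As a sketch of the conditional form mentioned above (the
``SB_CONFIG_MY_SAMPLE_BUILD_ONLY`` Kconfig option is hypothetical):

.. code-block:: cmake

   # my_sample becomes build-only whenever the sysbuild Kconfig option
   # (defined elsewhere) is enabled; otherwise it is flashed as usual.
   ExternalZephyrProject_Add(
     APPLICATION my_sample
     SOURCE_DIR <path-to>/my_sample
     BUILD_ONLY ${SB_CONFIG_MY_SAMPLE_BUILD_ONLY}
   )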
.. _sysbuild_application_configuration:
Zephyr application configuration
================================
When adding a Zephyr application such as MCUboot to sysbuild, the
configuration files from the application (MCUboot) itself will be used.
When integrating multiple applications with each other, it is often necessary
to make adjustments to the configuration of extra images.
Sysbuild gives users the ability to create Kconfig fragments or devicetree
overlays that will be used together with the application's default configuration.
Sysbuild also allows users to change :ref:`application-configuration-directory`
in order to give users full control of an image's configuration.
Zephyr application Kconfig fragment and devicetree overlay
----------------------------------------------------------
In the folder of the main application, create a Kconfig fragment or a devicetree
overlay under a sysbuild folder, where the name of the file is
:file:`<image>.conf` or :file:`<image>.overlay`, for example if your main
application includes ``my_sample`` then create a :file:`sysbuild/my_sample.conf`
file or a devicetree overlay :file:`sysbuild/my_sample.overlay`.
A Kconfig fragment could look like this:
.. code-block:: cfg
# sysbuild/my_sample.conf
CONFIG_FOO=n
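A devicetree overlay could look like this (``uart1`` is a hypothetical node
label used only for illustration):

.. code-block:: devicetree

   /* sysbuild/my_sample.overlay */
   &uart1 {
           status = "disabled";
   };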
Zephyr application configuration directory
------------------------------------------
In the folder of the main application, create a new folder under
:file:`sysbuild/<image>/`.
This folder will then be used as ``APPLICATION_CONFIG_DIR`` when building
``<image>``.
As an example, if your main application includes ``my_sample`` then create a
:file:`sysbuild/my_sample/` folder and place any configuration files in
there as you would normally do:
.. code-block:: none
<home>/application
CMakeLists.txt
prj.conf
sysbuild
my_sample
prj.conf
app.overlay
boards
<board_A>.conf
<board_A>.overlay
<board_B>.conf
<board_B>.overlay
All configuration files under the :file:`sysbuild/my_sample/` folder will now
be used when ``my_sample`` is included in the build, and the default
configuration files for ``my_sample`` will be ignored.
This gives you full control over how images are configured when integrating
them with ``application``.
.. _sysbuild_file_suffixes:
Sysbuild file suffix support
----------------------------
File suffixes are supported in sysbuild through :makevar:`FILE_SUFFIX`
(see :ref:`application-file-suffixes` for details on this feature in
applications). For sysbuild, a globally provided option will be passed down to
all images. In addition, the image configuration file will have this value
applied and used (instead of the build type) if the file exists.
Given the example project:
.. code-block:: none
<home>/application
CMakeLists.txt
prj.conf
sysbuild.conf
sysbuild_test_key.conf
sysbuild
mcuboot.conf
mcuboot_max_log.conf
my_sample.conf
* If ``FILE_SUFFIX`` is not defined and both ``mcuboot`` and ``my_sample`` images are included,
``mcuboot`` will use the ``mcuboot.conf`` Kconfig fragment file and ``my_sample`` will use the
``my_sample.conf`` Kconfig fragment file. Sysbuild itself will use the ``sysbuild.conf``
Kconfig fragment file.
* If ``FILE_SUFFIX`` is set to ``max_log`` and both ``mcuboot`` and ``my_sample`` images are
included, ``mcuboot`` will use the ``mcuboot_max_log.conf`` Kconfig fragment file and
``my_sample`` will use the ``my_sample.conf`` Kconfig fragment file (as it will fallback to the
file without the suffix). Sysbuild itself will use the ``sysbuild.conf`` Kconfig fragment file
(as it will fallback to the file without the suffix).
* If ``FILE_SUFFIX`` is set to ``test_key`` and both ``mcuboot`` and ``my_sample`` images are
included, ``mcuboot`` will use the ``mcuboot.conf`` Kconfig fragment file and
``my_sample`` will use the ``my_sample.conf`` Kconfig fragment file (as it will fallback to the
files without the suffix). Sysbuild itself will use the ``sysbuild_test_key.conf`` Kconfig
fragment file. This can be used to apply a different sysbuild configuration, for example to use
a different signing key in MCUboot and when signing the main application.
``FILE_SUFFIX`` can also be applied to individual images only, by prefixing
the variable with the image name:
.. tabs::
.. group-tab:: ``west build``
.. zephyr-app-commands::
:tool: west
:app: file_suffix_example
:board: reel_board
:goals: build
:west-args: --sysbuild
:gen-args: -DSB_CONFIG_BOOTLOADER_MCUBOOT=y -Dmcuboot_FILE_SUFFIX="max_log"
:compact:
.. group-tab:: ``cmake``
.. zephyr-app-commands::
:tool: cmake
:app: share/sysbuild
:board: reel_board
:goals: build
:gen-args: -DAPP_DIR=<app_dir> -DSB_CONFIG_BOOTLOADER_MCUBOOT=y -Dmcuboot_FILE_SUFFIX="max_log"
:compact:
.. _sysbuild_zephyr_application_dependencies:
Adding dependencies among Zephyr applications
=============================================
Sometimes, in a multi-image build, you may want certain Zephyr applications to
be configured or flashed in a specific order. For example, if you need some
information from one application's build system to be available to another's,
then the first thing to do is to add a configuration dependency between them.
Separately, you can also add flashing dependencies to control the sequence of
images used by ``west flash``; this could be used if a specific flashing order
is required by an SoC, a *runner*, or something else.
By default, sysbuild will configure and flash applications in the order that
they are added, as ``ExternalZephyrProject_Add()`` calls are processed by CMake.
You can use the ``sysbuild_add_dependencies()`` function to make adjustments to
this order, according to your needs. Its usage is similar to the standard
``add_dependencies()`` function in CMake.
Here is an example of adding configuration dependencies for ``my_sample``:
.. code-block:: cmake
sysbuild_add_dependencies(IMAGE CONFIGURE my_sample sample_a sample_b)
This will ensure that sysbuild will run CMake for ``sample_a`` and ``sample_b``
(in some order) before doing the same for ``my_sample``, when building these
domains in a single invocation.
If you want to add flashing dependencies instead, then do it like this:
.. code-block:: cmake
sysbuild_add_dependencies(IMAGE FLASH my_sample sample_a sample_b)
As a result, ``my_sample`` will be flashed after ``sample_a`` and ``sample_b``
(in some order), when flashing these domains in a single invocation.
.. note::
Adding flashing dependencies is not allowed for build-only applications.
If ``my_sample`` had been created with ``BUILD_ONLY TRUE``, then the above
call to ``sysbuild_add_dependencies()`` would have produced an error.
Adding non-Zephyr applications to sysbuild
******************************************
You can include non-Zephyr applications in a multi-image build using the
standard CMake module `ExternalProject`_. Please refer to the CMake
documentation for usage details.
When using ``ExternalProject``, the non-Zephyr application will be built as
part of the sysbuild build invocation, but ``west flash`` or ``west debug``
will not be aware of the application. Instead, you must manually flash and
debug the application.
.. _MCUboot with Zephyr: path_to_url
.. _ExternalProject: path_to_url
Extending sysbuild
******************
Sysbuild can be extended by other modules to give it additional functionality
or to include other configuration or images; an example could be adding
support for another bootloader or an external signing method.

Modules can be extended by adding custom CMake or Kconfig files as normal
:ref:`modules <module-yml>` do; this will cause the files to be included in
each image that is part of a project. Alternatively, there are
:ref:`sysbuild-specific module extension <sysbuild_module_integration>` files
which can be used to include CMake and Kconfig files for the overall sysbuild
image itself; this is where e.g. a custom image for a particular board or SoC
can be added.
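For illustration only, a sysbuild-level module CMake file could conditionally
add an extra image; the Kconfig symbol, image name, and path variable below
are hypothetical:

.. code-block:: cmake

   # Hypothetical sysbuild module extension: this runs once for the overall
   # sysbuild invocation, not once per image.
   if(SB_CONFIG_MY_CUSTOM_BOOTLOADER)
     ExternalZephyrProject_Add(
       APPLICATION my_bootloader
       SOURCE_DIR ${ZEPHYR_MY_MODULE_MODULE_DIR}/bootloader
     )
   endif()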
``` | /content/code_sandbox/doc/build/sysbuild/index.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 6,469 |
```restructuredtext
.. _kconfig-functions:
Custom Kconfig Preprocessor Functions
#####################################
Kconfiglib supports custom Kconfig preprocessor functions written in Python.
These functions are defined in
:zephyr_file:`scripts/kconfig/kconfigfunctions.py`.
.. note::
The official Kconfig preprocessor documentation can be found `here
   <path_to_url>`__.
See the Python docstrings in :zephyr_file:`scripts/kconfig/kconfigfunctions.py`
for detailed documentation.
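For a flavor of how these are written, here is a minimal, hedged sketch of a
custom preprocessor function in the style Kconfiglib expects (the ``double``
function and its registration are illustrative, not part of Zephyr):

.. code-block:: python

   # Sketch of a kconfigfunctions.py-style helper. Kconfiglib calls the
   # function with the Kconfig instance and the function name first; user
   # arguments follow as strings, and a string must be returned.
   def double(kconf, name, value):
       """$(double,<value>) -> decimal string holding 2 * <value>."""
       return str(2 * int(value, 0))  # base 0 accepts "21" and "0x15" alike

   # Kconfiglib discovers custom functions via a module-level dict:
   # name -> (callable, min_args, max_args)
   functions = {
       "double": (double, 1, 1),
   }

With this in place, ``$(double,21)`` would expand to ``42`` during Kconfig
preprocessing.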
Most of the custom preprocessor functions are used to get devicetree
information into Kconfig. For example, the default value of a Kconfig symbol
can be fetched from a devicetree ``reg`` property.
Devicetree-related Functions
****************************
The functions listed below are used to get devicetree information into Kconfig.
The ``*_int`` version of each function returns the value as a decimal integer,
while the ``*_hex`` version returns a hexadecimal value starting with ``0x``.
.. code-block:: none
$(dt_alias_enabled,<node alias>)
   $(dt_chosen_bool_prop,<property in /chosen>,<prop>)
$(dt_chosen_enabled,<property in /chosen>)
$(dt_chosen_has_compat,<property in /chosen>)
$(dt_chosen_label,<property in /chosen>)
$(dt_chosen_partition,addr_hex,<chosen>[,<index>,<unit>])
$(dt_chosen_partition,addr_int,<chosen>[,<index>,<unit>])
$(dt_chosen_path,<property in /chosen>)
$(dt_chosen_reg_addr_hex,<property in /chosen>[,<index>,<unit>])
$(dt_chosen_reg_addr_int,<property in /chosen>[,<index>,<unit>])
$(dt_chosen_reg_size_hex,<property in /chosen>[,<index>,<unit>])
$(dt_chosen_reg_size_int,<property in /chosen>[,<index>,<unit>])
$(dt_compat_any_has_prop,<compatible string>,<prop>)
$(dt_compat_any_on_bus,<compatible string>,<prop>)
$(dt_compat_enabled,<compatible string>)
$(dt_compat_on_bus,<compatible string>,<bus>)
$(dt_gpio_hogs_enabled)
$(dt_has_compat,<compatible string>)
$(dt_has_compat_enabled,<compatible string>)
$(dt_node_array_prop_hex,<node path>,<prop>,<index>[,<unit>])
$(dt_node_array_prop_int,<node path>,<prop>,<index>[,<unit>])
$(dt_node_bool_prop,<node path>,<prop>)
$(dt_node_has_compat,<node path>,<compatible string>)
$(dt_node_has_prop,<node path>,<prop>)
$(dt_node_int_prop_hex,<node path>,<prop>[,<unit>])
$(dt_node_int_prop_int,<node path>,<prop>[,<unit>])
$(dt_node_parent,<node path>)
$(dt_node_ph_array_prop_hex,<node path>,<prop>,<index>,<cell>[,<unit>])
$(dt_node_ph_array_prop_int,<node path>,<prop>,<index>,<cell>[,<unit>])
$(dt_node_ph_prop_path,<node path>,<prop>)
$(dt_node_reg_addr_hex,<node path>[,<index>,<unit>])
$(dt_node_reg_addr_int,<node path>[,<index>,<unit>])
$(dt_node_reg_size_hex,<node path>[,<index>,<unit>])
$(dt_node_reg_size_int,<node path>[,<index>,<unit>])
$(dt_node_str_prop_equals,<node path>,<prop>,<value>)
   $(dt_nodelabel_array_prop_has_val,<node label>,<prop>,<value>)
$(dt_nodelabel_bool_prop,<node label>,<prop>)
$(dt_nodelabel_enabled,<node label>)
$(dt_nodelabel_enabled_with_compat,<node label>,<compatible string>)
$(dt_nodelabel_has_compat,<node label>,<compatible string>)
$(dt_nodelabel_has_prop,<node label>,<prop>)
$(dt_nodelabel_path,<node label>)
$(dt_nodelabel_reg_addr_hex,<node label>[,<index>,<unit>])
$(dt_nodelabel_reg_addr_int,<node label>[,<index>,<unit>])
$(dt_nodelabel_reg_size_hex,<node label>[,<index>,<unit>])
$(dt_nodelabel_reg_size_int,<node label>[,<index>,<unit>])
$(dt_path_enabled,<node path>)
Integer functions
*****************
The functions listed below can be used to do arithmetic operations
on integer variables, such as addition, subtraction and more.
.. code-block:: none
$(add,<value>[,value]...)
$(dec,<value>[,value]...)
$(div,<value>[,value]...)
$(inc,<value>[,value]...)
$(max,<value>[,value]...)
$(min,<value>[,value]...)
$(mod,<value>[,value]...)
$(mul,<value>[,value]...)
$(sub,<value>[,value]...)
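For instance, a symbol default can be computed at preprocessing time; the
symbol below is purely illustrative:

.. code-block:: kconfig

   config HEAP_SIZE
       int
       default $(max,1024,$(mul,256,8))

Here ``$(mul,256,8)`` expands to ``2048``, so the default becomes
``$(max,1024,2048)``, i.e. ``2048``.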
String functions
****************
The functions listed below can be used to modify string variables.
.. code-block:: none
$(normalize_upper,<string>)
$(substring,<string>,<start>[,<stop>])
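As a hedged example (the symbol is hypothetical), a board name can be
normalized into an uppercase identifier:

.. code-block:: kconfig

   config BOARD_UPPER
       string
       default "$(normalize_upper,$(BOARD))"

With ``BOARD=reel_board``, this would yield ``REEL_BOARD``.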
Other functions
***************
Functions to perform specific operations, currently only a check if a shield
name is specified.
.. code-block:: none
$(shields_list_contains,<shield name>)
Example Usage
=============

Assume that the devicetree for some board looks like this:
.. code-block:: devicetree

   {
       soc {
           #address-cells = <1>;
           #size-cells = <1>;

           spi0: spi@10014000 {
               compatible = "sifive,spi0";
               reg = <0x10014000 0x1000 0x20010000 0x3c0900>;
               reg-names = "control", "mem";
               ...
           };
       };
   };
The second entry in ``reg`` in ``spi@10014000`` (``<0x20010000 0x3c0900>``)
corresponds to ``mem``, and has the address ``0x20010000``. This address can be
inserted into Kconfig as follows:
.. code-block:: kconfig

   config FLASH_BASE_ADDRESS
       default $(dt_node_reg_addr_hex,/soc/spi@10014000,1)
After preprocessor expansion, this turns into the definition below:

.. code-block:: kconfig

   config FLASH_BASE_ADDRESS
       default 0x20010000
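The lookup that ``dt_node_reg_addr_hex`` performs in this example can be
sketched by modelling the node's ``reg`` property as a list of
``(address, size)`` pairs (a simplified model, not Zephyr's actual devicetree
API):

.. code-block:: python

   # Simplified model of spi@10014000's reg property: (address, size)
   # pairs in the order they appear in the devicetree source.
   reg = [(0x10014000, 0x1000), (0x20010000, 0x3C0900)]

   def reg_addr_hex(reg_entries, index=0):
       # Mimics $(dt_node_reg_addr_hex,<node path>[,<index>]): select the
       # requested reg entry and format its address as hex.
       return hex(reg_entries[index][0])

Here ``reg_addr_hex(reg, 1)`` evaluates to ``0x20010000``, matching the
expanded default above.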
```
```restructuredtext
.. _kconfig:

Configuration System (Kconfig)
******************************

The Zephyr kernel and subsystems can be configured at build time to adapt them
for specific application and platform needs. Configuration is handled through
Kconfig, which is the same configuration system used by the Linux kernel. The
goal is to support configuration without having to change any source code.

Configuration options (often called *symbols*) are defined in :file:`Kconfig`
files, which also specify dependencies between symbols that determine what
configurations are valid. Symbols can be grouped into menus and sub-menus to
keep the interactive configuration interfaces organized.

The output from Kconfig is a header file :file:`autoconf.h` with macros that
can be tested at build time. Code for unused features can be compiled out to
save space.

The following sections explain how to set Kconfig configuration options, go
into detail on how Kconfig is used within the Zephyr project, and offer some
tips and best practices for writing :file:`Kconfig` files.

.. toctree::
   :maxdepth: 1

   menuconfig.rst
   setting.rst
   tips.rst
   preprocessor-functions.rst
   extensions.rst

Users interested in optimizing their configuration for security should refer
to the Zephyr Security Guide's section on the :ref:`hardening`.
``` | /content/code_sandbox/doc/build/kconfig/index.rst | restructuredtext | 2016-05-26T17:54:19 | 2024-08-16T18:09:06 | zephyr | zephyrproject-rtos/zephyr | 10,307 | 284 |