| { |
| "title": "Toward Attention-based TinyML: A Heterogeneous Accelerated Architecture and Automated Deployment Flow", |
| "abstract": "One of the challenges for Tiny Machine Learning (tinyML) is keeping up with the evolution of Machine Learning models from Convolutional Neural Networks to Transformers.\nWe address this by leveraging a heterogeneous architectural template coupling RISC-V processors with hardwired accelerators supported by an automated deployment flow.\nWe demonstrate Attention-based models in a tinyML power envelope with an octa-core cluster coupled with an accelerator for quantized Attention.\nOur deployment flow enables end-to-end 8-bit Transformer inference, achieving leading-edge energy efficiency and throughput of \\qty[detect-all=true]2960 G / and \\qty[detect-all=true]154 G / (\\qty[detect-all=true]0.65, \\qty[detect-all=true]22 n FD-SOI technology).", |
| "sections": [ |
| { |
| "section_id": "1", |
| "parent_section_id": null, |
| "section_name": "Introduction", |
| "text": "In recent years, Tiny Machine Learning (tinyML) has attracted much attention, bringing compute-intensive Artificial Intelligence (AI) models towards deployment on Microcontroller (MCU) class devices with power envelopes of a few milliwatts.\nEmbedding Deep Neural Networks (DNNs) in small, low-power devices is highly relevant for numerous applications ranging from multi-modal sensing and keyword spotting to anomaly detection and smart wake-up [1 ###reference_b1###].\nCompared with cloud-only inference, tinyML offers lower network utilization, higher privacy, and more predictable latency.\nHowever, extreme-edge devices typically run with a tightly constrained memory budget, without fully-fledged operating systems and advanced hardware features such as Memory-Management Units (MMUs) and fully automated cache hierarchies.\nOne of the key research challenges is whether it is possible to build systems that respect the tight hardware and software cost and power constraints of tinyML systems while supporting the rapid advancement of models.\nA key consideration for addressing this research question is the trade-off between specialization and generality on the computer architecture level.\nAlthough numerous model-specific accelerators have been proposed in recent years [1 ###reference_b1###], designing Systems-on-Chip (SoCs) that can integrate these accelerators while remaining adaptable to evolving AI models remains an open challenge, particularly under tight memory constraints.\nMoreover, automatically and efficiently deploying rapidly evolving DNN models, especially the increasingly popular Attention-based networks, on accelerator-enhanced MCUs remains a significant challenge.\nAdditionally, fast product cycles make it difficult to accommodate the time and cost associated with hand-tuning each model for deployment.\nIn this paper, we address what we believe to be a fundamental question for the future of tinyML:\nHow can we move from classical perceptive AI and Convolutional Neural Network (CNN) models toward leading Attention-based Transformer models?\nUnlike in CNNs, complex dataflow operations like Softmax in Transformers can lead to high latency despite their low arithmetic complexity.\nWhile General Matrix Multiplication (GEMM) accelerators handle most computations in Transformer networks efficiently, the remaining operations can become a bottleneck.\nTo address this challenge, we leverage a flexible MCU-class architectural template for efficiently integrating specialized hardware accelerators with multi-core clusters over a low-latency Tightly-Coupled Data Memory (TCDM) interconnect.\nAt its core, we use a RISC-V (RV32) compute cluster based on the latency-tolerant Snitch core [2 ###reference_b2###].\nTo the best of our knowledge, this is the first heterogeneous Snitch-based cluster integrating Hardware Processing Engines (HWPEs) 111https://hwpe-doc.readthedocs.io/en/latest/index.html ###reference_index.html###, advancing beyond previous configurations which focused on instruction extension units tightly coupled to the pipeline.\nWe introduce an extensible deployment flow based on a bottom-up DNN compiler, Deeploy, that enables fast and automated End-to-End (E2E) deployment.\nUsing this template, we integrate an extended version of the Integer Transformer Accelerator (ITA) [3 ###reference_b3###] and prove our hardware-software co-design flow on Attention-based models.\nAs a concrete use case, we showcase the E2E deployment of MobileBERT [4 ###reference_b4###], DINOv2 [5 
###reference_b5###], and Whisper\u2019s encoder [6 ###reference_b6###] within a power envelope of (GlobalFoundries fully-depleted silicon-on-insulator technology at ).\nThe contributions of this paper are as follows:\nWe propose a novel, flexible hardware-software architecture template designed to meet the dataflow and compute requirements of emerging Attention-based AI workloads. Our hardware architecture allows the co-integration of a multi-core latency-tolerant Snitch compute cluster with complex hardware accelerators over a high-bandwidth, low-latency TCDM interconnect. At the same time, our co-optimized software template facilitates efficient E2E workload mapping.\nWe demonstrate that our hardware-software template enables starvation-free contention for resources in the shared memory with its tunable interconnect bandwidth and the Direct Memory Access (DMA) engine.\nAs a result, we achieve accelerator utilization of up to .\nAs a concrete use-case, we integrate ITA, a Transformer accelerator tuned for the specific dataflow of the Attention calculation, into our hardware-software template and extend Deeploy222https://github.com/pulp-platform/Deeploy ###reference_### with an accelerator model to enable automated mapping, scheduling, tiling, and code generation. We evaluate the performance through post-layout power analysis, achieving a peak performance of and energy efficiency of up to .\nThe integration incurs only a decrease in utilization compared to the standalone accelerator, demonstrating the low overhead of our template.\nWe showcase the capability of our hardware-software template to support a range of Attention-based tinyML models, including MobileBERT, DINOv2, and Whisper\u2019s encoder.\nOur flow unlocks the potential for collaborative execution between the cluster and the hardware accelerator, which optimizes performance and energy efficiency and prevents resource starvation. By enabling this collaborative execution, we significantly enhance E2E inference energy efficiency by 102 compared to inference without the accelerator, achieving an E2E throughput of up to and energy efficiency of ." |
| }, |
| { |
| "section_id": "2", |
| "parent_section_id": null, |
| "section_name": "II The Challenges of TinyML Acceleration", |
| "text": "" |
| }, |
| { |
| "section_id": "2.1", |
| "parent_section_id": "2", |
| "section_name": "II-A HW Integration Challenges for Attention-based Networks", |
| "text": "Over the years, several approaches to integrating hardware accelerators into SoCs were proposed, varying in their degree of coupling to the SoC\u2019s processor.\nA well-developed approach relies on closely coupling the accelerator with the processor cores through instruction-set extensions [7 ###reference_b7###]. While this approach enables ample flexibility in workload mapping, it is inadequate for Attention accelerators that require large bandwidth. In fact, instruction extensions are limited by the core\u2019s load/store interface, the bandwidth and size of the register file, and instruction fetch bandwidth.\nOn the other end of the spectrum is the loosely coupled integration of accelerators with internal private memory [8 ###reference_b8###].\nWhile this approach eliminates memory access contention during inference, it requires a large in-accelerator and fully private integrated memory to store the intermediate tensors generated for Attention. This causes large area requirements, which increase the cost of the accelerator.\nIt also hinders collaboration between different engines, as data must be moved explicitly between memory hierarchy levels with a significant energy overhead.\nAn interesting middle ground between these two extremes is to couple the accelerator and cores through shared memory [8 ###reference_b8###].\nUnlike private memory solutions, this approach facilitates data exchange between the accelerators and cores. This is a key feature for Attention-based networks since it allows cores to perform auxiliary operations easily without memory copy overheads. These operations vary significantly across different model variants, often preventing hardware acceleration.\nIn this work, we propose a novel architectural template, integrating a cluster of RISC-V cores with an accelerator over shared L1 memory.\nWe show that our proposed design enables close interaction between the cluster cores and the accelerator, supporting emerging and evolving variations of non-linearities and normalization layers found in Attention-based models while exploiting the accelerator for supported operators." |
| }, |
| { |
| "section_id": "2.2", |
| "parent_section_id": "2", |
| "section_name": "II-B TinyML Software Deployment Challenges", |
| "text": "Deploying Transformers at the extreme edge on devices with hardware accelerators comes with many difficulties as they require significant software effort to unlock the performance and efficiency of the accelerators. First and foremost, tinyML devices have highly constrained on-chip memory, in the order of , and no operating systems.\nHence, one must tile layers to process tensors from the lowest level of the memory hierarchy.\nMoreover, these systems often feature software-managed scratchpad memory hierarchies. Thus, explicit and uncached DMA transfers are required to transfer tiled tensors. Furthermore, static memory allocation is crucial to guarantee conflict-free memory transfers.\nWhile several code generation tools for CNNs have been demonstrated [1 ###reference_b1###], most do not generalize to Attention-based models. While CNNs use few branches in their dataflow graphs and therefore do not require sophisticated memory allocation strategies, the highly parallel and branching structure of Attention-based networks requires novel lifetime analysis and tiling strategies to effectively tile and schedule their execution." |
| }, |
| { |
| "section_id": "3", |
| "parent_section_id": null, |
| "section_name": "III Architecture Template", |
| "text": "###figure_1### ###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### In this Section, we describe a flexible architecture template, shown in Figure 1 ###reference_###, that combines multiple Digital Signal Processing (DSP) optimized RISC-V cores into a compute cluster and facilitates the integration of newly developed hardware accelerators using the HWPE infrastructure and automated deployment.\nCompared to a single-core system, this enables efficient operation through higher performance and parallelism and enhances adaptability and scalability.\nThe HWPE interface developed for the Parallel Ultra Low Power (PULP) platform facilitates the integration of accelerators with multi-core compute cluster into a shared memory cluster.\nOur template integrates the area-efficient Snitch cores, occupying each [2 ###reference_b2###].\nSnitch is a single-stage, in-order core implementing integer base RV32I, RV32M subset for integer multiply/divide instructions, and standard atomic instruction extension RV32A. Unlike CV32E40P cores used in other PULP-derived clusters333https://docs.openhwgroup.org/projects/cv32e40p-user-manual ###reference_e40p-user-manual###, Snitch cores are significantly smaller (-56%) and provide a decoupled memory interface, allowing latency-tolerant memory access by pipelining multiple loads and stores.\nWe couple the cores and accelerators through the shared interleaved L1 TCDM to facilitate energy-efficient data exchange between the compute elements. This is especially crucial for rapidly evolving Attention-based networks as various auxiliary operations need to be computed on the cluster while the majority of the computation is conducted on the accelerator.\nTo reduce banking conflicts and provide the high bandwidth Attention accelerators need, we use 32 banks with each, resulting in a total capacity of .\nThe multi-banked memory makes it unnecessary to attach additional private memory to the accelerator, as data can be accessed by both the accelerator and the cluster\u2019s cores simultaneously.\nWe use a 64-bit TCDM interconnect, which is implemented as a combinatorial crossbar, resulting in single-cycle latency in the absence of conflicts with bandwidth towards the L1.\nEach core has one master port with decoupled request and response path connected to the TCDM interconnect, and the HWPE subsystem features a parametric number of master ports to allow the integration accelerators.\nThe cluster includes two parametrizable AXI interconnects: a wide crossbar with a bit data width and a narrow crossbar with a bit data width.\nThe wide AXI interconnect is used to load instructions into the shared instruction cache and to transfer data from and to the SoC level memory system in conjunction with the DMA.\nThe narrow AXI interconnect is intended to connect to the SoC interconnect to attach peripherals and communicate with a host system.\nMoreover, one Snitch core is coupled with a DMA to manage data movements within the cluster, facilitating double buffering to maintain high accelerator utilization." |
| }, |
| { |
| "section_id": "3.1", |
| "parent_section_id": "3", |
| "section_name": "III-A HWPE Subsystem", |
| "text": "The HWPE template provides three modules: a controller, one or multiple streamers, and the engine.\nThe controller is the interface between the cores in the cluster and the accelerator.\nIt has a Finite State Machine (FSM) specific to the engine to govern the operation of the accelerator and a memory-mapped register file to keep parameters for the accelerator.\nThe register file can hold a sequence of multiple tasks that can be programmed by any core in the cluster through the controller interface over the narrow AXI interconnect. A task represents a set of configuration values used by the accelerator.\nThe streamers act as a special-purpose low-cost DMA to load and store data from the shared TCDM.\nFinally, the engine contains a hardware accelerator that accepts the streamer\u2019s data and the controller\u2019s configuration.\nHWPE allows connecting accelerators seamlessly to PULP clusters and makes programming straightforward over the peripheral interface accessible via AXI.\nThree steps are necessary to integrate an accelerator into the HWPE subsystem. First, the required number of streamers must be instantiated in accordance with the accelerator\u2019s data ports. Next, the streamers must be connected with the accelerator\u2019s data ports and the TCDM interconnect. Finally, an FSM controlling the accelerator and streamers must be implemented.\nHWPE provides two types of streamers: one for input, source streamers and one for output, sink streamers.\nThe streamers utilize a simple valid-ready handshake protocol on the accelerator side, ensuring compatibility with most accelerators.\nAdditionally, HWPE includes first-in, first-out (FIFO) buffers on both the TCDM and accelerator sides, which can be instantiated and sized according to the specific needs of the accelerator and cluster.\nWe time-multiplex multiple streamers to a multi-port interface with ports and connect to the TCDM interconnect.\nThe final step of integrating an accelerator into the HWPE involves designing an FSM to control both the accelerator and the streamers.\nWe use a controller that supports a programmable multi-context register file, allowing the cores to offload the next task while the accelerator runs, thereby hiding configuration latency.\nThe FSM designed around the control slave is straightforward: it reads the configuration for the accelerator from the register file, transfers it to the engine, and configures the streamers accordingly." |
| }, |
| { |
| "section_id": "3.2", |
| "parent_section_id": "3", |
| "section_name": "III-B Neural Network Deployment Framework", |
| "text": "To execute Transformer models on the proposed architectural template, we integrate our hardware template in the Deeploy compiler [9 ###reference_b9###], which maps neural networks to user-defined, platform-specific C code kernel templates.\nDeeploy is a DNN compiler that offers architecture-agnostic tinyML optimizations like double-buffering, memory-aware operator tiling, DMA-aware code generation, and fully static offline memory layout generation.\nThese features allow us to accommodate the custom tiling required for operators exclusively present in Transformer networks.\nIn this way, Deeploy generates code to offload supported DNN operators onto accelerators while providing highly optimized fallback kernel implementations for unsupported operators on the cluster. This bottom-up approach guarantees that emerging DNNs operators can be mapped to our general-purpose cores while fully leveraging integrated accelerators for their supported operators. This is especially useful when considering the numerous variants of Attention-based models, which contain the same Attention mechanism but have slightly different activation or normalization functions.\nTo integrate a new HWPE accelerator, Deeploy only requires a minimal accelerator model; first, the accelerator model must specify the geometrical tiling constraints for operators it can run. Second, the model must provide minimal arithmetic templates for running each supported operator. All other necessary performance optimizations, including memory-aware operator tiling, static memory layout generation, double-buffering code generation, and DMA-aware memory transfers, are inserted by Deeploy automatically.\nBy integrating a model of the hardware template with Deeploy, we propose a low-overhead, adaptable hardware-software architecture template that minimizes the development effort for both hardware and software integration while meeting the strict requirements of extreme edge Attention-based model deployment." |
| }, |
| { |
| "section_id": "4", |
| "parent_section_id": null, |
| "section_name": "IV Implementation", |
| "text": "###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13### As a concrete implementation of our template, we show a platform that couples a cluster with 8+1 RV32IMA Snitch cores with the Integer Transformer Accelerator (ITA) [3 ###reference_b3###].\nThe ITA accelerator enables the acceleration of 8-bit GEMM and the more complex multi-head Attention (MHA) present in Transformer networks. ITA used in this work is an extended version of the accelerator presented in [3 ###reference_b3###], featuring additional functionality through the inclusion of a partial sum buffer and an activation unit supporting ReLU and GeLU. Furthermore, it is wrapped with HWPE components." |
| }, |
| { |
| "section_id": "4.1", |
| "parent_section_id": "4", |
| "section_name": "IV-A Integer Transformer Accelerator (ITA)", |
| "text": "ITA is an accelerator for encoder-only Transformer models and performs efficient inference in 8-bit arithmetic, using an integer-only Softmax approximation.\nFigure 2 ###reference_### shows the architecture of ITA.\nAt the core of ITA, there are dot product units that compute the dot product between two vectors of length .\nITA integrates a Softmax approximation, referred to as ITAMax, that operates on integer values in a streaming mode.\nThis enables computing Softmax on the fly.\nSoftmax is defined as\nand normalizes the input matrix row-wise, transforming them into probabilities.\nThis is used in Transformers to calculate the Attention with\nThe ITAMax unit has three stages of operation as illustrated in Figure 2 ###reference_###.\nThe first Denominator Accumulation (DA) stage operates on the 8-bit quantized dot product results from the matrix multiplication.\nIt determines the maximum of the partial row results and accumulates the denominator of the Softmax with the current maximum. The current maximum and the accumulated denominator are stored in buffers.\nAt every iteration, if the local row maximum differs from the previous one, the partial sum is renormalized, and the global maximum is updated.\nOnce ITAMax processes the entire row and accumulates the denominator with the global maximum of the row, it inverts the denominator in the Denominator Inversion (DI) stage and stores it internally.\nThe Element Normalization (EN) stage only starts when the post-Softmax activations are required as input to ITA in the next matrix multiplication ().\nThis stage normalizes the values from the calculation on the fly to produce .\nWith this unique dataflow, ITAMax performs Softmax without additional latency and data fetching from the L1 memory with a low area and power overhead.\nSince ITA integrates a datapath for single-head Attention, MHA must be calculated sequentially head-by-head. Therefore, ITA operates on a single head at a time and computes the partial output projection for each head. The partial outputs of each head need to be summed by the external cluster.\nAdditionally, ITA integrates activation units that fully operate in integer arithmetic.\nThe activation unit has three modes of operation: Identity, ReLU, and GeLU, which can be selected for each computation via the configuration interface of HWPE.\nFor the integer approximation of GeLU, we use the i-GeLU [10 ###reference_b10###] performed in -bit and quantized the results to 8-bit.\nThis allows using ITA as a GEMM accelerator with activation functions accelerated in hardware." |
| }, |
| { |
| "section_id": "4.2", |
| "parent_section_id": "4", |
| "section_name": "IV-B Accelerator Integration", |
| "text": "For ITA, we use dot product units with a -bit accumulator to support matrix dimensions up to 512 and a vector length of .\nWe choose this configuration to exploit the memory-side bandwidth the TCDM offers.\nAs ITA features three input ports (input, weight, bias) and one output port, three input streamers and one output streamer are required.\nAs the four streamers are multiplexed in time, ITA requires of maximum bandwidth to fetch two input vectors per cycle; therefore, we use 16 master ports on the TCDM interconnect for the HWPE subsystem.\nTo produce one output tile, ITA takes at least and the DMA needs to fetch at most two 8-bit inputs/weights, 24-bit bias values and write back 8-bit outputs from and to the L2 memory.\nThis results in a worst-case average bandwidth of towards the SoC memory.\nConsequently, we use a 512-bit wide data AXI interconnect to provide enough bandwidth for the instructions cache and ITA.\nMoreover, we use 64-bit for the narrow AXI interconnect to enable the integration of the cluster into a 64-bit host system.\nFinally, in ITA, we use a dual-context register file that can be programmed via the narrow 64-bit AXI interconnect.\nAs the HWPE Controller uses the peripheral interface, we place an adapter between the AXI bus and the module." |
| }, |
| { |
| "section_id": "4.3", |
| "parent_section_id": "4", |
| "section_name": "IV-C Physical Implementation", |
| "text": "To evaluate our architecture in a tinyML-friendly technology node, we implemented the complete Snitch cluster with an extended version of the ITA accelerator in GlobalFoundries\u2019 \u2009FDX fully-depleted silicon-on-insulator (FD-SOI) technology, targeting an operating frequency of under typical conditions (TT, , ), and in the energy-efficient core voltage configuration (TT, , ).\nThe extended design includes a partial sum buffer, an activation unit, and the HWPE components.\nThe complete cluster requires (5 MGE) with the HWPE subsystem occupying of the total area.\nThe longest paths of the design are located between the input to the output of the dot product units in the HWPE, within the DMA, and the instruction cache to the data mover core with gate delays of 12, 11, and 11, respectively." |
| }, |
| { |
| "section_id": "4.4", |
| "parent_section_id": "4", |
| "section_name": "IV-D Neural Network Deployment", |
| "text": "To extend Deeploy with our architecture template, including the cluster and ITA, the mapping process of ITA-compatible operators is implemented in a multi-step approach.\nDeeploy starts by matching an MHA pattern and fuses it to form a monolithic node in the graph.\nThis node is then split along the head dimension to map the MHA operator head-by-head on ITA. Finally, a head accumulation layer is inserted at the end, which runs on the cluster cores.\nAs described in Section III-B ###reference_###, we extend Deeploy with a model for ITA to support HW-specific optimizations.\nTo solve the tiling problem, we specify geometrical tiling constraints to ensure all inputs and outputs have shapes compatible with ITA\u2019s requirements.\nIn the kernel, we preprogram the next tile using the dual-context register file and configure ITA to load the weights for the next step in the current one.\nThis enables us to achieve a fully double-buffered dataflow without starvation.\nTo the best of our knowledge, this is the first deployment flow that supports the E2E acceleration of Attention-based Transformers at the edge." |
| }, |
| { |
| "section_id": "5", |
| "parent_section_id": null, |
| "section_name": "Results", |
| "text": "MobileNetV1(x0.25) with\nMicroNet Medium, MobileNetV2 1.0, Yolo-Fastest v4, Tiny Wav2letter Pruned, https://alifsemi.com/ ###reference_cing-low-power-consumption/###\nTinyissimoYOLO\nper inference with sequence length\nper inference with sequence length\nper inference with sequence length\nTo measure the power consumption and latency of deployed workloads on our design, we perform cycle-accurate post-layout simulation of the entire cluster using Siemens QuestaSim for latency and throughput evaluation at and post-layout gate-level simulations for power measurement under typical conditions (TT, , ). We choose the operating corner to maximize energy efficiency. Our simulation setup accounts for latency and energy costs of memory transfers between the L1 and the system\u2019s background memory via the DMA, programming of the accelerator and cores, and execution of the operators, both on the cluster and ITA.\nIn the following sections, we profile representative microbenchmarks and the execution of three different Transformer networks. Finally, we compare our results with state-of-the-art MCU-class heterogeneous SoCs for tinyML." |
| }, |
| { |
| "section_id": "5.1", |
| "parent_section_id": "5", |
| "section_name": "Microbenchmarking Result", |
| "text": "We analyze the performance and efficiency of GEMM and the more complex Attention operations and compare the multi-core cluster without any accelerator with the ITA integrated cluster.\nOur heterogenous cluster achieves a throughput of and energy efficiency of in GEMM computation, corresponding to and improvement respectively compared to the cluster without ITA with a peak accelerator utilization of .\nRunning single-head Attention operation offers an even higher performance improvement of more than 3 orders of magnitudes and a better energy efficiency resulting in and with accelerator utilization.\nThe standalone accelerator achieves a slightly higher utilization of , with the integration into the template incurring only a small decrease of . This demonstrates that the template has minimal impact on the accelerator utilization.\nThis trend can be attributed to the efficient Softmax implementation in ITA, which does not add latency and thus avoids bottlenecking the overall efficiency." |
| }, |
| { |
| "section_id": "5.2", |
| "parent_section_id": "5", |
| "section_name": "End-To-End Deployment Results", |
| "text": "To benchmark the execution of a complete model, we quantize MobileBERT444, , , , , (Sequence Length, Embedding Size, Projection Dimension, Attention Heads, Layers, Feed-Forward), DINOv2555, , , , , and Whisper\u2019s666, , , , , encoder using the QuantLib777https://github.com/pulp-platform/quantlib ###reference_### library to perform 8-bit full integer inference.\nDue to the extensive simulation time, we measure each layer separately and sum their execution times to extrapolate to the entire network.\nTable I ###reference_### display the E2E results for two scenarios: multi-core cluster without the accelerator and multi-core cluster with the ITA accelerator.\nIn the scenario with a multi-core cluster, using ITA improves throughput up to at higher energy efficiency." |
| }, |
| { |
| "section_id": "5.3", |
| "parent_section_id": "5", |
| "section_name": "Comparison with the State-of-the-art", |
| "text": "To compare our work with the state-of-the-art in tinyML computer architectures, we present the throughput and energy efficiency for various devices\nin Table I ###reference_###.\nDue to the lack of E2E benchmarks for Transformers on similar devices, we compare against CNNs instead.\nIt is important to note that Transformers pose a greater challenge for accelerators due to their complex dataflow and computational demands.\nThe Syntiant NDP120888https://www.syntiant.com/hardware ###reference_www.syntiant.com/hardware### MCU implemented in UMC ULP technology uses the Syntiant Core 2 tensor processor coupled with an Arm Cortex M0 processor and a HiFi-3 DSP.\nIt achieves up to at in MLPerf Tiny Inference on MobileNetV1 [11 ###reference_b11###].\nWe also compare with the Ensemble E3 AI MCU from Alif Semiconductor999https://alifsemi.com/ ###reference_cing-low-power-consumption/### which couples Ethos-U55 Machine Learning (ML) processors with ARM Cortex M55 processors.\nDepending on the network it achieves up to at .\nCompared to both devices, we achieve at least more throughput with a higher energy efficiency.\nA comparison with a very similar architecture is possible against GreenWaves GAP9 MCU containing the NE16 neural engine.\nThe SoC implemented in technology contains a fabric controller and a compute cluster with nine RISC-V cores and shared L1 memory.\nIn the MLPerf Tiny Inference benchmark on MobileNetV1 it achieves at while Moosmann et al. [12 ###reference_b12###] report better numbers with up to at for a different network.\nIn comparison, we achieve more throughput and higher energy efficiency even though we deploy a more complex network." |
| }, |
| { |
| "section_id": "6", |
| "parent_section_id": null, |
| "section_name": "VI Conclusion", |
| "text": "We presented a flexible hardware-software architecture template, enabling collaborative accelerated execution of emerging Attention-based workloads that can be easily extended for the demands of future networks.\nBy integrating our hardware template in Deeploy, we demonstrate a flexible deployment flow capable of efficiently mapping both accelerator-specific and generic DNN operators on our target architecture.\nWe demonstrate the first E2E deployment of multiple Transformer-based encoder models on a parallel heterogeneous accelerator-enhanced MCU.\nOur implementation, which leverages ITA for computing the MHA and Linear layers and the cluster cores for auxiliary operators, achieves state-of-the-art throughput of with an energy efficiency of .\nThis enables inference rates of at for MobileBERT, at for DINOv2-Small, and at for encoder block of Whisper." |
| } |
| ], |
| "appendix": [], |
| "tables": { |
| "1": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>End-To-End Network Performance Metrics and Comparison to DNNs on Commercial tinyML Devices</figcaption><div class=\"ltx_flex_figure\">\n<div class=\"ltx_flex_cell ltx_flex_size_1\">\n<table class=\"ltx_tabular ltx_centering ltx_figure_panel ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.4.5.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S5.T1.4.5.1.1\"></th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S5.T1.4.5.1.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T1.4.5.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.4.5.1.3.1\">Ours</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"3\" id=\"S5.T1.4.5.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.4.5.1.4.1\">Commercial Devices</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row\" id=\"S5.T1.4.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.4.4.5.1\">Metric</span></th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_th_row\" id=\"S5.T1.4.4.6\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T1.4.4.6.1\">\n<span class=\"ltx_p\" id=\"S5.T1.4.4.6.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.4.4.6.1.1.1\">Unit</span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.4.4.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.4.4.7.1\">Multi-Core</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.4.4.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.4.4.8.1\">Multi-Core + ITA</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.1.1.1\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.1.1.1\">Syntiant NDP120<sup class=\"ltx_sup\" id=\"S5.T1.1.1.1.1.1\"><span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.1.1.1.1.1.1\">\u2006</span><span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.1.1.1.1.1.2\">\u2021</span></sup></span>\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.02473v2#bib.bib11\" title=\"\">11</a>]</cite>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.2.2.1\">AlifSemi E3<sup class=\"ltx_sup\" id=\"S5.T1.2.2.2.1.1\"><span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.2.2.2.1.1.1\">\u2006</span><span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.2.2.2.1.1.2\">\u00a7</span></sup></span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.4.4.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.4.4.4.2\">GreenWaves GAP9<sup class=\"ltx_sup\" id=\"S5.T1.4.4.4.2.2\"><span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.4.4.4.2.2.1\">\u2006</span><span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.4.4.4.2.2.2\">*\u2006</span><span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.4.4.4.2.2.3\">\u2021</span></sup></span>\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.02473v2#bib.bib12\" title=\"\">12</a>, <a class=\"ltx_ref\" 
href=\"https://arxiv.org/html/2408.02473v2#bib.bib11\" title=\"\">11</a>]</cite>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.4.6.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T1.4.6.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.4.6.1.1.1\">Throughput</span></th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row ltx_border_t\" id=\"S5.T1.4.6.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T1.4.6.1.2.1\">\n<span class=\"ltx_p\" id=\"S5.T1.4.6.1.2.1.1\">[GOp/s]</span>\n</span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.4.6.1.3\">0.74</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.4.6.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.4.6.1.4.1\">56-154</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.4.6.1.5\">2-7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.4.6.1.6\">2-45</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.4.6.1.7\">10-60</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.4.7.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T1.4.7.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.4.7.2.1.1\">Energy Efficiency</span></th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row\" id=\"S5.T1.4.7.2.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T1.4.7.2.2.1\">\n<span class=\"ltx_p\" id=\"S5.T1.4.7.2.2.1.1\">[GOp/J]</span>\n</span>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.7.2.3\">28.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.7.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.4.7.2.4.1\">1600-2960</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.7.2.5\">280-400</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.7.2.6\">50-560</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.7.2.7\">150-650</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.4.8.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S5.T1.4.8.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.4.8.3.1.1\">Power</span></th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row ltx_border_b\" id=\"S5.T1.4.8.3.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T1.4.8.3.2.1\">\n<span class=\"ltx_p\" id=\"S5.T1.4.8.3.2.1.1\">[mW]</span>\n</span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.4.8.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.4.8.3.3.1\">26.0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.4.8.3.4\">35.2-52.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.4.8.3.5\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.4.8.3.6\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.4.8.3.7\">-</td>\n</tr>\n</tbody>\n</table>\n</div>\n<div class=\"ltx_flex_break\"></div>\n<div class=\"ltx_flex_cell ltx_flex_size_1\">\n<table class=\"ltx_tabular ltx_centering ltx_figure_panel ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.7\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.7.3\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S5.T1.7.3.4\"></th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row\" id=\"S5.T1.7.3.5\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" colspan=\"2\" id=\"S5.T1.5.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.5.1.1.1\">MobileBERT<sup class=\"ltx_sup\" 
id=\"S5.T1.5.1.1.1.1\"><span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.5.1.1.1.1.1\">\u2006</span><span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.5.1.1.1.1.2\">a</span></sup></span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" colspan=\"2\" id=\"S5.T1.6.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.2.2.1\">DINOv2-Small<sup class=\"ltx_sup\" id=\"S5.T1.6.2.2.1.1\"><span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.6.2.2.1.1.1\">\u2006</span><span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.6.2.2.1.1.2\">b</span></sup></span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" colspan=\"2\" id=\"S5.T1.7.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.7.3.3.1\">Whisper-Tiny Encoder<sup class=\"ltx_sup\" id=\"S5.T1.7.3.3.1.1\"><span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.7.3.3.1.1.1\">\u2006</span><span class=\"ltx_text ltx_font_medium\" id=\"S5.T1.7.3.3.1.1.2\">c</span></sup></span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.7.4.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row\" id=\"S5.T1.7.4.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.7.4.1.1.1\">Metric</span></th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_th_row\" id=\"S5.T1.7.4.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T1.7.4.1.2.1\">\n<span class=\"ltx_p\" id=\"S5.T1.7.4.1.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.7.4.1.2.1.1.1\">Unit</span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.7.4.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.7.4.1.3.1\">Multi-Core</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.7.4.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.7.4.1.4.1\">Multi-Core + ITA</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.7.4.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.7.4.1.5.1\">Multi-Core</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.7.4.1.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.7.4.1.6.1\">Multi-Core + ITA</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.7.4.1.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.7.4.1.7.1\">Multi-Core</span></th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.7.4.1.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.7.4.1.8.1\">Multi-Core + ITA</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.7.5.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T1.7.5.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.7.5.1.1.1\">Energy per Inference</span></th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row ltx_border_t\" id=\"S5.T1.7.5.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T1.7.5.1.2.1\">\n<span class=\"ltx_p\" id=\"S5.T1.7.5.1.2.1.1\">[mJ/Inf]</span>\n</span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.7.5.1.3\">164</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.7.5.1.4\">1.60</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.7.5.1.5\">407</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.7.5.1.6\">7.31</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.7.5.1.7\">340</td>\n<td 
class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S5.T1.7.5.1.8\">5.55</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.7.6.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T1.7.6.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.7.6.2.1.1\">Inference per Second</span></th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T1.7.6.2.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S5.T1.7.6.2.2.1\">\n<span class=\"ltx_p\" id=\"S5.T1.7.6.2.2.1.1\">[Inf/s]</span>\n</span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.7.6.2.3\">0.16</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.7.6.2.4\">32.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.7.6.2.5\">0.06</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.7.6.2.6\">4.83</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.7.6.2.7\">0.08</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S5.T1.7.6.2.8\">6.52</td>\n</tr>\n</tbody>\n</table>\n</div>\n<div class=\"ltx_flex_break\"></div>\n<div class=\"ltx_flex_cell ltx_flex_size_1\">\n<ul class=\"ltx_itemize ltx_centering ltx_figure_panel\" id=\"S5.I1\">\n<li class=\"ltx_item\" id=\"S5.I1.ix1\" style=\"list-style-type:none;\">\n<span class=\"ltx_tag ltx_tag_item\">\u2021</span>\n<div class=\"ltx_para\" id=\"S5.I1.ix1.p1\">\n<p class=\"ltx_p\" id=\"S5.I1.ix1.p1.1\">MobileNetV1(x0.25) with </p>\n</div>\n</li>\n<li class=\"ltx_item\" id=\"S5.I1.ix2\" style=\"list-style-type:none;\">\n<span class=\"ltx_tag ltx_tag_item\">\u00a7</span>\n<div class=\"ltx_para\" id=\"S5.I1.ix2.p1\">\n<p class=\"ltx_p\" id=\"S5.I1.ix2.p1.1\">MicroNet Medium, MobileNetV2 1.0, Yolo-Fastest v4, Tiny Wav2letter Pruned, <a class=\"ltx_ref ltx_href\" href=\"https://alifsemi.com/faster-ai-mcu-inferencing-low-power-consumption/\" title=\"\">https://alifsemi.com/ ###reference_cing-low-power-consumption/###</a></p>\n</div>\n</li>\n<li class=\"ltx_item\" id=\"S5.I1.ix3\" style=\"list-style-type:none;\">\n<span class=\"ltx_tag ltx_tag_item\">*</span>\n<div class=\"ltx_para\" id=\"S5.I1.ix3.p1\">\n<p class=\"ltx_p\" id=\"S5.I1.ix3.p1.1\">TinyissimoYOLO</p>\n</div>\n</li>\n<li class=\"ltx_item\" id=\"S5.I1.ix4\" style=\"list-style-type:none;\">\n<span class=\"ltx_tag ltx_tag_item\">a</span>\n<div class=\"ltx_para\" id=\"S5.I1.ix4.p1\">\n<p class=\"ltx_p\" id=\"S5.I1.ix4.p1.2\"> per inference with sequence length </p>\n</div>\n</li>\n<li class=\"ltx_item\" id=\"S5.I1.ix5\" style=\"list-style-type:none;\">\n<span class=\"ltx_tag ltx_tag_item\">b</span>\n<div class=\"ltx_para\" id=\"S5.I1.ix5.p1\">\n<p class=\"ltx_p\" id=\"S5.I1.ix5.p1.2\"> per inference with sequence length </p>\n</div>\n</li>\n<li class=\"ltx_item\" id=\"S5.I1.ix6\" style=\"list-style-type:none;\">\n<span class=\"ltx_tag ltx_tag_item\">c</span>\n<div class=\"ltx_para\" id=\"S5.I1.ix6.p1\">\n<p class=\"ltx_p\" id=\"S5.I1.ix6.p1.2\"> per inference with sequence length </p>\n</div>\n</li>\n</ul>\n</div>\n</div>\n</figure>", |
| "capture": "TABLE I: End-To-End Network Performance Metrics and Comparison to DNNs on Commercial tinyML Devices" |
| } |
| }, |
| "image_paths": { |
| "1(a)": { |
| "figure_path": "2408.02473v2_figure_1(a).png", |
| "caption": "Figure 1: Overview of the Hardware-Software Architecture Template.\nThe flexible template allows modular integration of accelerators into an SoC and deployment of different workloads with Deeploy.\nThe workflow is as follows: Integrate an accelerator as an HWPE engine, a configurable interface designed for efficient integration of memory-coupled accelerators, enabling streamlined data transfer and control between the accelerator and shared memory. Ensure sufficient bandwidth for the accelerator by tuning the wide AXI interconnect, allowing high-bandwidth access to L2 memory via the DMA.\n Configure the operator mapping in Deepooy and provide the workload as an ONNX graph. Define the tiling constraints according to the accelerator buffer and datapath sizes and provide minimal kernel templates to control the accelerator via a register interface. Use Deeploy to perform automated graph optimization and scheduling, to co-optimize operator tiling and static memory allocation, and to generate C code. This code orchestrates memory transfers using the DMA and coordinates execution on the compute cores and the accelerator.", |
| "url": "http://arxiv.org/html/2408.02473v2/x1.png" |
| }, |
| "1(b)": { |
| "figure_path": "2408.02473v2_figure_1(b).png", |
| "caption": "Figure 1: Overview of the Hardware-Software Architecture Template.\nThe flexible template allows modular integration of accelerators into an SoC and deployment of different workloads with Deeploy.\nThe workflow is as follows: Integrate an accelerator as an HWPE engine, a configurable interface designed for efficient integration of memory-coupled accelerators, enabling streamlined data transfer and control between the accelerator and shared memory. Ensure sufficient bandwidth for the accelerator by tuning the wide AXI interconnect, allowing high-bandwidth access to L2 memory via the DMA.\n Configure the operator mapping in Deepooy and provide the workload as an ONNX graph. Define the tiling constraints according to the accelerator buffer and datapath sizes and provide minimal kernel templates to control the accelerator via a register interface. Use Deeploy to perform automated graph optimization and scheduling, to co-optimize operator tiling and static memory allocation, and to generate C code. This code orchestrates memory transfers using the DMA and coordinates execution on the compute cores and the accelerator.", |
| "url": "http://arxiv.org/html/2408.02473v2/x7.png" |
| }, |
| "1(c)": { |
| "figure_path": "2408.02473v2_figure_1(c).png", |
| "caption": "Figure 1: Overview of the Hardware-Software Architecture Template.\nThe flexible template allows modular integration of accelerators into an SoC and deployment of different workloads with Deeploy.\nThe workflow is as follows: Integrate an accelerator as an HWPE engine, a configurable interface designed for efficient integration of memory-coupled accelerators, enabling streamlined data transfer and control between the accelerator and shared memory. Ensure sufficient bandwidth for the accelerator by tuning the wide AXI interconnect, allowing high-bandwidth access to L2 memory via the DMA.\n Configure the operator mapping in Deepooy and provide the workload as an ONNX graph. Define the tiling constraints according to the accelerator buffer and datapath sizes and provide minimal kernel templates to control the accelerator via a register interface. Use Deeploy to perform automated graph optimization and scheduling, to co-optimize operator tiling and static memory allocation, and to generate C code. This code orchestrates memory transfers using the DMA and coordinates execution on the compute cores and the accelerator.", |
| "url": "http://arxiv.org/html/2408.02473v2/x8.png" |
| }, |
| "1(d)": { |
| "figure_path": "2408.02473v2_figure_1(d).png", |
| "caption": "Figure 1: Overview of the Hardware-Software Architecture Template.\nThe flexible template allows modular integration of accelerators into an SoC and deployment of different workloads with Deeploy.\nThe workflow is as follows: Integrate an accelerator as an HWPE engine, a configurable interface designed for efficient integration of memory-coupled accelerators, enabling streamlined data transfer and control between the accelerator and shared memory. Ensure sufficient bandwidth for the accelerator by tuning the wide AXI interconnect, allowing high-bandwidth access to L2 memory via the DMA.\n Configure the operator mapping in Deepooy and provide the workload as an ONNX graph. Define the tiling constraints according to the accelerator buffer and datapath sizes and provide minimal kernel templates to control the accelerator via a register interface. Use Deeploy to perform automated graph optimization and scheduling, to co-optimize operator tiling and static memory allocation, and to generate C code. This code orchestrates memory transfers using the DMA and coordinates execution on the compute cores and the accelerator.", |
| "url": "http://arxiv.org/html/2408.02473v2/x9.png" |
| }, |
| "1(e)": { |
| "figure_path": "2408.02473v2_figure_1(e).png", |
| "caption": "Figure 1: Overview of the Hardware-Software Architecture Template.\nThe flexible template allows modular integration of accelerators into an SoC and deployment of different workloads with Deeploy.\nThe workflow is as follows: Integrate an accelerator as an HWPE engine, a configurable interface designed for efficient integration of memory-coupled accelerators, enabling streamlined data transfer and control between the accelerator and shared memory. Ensure sufficient bandwidth for the accelerator by tuning the wide AXI interconnect, allowing high-bandwidth access to L2 memory via the DMA.\n Configure the operator mapping in Deepooy and provide the workload as an ONNX graph. Define the tiling constraints according to the accelerator buffer and datapath sizes and provide minimal kernel templates to control the accelerator via a register interface. Use Deeploy to perform automated graph optimization and scheduling, to co-optimize operator tiling and static memory allocation, and to generate C code. This code orchestrates memory transfers using the DMA and coordinates execution on the compute cores and the accelerator.", |
| "url": "http://arxiv.org/html/2408.02473v2/x10.png" |
| }, |
| "1(f)": { |
| "figure_path": "2408.02473v2_figure_1(f).png", |
| "caption": "Figure 1: Overview of the Hardware-Software Architecture Template.\nThe flexible template allows modular integration of accelerators into an SoC and deployment of different workloads with Deeploy.\nThe workflow is as follows: Integrate an accelerator as an HWPE engine, a configurable interface designed for efficient integration of memory-coupled accelerators, enabling streamlined data transfer and control between the accelerator and shared memory. Ensure sufficient bandwidth for the accelerator by tuning the wide AXI interconnect, allowing high-bandwidth access to L2 memory via the DMA.\n Configure the operator mapping in Deepooy and provide the workload as an ONNX graph. Define the tiling constraints according to the accelerator buffer and datapath sizes and provide minimal kernel templates to control the accelerator via a register interface. Use Deeploy to perform automated graph optimization and scheduling, to co-optimize operator tiling and static memory allocation, and to generate C code. This code orchestrates memory transfers using the DMA and coordinates execution on the compute cores and the accelerator.", |
| "url": "http://arxiv.org/html/2408.02473v2/x11.png" |
| }, |
| "2(a)": { |
| "figure_path": "2408.02473v2_figure_2(a).png", |
| "caption": "Figure 2: Architecture of the Integer Transformer Accelerator (ITA). ITA combines an output stationary dataflow with a local weight stationary dataflow and streaming Softmax operation to achieve high data reuse and minimal memory interaction. Weights are stored in a double-buffered weight memory to fetch the next set of weights while performing computation with the current set of weights. Inputs are fetched via streamers and passed through the ITAMax module during \ud835\udc00\u00d7\ud835\udc15\ud835\udc00\ud835\udc15\\mathbf{A\\times V}bold_A \u00d7 bold_V step. While \ud835\udc10\u00d7\ud835\udc0aT\ud835\udc10superscript\ud835\udc0aT\\mathbf{Q}\\times\\mathbf{K}^{\\mathrm{T}}bold_Q \u00d7 bold_K start_POSTSUPERSCRIPT roman_T end_POSTSUPERSCRIPT is computed, the ITAMax module operates on the outputs to accumulate the denominator. ITAMax operates in three stages: Find the local maximum and compare it with the previous maximum stored in the buffer, accumulate the denominator of the Softmax using the current maximum and normalize the previous sum if the maximum is changed. After the accumulation, the denominator is inverted and saved to the same buffer. Inputs for \ud835\udc00\u00d7\ud835\udc15\ud835\udc00\ud835\udc15\\mathbf{A\\times V}bold_A \u00d7 bold_V step are normalized using the saved maximum and inverted denominator.", |
| "url": "http://arxiv.org/html/2408.02473v2/x12.png" |
| }, |
| "2(b)": { |
| "figure_path": "2408.02473v2_figure_2(b).png", |
| "caption": "Figure 2: Architecture of the Integer Transformer Accelerator (ITA). ITA combines an output stationary dataflow with a local weight stationary dataflow and streaming Softmax operation to achieve high data reuse and minimal memory interaction. Weights are stored in a double-buffered weight memory to fetch the next set of weights while performing computation with the current set of weights. Inputs are fetched via streamers and passed through the ITAMax module during \ud835\udc00\u00d7\ud835\udc15\ud835\udc00\ud835\udc15\\mathbf{A\\times V}bold_A \u00d7 bold_V step. While \ud835\udc10\u00d7\ud835\udc0aT\ud835\udc10superscript\ud835\udc0aT\\mathbf{Q}\\times\\mathbf{K}^{\\mathrm{T}}bold_Q \u00d7 bold_K start_POSTSUPERSCRIPT roman_T end_POSTSUPERSCRIPT is computed, the ITAMax module operates on the outputs to accumulate the denominator. ITAMax operates in three stages: Find the local maximum and compare it with the previous maximum stored in the buffer, accumulate the denominator of the Softmax using the current maximum and normalize the previous sum if the maximum is changed. After the accumulation, the denominator is inverted and saved to the same buffer. Inputs for \ud835\udc00\u00d7\ud835\udc15\ud835\udc00\ud835\udc15\\mathbf{A\\times V}bold_A \u00d7 bold_V step are normalized using the saved maximum and inverted denominator.", |
| "url": "http://arxiv.org/html/2408.02473v2/x19.png" |
| }, |
| "2(c)": { |
| "figure_path": "2408.02473v2_figure_2(c).png", |
| "caption": "Figure 2: Architecture of the Integer Transformer Accelerator (ITA). ITA combines an output stationary dataflow with a local weight stationary dataflow and streaming Softmax operation to achieve high data reuse and minimal memory interaction. Weights are stored in a double-buffered weight memory to fetch the next set of weights while performing computation with the current set of weights. Inputs are fetched via streamers and passed through the ITAMax module during \ud835\udc00\u00d7\ud835\udc15\ud835\udc00\ud835\udc15\\mathbf{A\\times V}bold_A \u00d7 bold_V step. While \ud835\udc10\u00d7\ud835\udc0aT\ud835\udc10superscript\ud835\udc0aT\\mathbf{Q}\\times\\mathbf{K}^{\\mathrm{T}}bold_Q \u00d7 bold_K start_POSTSUPERSCRIPT roman_T end_POSTSUPERSCRIPT is computed, the ITAMax module operates on the outputs to accumulate the denominator. ITAMax operates in three stages: Find the local maximum and compare it with the previous maximum stored in the buffer, accumulate the denominator of the Softmax using the current maximum and normalize the previous sum if the maximum is changed. After the accumulation, the denominator is inverted and saved to the same buffer. Inputs for \ud835\udc00\u00d7\ud835\udc15\ud835\udc00\ud835\udc15\\mathbf{A\\times V}bold_A \u00d7 bold_V step are normalized using the saved maximum and inverted denominator.", |
| "url": "http://arxiv.org/html/2408.02473v2/x20.png" |
| }, |
| "2(d)": { |
| "figure_path": "2408.02473v2_figure_2(d).png", |
| "caption": "Figure 2: Architecture of the Integer Transformer Accelerator (ITA). ITA combines an output stationary dataflow with a local weight stationary dataflow and streaming Softmax operation to achieve high data reuse and minimal memory interaction. Weights are stored in a double-buffered weight memory to fetch the next set of weights while performing computation with the current set of weights. Inputs are fetched via streamers and passed through the ITAMax module during \ud835\udc00\u00d7\ud835\udc15\ud835\udc00\ud835\udc15\\mathbf{A\\times V}bold_A \u00d7 bold_V step. While \ud835\udc10\u00d7\ud835\udc0aT\ud835\udc10superscript\ud835\udc0aT\\mathbf{Q}\\times\\mathbf{K}^{\\mathrm{T}}bold_Q \u00d7 bold_K start_POSTSUPERSCRIPT roman_T end_POSTSUPERSCRIPT is computed, the ITAMax module operates on the outputs to accumulate the denominator. ITAMax operates in three stages: Find the local maximum and compare it with the previous maximum stored in the buffer, accumulate the denominator of the Softmax using the current maximum and normalize the previous sum if the maximum is changed. After the accumulation, the denominator is inverted and saved to the same buffer. Inputs for \ud835\udc00\u00d7\ud835\udc15\ud835\udc00\ud835\udc15\\mathbf{A\\times V}bold_A \u00d7 bold_V step are normalized using the saved maximum and inverted denominator.", |
| "url": "http://arxiv.org/html/2408.02473v2/x21.png" |
| }, |
| "2(e)": { |
| "figure_path": "2408.02473v2_figure_2(e).png", |
| "caption": "Figure 2: Architecture of the Integer Transformer Accelerator (ITA). ITA combines an output stationary dataflow with a local weight stationary dataflow and streaming Softmax operation to achieve high data reuse and minimal memory interaction. Weights are stored in a double-buffered weight memory to fetch the next set of weights while performing computation with the current set of weights. Inputs are fetched via streamers and passed through the ITAMax module during \ud835\udc00\u00d7\ud835\udc15\ud835\udc00\ud835\udc15\\mathbf{A\\times V}bold_A \u00d7 bold_V step. While \ud835\udc10\u00d7\ud835\udc0aT\ud835\udc10superscript\ud835\udc0aT\\mathbf{Q}\\times\\mathbf{K}^{\\mathrm{T}}bold_Q \u00d7 bold_K start_POSTSUPERSCRIPT roman_T end_POSTSUPERSCRIPT is computed, the ITAMax module operates on the outputs to accumulate the denominator. ITAMax operates in three stages: Find the local maximum and compare it with the previous maximum stored in the buffer, accumulate the denominator of the Softmax using the current maximum and normalize the previous sum if the maximum is changed. After the accumulation, the denominator is inverted and saved to the same buffer. Inputs for \ud835\udc00\u00d7\ud835\udc15\ud835\udc00\ud835\udc15\\mathbf{A\\times V}bold_A \u00d7 bold_V step are normalized using the saved maximum and inverted denominator.", |
| "url": "http://arxiv.org/html/2408.02473v2/x22.png" |
| }, |
| "2(f)": { |
| "figure_path": "2408.02473v2_figure_2(f).png", |
| "caption": "Figure 2: Architecture of the Integer Transformer Accelerator (ITA). ITA combines an output stationary dataflow with a local weight stationary dataflow and streaming Softmax operation to achieve high data reuse and minimal memory interaction. Weights are stored in a double-buffered weight memory to fetch the next set of weights while performing computation with the current set of weights. Inputs are fetched via streamers and passed through the ITAMax module during \ud835\udc00\u00d7\ud835\udc15\ud835\udc00\ud835\udc15\\mathbf{A\\times V}bold_A \u00d7 bold_V step. While \ud835\udc10\u00d7\ud835\udc0aT\ud835\udc10superscript\ud835\udc0aT\\mathbf{Q}\\times\\mathbf{K}^{\\mathrm{T}}bold_Q \u00d7 bold_K start_POSTSUPERSCRIPT roman_T end_POSTSUPERSCRIPT is computed, the ITAMax module operates on the outputs to accumulate the denominator. ITAMax operates in three stages: Find the local maximum and compare it with the previous maximum stored in the buffer, accumulate the denominator of the Softmax using the current maximum and normalize the previous sum if the maximum is changed. After the accumulation, the denominator is inverted and saved to the same buffer. Inputs for \ud835\udc00\u00d7\ud835\udc15\ud835\udc00\ud835\udc15\\mathbf{A\\times V}bold_A \u00d7 bold_V step are normalized using the saved maximum and inverted denominator.", |
| "url": "http://arxiv.org/html/2408.02473v2/x17.png" |
| }, |
| "2(g)": { |
| "figure_path": "2408.02473v2_figure_2(g).png", |
| "caption": "Figure 2: Architecture of the Integer Transformer Accelerator (ITA). ITA combines an output stationary dataflow with a local weight stationary dataflow and streaming Softmax operation to achieve high data reuse and minimal memory interaction. Weights are stored in a double-buffered weight memory to fetch the next set of weights while performing computation with the current set of weights. Inputs are fetched via streamers and passed through the ITAMax module during \ud835\udc00\u00d7\ud835\udc15\ud835\udc00\ud835\udc15\\mathbf{A\\times V}bold_A \u00d7 bold_V step. While \ud835\udc10\u00d7\ud835\udc0aT\ud835\udc10superscript\ud835\udc0aT\\mathbf{Q}\\times\\mathbf{K}^{\\mathrm{T}}bold_Q \u00d7 bold_K start_POSTSUPERSCRIPT roman_T end_POSTSUPERSCRIPT is computed, the ITAMax module operates on the outputs to accumulate the denominator. ITAMax operates in three stages: Find the local maximum and compare it with the previous maximum stored in the buffer, accumulate the denominator of the Softmax using the current maximum and normalize the previous sum if the maximum is changed. After the accumulation, the denominator is inverted and saved to the same buffer. Inputs for \ud835\udc00\u00d7\ud835\udc15\ud835\udc00\ud835\udc15\\mathbf{A\\times V}bold_A \u00d7 bold_V step are normalized using the saved maximum and inverted denominator.", |
| "url": "http://arxiv.org/html/2408.02473v2/x18.png" |
| } |
| }, |
| "validation": true, |
| "references": [], |
| "url": "http://arxiv.org/html/2408.02473v2" |
| } |