---
license: apache-2.0
pipeline_tag: text-generation
tags:
- fp8
- quantized
- llm-compressor
- compressed-tensors
- red hat
base_model:
- ibm-granite/granite-4.0-h-small
---

# granite-4.0-h-small-FP8-block

## Model Overview
- **Model Architecture:** GraniteMoeHybridForCausalLM
  - **Input:** Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** FP8
  - **Activation quantization:** FP8
- **Release Date:**
- **Version:** 1.0
- **Model Developers:** Red Hat

Quantized version of [ibm-granite/granite-4.0-h-small](https://huggingface.co/ibm-granite/granite-4.0-h-small).

### Model Optimizations

This model was obtained by quantizing the weights and activations of [ibm-granite/granite-4.0-h-small](https://huggingface.co/ibm-granite/granite-4.0-h-small) to the FP8 data type.
This optimization reduces the number of bits per parameter from 16 to 8, cutting the disk size and GPU memory requirements by approximately 50%.
Only the weights and activations of the linear operators within the transformer blocks of the language model are quantized.

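As a back-of-the-envelope illustration of the ~50% figure (a sketch, not a measurement; the ~32B parameter count is an approximation for the base model, and real checkpoints deviate slightly because some layers stay in 16 bits):

```python
# Rough weight-size estimate; the parameter count is an approximation.
n_params = 32e9  # granite-4.0-h-small is a ~32B-parameter model

bf16_gb = n_params * 2 / 1e9  # 16-bit weights: 2 bytes per parameter
fp8_gb = n_params * 1 / 1e9   # 8-bit weights: 1 byte per parameter

print(f"BF16: ~{bf16_gb:.0f} GB -> FP8: ~{fp8_gb:.0f} GB "
      f"(~{100 * (1 - fp8_gb / bf16_gb):.0f}% reduction)")
```
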
## Deployment

### Use with vLLM

1. Install the required versions:
```
uv pip install -U git+https://github.com/vllm-project/vllm.git@refs/pull/28398/head \
    --extra-index-url https://wheels.vllm.ai/nightly \
    --no-deps \
    --no-cache

uv pip install compressed-tensors==0.12.3a20251114 --no-cache
uv pip install --upgrade torchvision --break-system-packages --no-cache
uv pip install cloudpickle msgspec zmq blake3 cachetools prometheus_client fastapi openai openai_harmony pybase64 llguidance diskcache xgrammar lm-format-enforcer partial-json-parser cbor2 einops gguf numba
```

2. Initialize the vLLM server:
```
vllm serve RedHatAI/granite-4.0-h-small-FP8-block --tensor_parallel_size 1
```

3. Send requests to the server:

```python
from openai import OpenAI

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://<your-server-host>:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

model = "RedHatAI/granite-4.0-h-small-FP8-block"

messages = [
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]

outputs = client.chat.completions.create(
    model=model,
    messages=messages,
)

generated_text = outputs.choices[0].message.content
print(generated_text)
```

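If a request fails, a quick sanity check (a minimal sketch, assuming the same server host as above and that `requests` is installed) is to list what the server is actually serving via the OpenAI-compatible `/v1/models` endpoint:

```python
import requests

# Query the vLLM server's OpenAI-compatible model registry.
resp = requests.get("http://<your-server-host>:8000/v1/models")
resp.raise_for_status()
print([m["id"] for m in resp.json()["data"]])  # should list the served model ID
```
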
## Creation

This model was quantized using the [llm-compressor](https://github.com/vllm-project/llm-compressor) library, as shown below.

<details>
<summary>Creation details</summary>

Install the required versions:
```
uv pip install git+https://github.com/vllm-project/llm-compressor.git@refs/pull/2001/head --no-cache
uv pip install --upgrade torchvision --break-system-packages --no-cache
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.utils import dispatch_for_generation
from llmcompressor.modeling import replace_modules_for_calibration

MODEL_ID = "ibm-granite/granite-4.0-h-small"

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

model = replace_modules_for_calibration(model)

# Keep the LM head, MoE routers, Mamba input projections, and shared-MLP
# input projections in the original precision.
ignore_layers = [
    "lm_head",
    "re:.*block_sparse_moe.router",
    "re:.*mamba.in_proj",
    "re:.*shared_mlp.input_linear",
]

recipe = QuantizationModifier(
    targets=["Linear"],
    scheme="FP8_BLOCK",
    ignore=ignore_layers,
)

oneshot(model=model, recipe=recipe)

print("========== SAMPLE GENERATION ==============")
dispatch_for_generation(model)
input_ids = tokenizer(
    "Describe Large Language Model", return_tensors="pt"
).input_ids.to(model.device)
output = model.generate(input_ids, max_new_tokens=35)
print(tokenizer.decode(output[0]))
print("==========================================")

SAVE_DIR = MODEL_ID.rstrip("/").split("/")[-1] + "-FP8-block"
print(f"Saving to {SAVE_DIR}")

model.save_pretrained(SAVE_DIR)
tokenizer.save_pretrained(SAVE_DIR)
```
</details>
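
To sanity-check the saved checkpoint, you can inspect the `quantization_config` that llm-compressor writes into the output `config.json` (a minimal sketch, assuming the `SAVE_DIR` produced by the script above; the exact keys can vary across compressed-tensors versions):

```python
import json
from pathlib import Path

# SAVE_DIR from the creation script above.
cfg = json.loads(Path("granite-4.0-h-small-FP8-block/config.json").read_text())

qcfg = cfg.get("quantization_config", {})
print(qcfg.get("format"))      # serialization format tag
print(qcfg.get("ignore", []))  # layers excluded from quantization
```
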
## Evaluation

The model was evaluated on the OpenLLM leaderboard tasks (versions 1 and 2) using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness), and on HumanEval and MBPP using [evalplus](https://github.com/evalplus/evalplus).
[vLLM](https://docs.vllm.ai/en/stable/) was used for all evaluations.

<details>
<summary>Evaluation details</summary>

Install the required versions:
```
uv pip install -U git+https://github.com/vllm-project/vllm.git@refs/pull/28398/head \
    --extra-index-url https://wheels.vllm.ai/nightly \
    --no-deps \
    --no-cache

uv pip install compressed-tensors==0.12.3a20251114 --no-cache
uv pip install --upgrade torchvision --break-system-packages --no-cache
uv pip install cloudpickle msgspec zmq blake3 cachetools prometheus_client fastapi openai openai_harmony pybase64 llguidance diskcache xgrammar lm-format-enforcer partial-json-parser cbor2 einops gguf numba
```

**OpenLLM V1**
```
lm_eval \
  --model vllm \
  --model_args pretrained="RedHatAI/granite-4.0-h-small-FP8-block",dtype=auto,add_bos_token=True,max_model_len=16384,tensor_parallel_size=1,gpu_memory_utilization=0.9,enable_chunked_prefill=True,trust_remote_code=True \
  --tasks openllm \
  --write_out \
  --batch_size auto \
  --show_config
```

**OpenLLM V2**
```
lm_eval \
  --model vllm \
  --model_args pretrained="RedHatAI/granite-4.0-h-small-FP8-block",dtype=auto,add_bos_token=False,max_model_len=16384,tensor_parallel_size=1,gpu_memory_utilization=0.7,disable_log_stats=True,enable_chunked_prefill=True,trust_remote_code=True \
  --tasks leaderboard \
  --apply_chat_template \
  --fewshot_as_multiturn \
  --write_out \
  --batch_size auto \
  --show_config
```

**Coding Benchmarks**

```
evalplus.evaluate --model "RedHatAI/granite-4.0-h-small-FP8-block" \
  --dataset "humaneval" \
  --backend vllm \
  --tp 1 \
  --greedy

evalplus.evaluate --model "RedHatAI/granite-4.0-h-small-FP8-block" \
  --dataset "mbpp" \
  --backend vllm \
  --tp 1 \
  --greedy
```

</details>

### Accuracy Comparison
<table>
  <thead>
    <tr>
      <th>Category</th>
      <th>Metric</th>
      <th>ibm-granite/granite-4.0-h-small</th>
      <th>ibm-granite/granite-4.0-h-small-FP8</th>
      <th>RedHatAI/granite-4.0-h-small-FP8-block</th>
      <th>RedHatAI/granite-4.0-h-small-FP8-dynamic</th>
    </tr>
  </thead>
  <tbody>
    <!-- OpenLLM Leaderboard V1 -->
    <tr>
      <td rowspan="7"><b>OpenLLM V1</b></td>
      <td>ARC-Challenge (Acc-Norm, 25-shot)</td>
      <td>72.27</td>
      <td>72.10 (99.76%)</td>
      <td>72.27 (100.00%)</td>
      <td>72.10 (99.76%)</td>
    </tr>
    <tr>
      <td>GSM8K (Strict-Match, 5-shot)</td>
      <td>85.22</td>
      <td>85.29 (100.09%)</td>
      <td>85.52 (100.36%)</td>
      <td>84.84 (99.56%)</td>
    </tr>
    <tr>
      <td>HellaSwag (Acc-Norm, 10-shot)</td>
      <td>86.08</td>
      <td>85.88 (99.77%)</td>
      <td>85.96 (99.86%)</td>
      <td>85.88 (99.77%)</td>
    </tr>
    <tr>
      <td>MMLU (Acc, 5-shot)</td>
      <td>77.15</td>
      <td>77.18 (100.03%)</td>
      <td>77.23 (100.09%)</td>
      <td>77.18 (100.03%)</td>
    </tr>
    <tr>
      <td>TruthfulQA (MC2, 0-shot)</td>
      <td>57.64</td>
      <td>57.63 (99.99%)</td>
      <td>57.94 (100.52%)</td>
      <td>57.63 (100.00%)</td>
    </tr>
    <tr>
      <td>Winogrande (Acc, 5-shot)</td>
      <td>81.37</td>
      <td>81.45 (100.10%)</td>
      <td>80.82 (99.32%)</td>
      <td>81.45 (100.10%)</td>
    </tr>
    <tr>
      <td><b>Average Score</b></td>
      <td><b>76.62</b></td>
      <td><b>76.59 (99.96%)</b></td>
      <td><b>76.62 (100.00%)</b></td>
      <td><b>76.51 (99.86%)</b></td>
    </tr>
    <!-- OpenLLM Leaderboard V2 -->
    <tr>
      <td rowspan="7"><b>OpenLLM V2</b></td>
      <td>IFEval (Inst Level Strict Acc, 0-shot)</td>
      <td>87.53</td>
      <td>87.17 (99.59%)</td>
      <td>86.69 (99.04%)</td>
      <td>87.41 (99.86%)</td>
    </tr>
    <tr>
      <td>BBH (Acc-Norm, 3-shot)</td>
      <td>61.52</td>
      <td>61.31 (99.66%)</td>
      <td>61.40 (99.80%)</td>
      <td>61.19 (99.46%)</td>
    </tr>
    <tr>
      <td>Math-Hard (Exact-Match, 4-shot)</td>
      <td>46.22</td>
      <td>43.73 (94.61%)</td>
      <td>43.88 (94.93%)</td>
      <td>41.77 (90.36%)</td>
    </tr>
    <tr>
      <td>GPQA (Acc-Norm, 0-shot)</td>
      <td>35.23</td>
      <td>34.98 (99.29%)</td>
      <td>34.23 (97.14%)</td>
      <td>34.23 (97.14%)</td>
    </tr>
    <tr>
      <td>MUSR (Acc-Norm, 0-shot)</td>
      <td>46.69</td>
      <td>46.56 (99.72%)</td>
      <td>45.77 (98.02%)</td>
      <td>45.77 (98.02%)</td>
    </tr>
    <tr>
      <td>MMLU-Pro (Acc, 5-shot)</td>
      <td>47.99</td>
      <td>47.63 (99.26%)</td>
      <td>47.93 (99.88%)</td>
      <td>47.58 (99.15%)</td>
    </tr>
    <tr>
      <td><b>Average Score</b></td>
      <td><b>54.20</b></td>
      <td><b>53.56 (98.82%)</b></td>
      <td><b>53.32 (98.38%)</b></td>
      <td><b>52.99 (97.77%)</b></td>
    </tr>
  </tbody>
</table>
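
The values in parentheses are recovery percentages relative to the unquantized baseline, computed as quantized score divided by baseline score. A minimal sketch of that computation, using the ARC-Challenge row above (published values may have been computed from unrounded scores, so recomputing from the rounded table entries can differ in the last digit):

```python
def recovery(quantized: float, baseline: float) -> float:
    """Percentage of the baseline score retained after quantization."""
    return 100.0 * quantized / baseline

# ARC-Challenge (Acc-Norm, 25-shot): FP8-block and FP8-dynamic vs. baseline.
print(f"{recovery(72.27, 72.27):.2f}%")  # 100.00%
print(f"{recovery(72.10, 72.27):.2f}%")  # 99.76%
```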