Update README.md
README.md
CHANGED
@@ -7,177 +7,72 @@ sdk: static
 pinned: false
 ---
 <!DOCTYPE html>
- … (removed lines 10–77 are not shown in this view)
-</tr>
-<tr>
-<td><a href="https://learn.arm.com/learning-paths/servers-and-cloud-computing/benchmark-nlp/" target="_blank">Accelerate NLP models from Hugging Face on Arm servers</a></td>
-<td>Cloud & Datacenter</td>
-<td><a href="https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english" target="_blank">Distilbert Base Uncased Finetuned Sst 2 English</a></td>
-</tr>
-<tr>
-<td><a href="https://learn.arm.com/learning-paths/servers-and-cloud-computing/llama-cpu/" target="_blank">Deploy a Large Language Model (LLM) chatbot with llama.cpp using KleidiAI on Arm servers</a></td>
-<td>Cloud & Datacenter</td>
-<td><a href="https://huggingface.co/cognitivecomputations/dolphin-2.9.4-llama3.1-8b-gguf" target="_blank">Dolphin 2.9.4 Llama3.1 8B Gguf</a></td>
-</tr>
-<tr>
-<td><a href="https://learn.arm.com/learning-paths/servers-and-cloud-computing/milvus-rag/" target="_blank">Build a RAG application using Zilliz Cloud on Arm servers</a></td>
-<td>Cloud & Datacenter</td>
-<td><a href="https://huggingface.co/cognitivecomputations/dolphin-2.9.4-llama3.1-8b-gguf" target="_blank">Dolphin 2.9.4 Llama3.1 8B Gguf</a></td>
-</tr>
-<tr>
-<td><a href="https://learn.arm.com/learning-paths/servers-and-cloud-computing/pytorch-llama/" target="_blank">Run a Large Language Model (LLM) chatbot with PyTorch using KleidiAI on Arm servers</a></td>
-<td>Cloud & Datacenter</td>
-<td><a href="https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct" target="_blank">Llama 3.1 8B Instruct</a></td>
-</tr>
-<tr>
-<td><a href="https://learn.arm.com/learning-paths/servers-and-cloud-computing/rag/" target="_blank">Deploy a RAG-based Chatbot with llama-cpp-python using KleidiAI on Google Axion processors</a></td>
-<td>Cloud & Datacenter</td>
-<td><a href="https://huggingface.co/chatpdflocal/llama3.1-8b-gguf" target="_blank">Llama3.1 8B Gguf</a></td>
-</tr>
-<tr>
-<td><a href="https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/build-llama3-chat-android-app-using-executorch-and-xnnpack/" target="_blank">Build an Android chat app with Llama, KleidiAI, ExecuTorch, and XNNPACK</a></td>
-<td>Smartphone</td>
-<td><a href="https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct" target="_blank">Llama 3.2 1B Instruct</a></td>
-</tr>
-<tr>
-<td><a href="https://learn.arm.com/learning-paths/embedded-and-microcontrollers/llama-python-cpu/" target="_blank">Run a local LLM chatbot on a Raspberry Pi 5</a></td>
-<td>Raspberry Pi</td>
-<td><a href="https://huggingface.co/Aryanne/Orca-Mini-3B-gguf" target="_blank">Orca Mini 3B Gguf</a></td>
-</tr>
-<tr>
-<td><a href="https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/build-android-chat-app-using-onnxruntime/" target="_blank">Build an Android chat application with ONNX Runtime API</a></td>
-<td>Smartphone</td>
-<td><a href="https://huggingface.co/microsoft/Phi-3-vision-128k-instruct-onnx-cuda" target="_blank">Phi 3 Vision 128K Instruct Onnx Cuda</a></td>
-</tr>
-<tr>
-<td><a href="https://learn.arm.com/learning-paths/servers-and-cloud-computing/rtp-llm/" target="_blank">Run an LLM chatbot with rtp-llm on Arm-based servers</a></td>
-<td>Cloud & Datacenter</td>
-<td><a href="https://huggingface.co/Qwen/Qwen2-0.5B-Instruct" target="_blank">Qwen2 0.5B Instruct</a></td>
-</tr>
-<tr>
-<td><a href="https://learn.arm.com/learning-paths/servers-and-cloud-computing/vllm/" target="_blank">Build and Run a Virtual Large Language Model (vLLM) on Arm Servers</a></td>
-<td>Cloud & Datacenter</td>
-<td><a href="https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct" target="_blank">Qwen2.5 0.5B Instruct</a></td>
-</tr>
-<tr>
-<td><a href="https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/kleidiai-on-android-with-mediapipe-and-xnnpack/" target="_blank">LLM inference on Android with KleidiAI, MediaPipe, and XNNPACK</a></td>
-<td>Smartphone</td>
-<td><a href="https://huggingface.co/google/gemma-2b" target="_blank">Gemma 2B</a></td>
-</tr>
-<tr>
-<td><a href="https://learn.arm.com/learning-paths/embedded-and-microcontrollers/rpi-llama3/" target="_blank">Run Llama 3 on a Raspberry Pi 5 using ExecuTorch</a></td>
-<td>Raspberry Pi</td>
-<td><a href="https://huggingface.co/meta-llama/Llama-3.1-8B" target="_blank">Llama 3.1 8B</a></td>
-</tr>
-<tr><th colspan="3" style="background-color:#555; color:#fff;">CV: Image Classification & Object Detection</th></tr>
-<tr>
-<td><a href="https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/" target="_blank">Profile the Performance of AI and ML Mobile Applications on Arm</a></td>
-<td>Smartphone</td>
-<td><a href="https://huggingface.co/google/mobilenet_v2_1.0_224" target="_blank">Mobilenet V2 1.0 224</a></td>
-</tr>
-<tr>
-<td><a href="https://learn.arm.com/learning-paths/embedded-and-microcontrollers/yolo-on-himax/" target="_blank">Run a Computer Vision Model on a Himax Microcontroller</a></td>
-<td>IoT</td>
-<td><a href="https://huggingface.co/Ultralytics/YOLOv8" target="_blank">Yolov8</a></td>
-</tr>
-<tr>
-<td><a href="https://docs.pytorch.org/executorch/stable/backends-arm-ethos-u.html" target="_blank">Export a simple PyTorch model (e.g. MobileNetV2) for ExecuTorch Arm Ethos-U</a></td>
-<td>IoT</td>
-<td><a href="https://huggingface.co/google/mobilenet_v2_1.0_224" target="_blank">Distilbert Base Uncased Finetuned Sst 2 English</a></td>
-</tr>
-<tr><th colspan="3" style="background-color:#555; color:#fff;">Sentiment Analysis</th></tr>
-<tr>
-<td><a href="https://learn.arm.com/learning-paths/servers-and-cloud-computing/nlp-hugging-face/" target="_blank">Run a Natural Language Processing (NLP) model from Hugging Face on Arm servers</a></td>
-<td>Cloud & Datacenter</td>
-<td><a href="https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest" target="_blank">Twitter Roberta Base Sentiment Latest</a></td>
-</tr>
-<tr>
-<td><a href="https://learn.arm.com/learning-paths/servers-and-cloud-computing/benchmark-nlp/" target="_blank">Accelerate NLP models from Hugging Face on Arm servers</a></td>
-<td>Cloud & Datacenter</td>
-<td><a href="https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english" target="_blank">Distilbert Base Uncased Finetuned Sst 2 English</a></td>
-</tr>
-</tbody>
-</table>
-
-<br>
-<strong>Arm Kleidi: Unleashing Mass-Market AI Performance on Arm</strong>
-<p>Arm Kleidi developer enablement technologies and kernel libraries unlock AI performance across the world's popular AI frameworks and runtimes, accelerating AI workloads everywhere on Arm. Ecosystem integrations help application developers achieve top performance by default and future-proof performance, with no additional work or skills investment.</p>
-<p><b>Useful Resources on Arm Kleidi:</b></p>
-<ul>
-<li>Arm KleidiAI for optimizing any AI framework: <a href="https://gitlab.arm.com/kleidi/kleidiai" target="_blank">Gitlab repo</a> and <a href="https://community.arm.com/arm-community-blogs/b/ai-and-ml-blog/posts/kleidiai" target="_blank">blog</a></li>
-<li>Arm KleidiCV for optimizing any computer vision framework: <a href="https://gitlab.arm.com/kleidi/kleidicv" target="_blank">Gitlab repo</a> and <a href="https://community.arm.com/arm-community-blogs/b/ai-and-ml-blog/posts/kleidicv" target="_blank">blog</a></li>
-<li><a href="https://github.com/ARM-software/ComputeLibrary" target="_blank">Arm Compute Library for all AI software</a></li>
-</ul>
-
-<p><i><small>Note: The data collated here is sourced from Arm and third parties. While Arm uses reasonable efforts to keep this information accurate, Arm does not warrant (express or implied) or provide any guarantee of data correctness due to the ever-evolving AI and software landscape. Any links to third-party sites and resources are provided for ease and convenience. Your use of such third-party sites and resources is subject to the third party’s terms of use, and use is at your own risk.</small></i></p>
-
-</body>
-</html>
+
+# Accelerate AI model deployment from cloud to edge
+
+Arm on Hugging Face helps developers deploy Hugging Face models faster, with optimized performance on Arm-based devices and platforms. Our guides, tools, and learning paths show how Arm integrates with major operating systems and frameworks, making it easier to build, optimize, and scale AI models across real-world use cases, from cloud to edge and gaming to mobile.
+
+## Follow our curated Learning Paths to:
+- Explore Arm-optimized AI models available in our Hugging Face Model Collections
+- Use libraries and ML frameworks such as PyTorch, ExecuTorch, llama.cpp, ONNX Runtime, and KleidiAI
+- Streamline your journey from discovery to deployment across AI use cases such as real-time chatbots, sentiment analysis, neural graphics, object detection, and more
+
+## What can I build with Arm on Hugging Face?
+Explore curated learning paths using Hugging Face models, optimized to run on platforms like Raspberry Pi, smartphones, and Arm-based cloud servers.
+
+## Neural Graphics
+
+| Learning Path | Frameworks & Tools Used | Model(s) Featured | Market Application | Examples | Arm Learning Path |
+|---|---|---|---|---|---|
+| Neural Super Sampling in Unreal Engine | [NSS Plugin for Unreal®](https://github.com/arm/neural-graphics-for-unreal/)<br>[Unreal® NNE Plugin for ML extensions for Vulkan](https://github.com/arm/ml-extensions-for-vulkan-unreal-plugin/)<br>[Neural Graphics Model Gym](https://github.com/arm/neural-graphics-model-gym) | Neural Super Sampling (NSS) | Smartphone | Graphics upscaling<br>[Enchanted Castle Demo](https://www.youtube.com/watch?v=XmdsWErzwC0) | [Run NSS in Unreal →](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/nss-unreal/?utm_source=hugging-face&utm_medium=social-organic&utm_content=learningpath&utm_campaign=mk24_developer_devhub) |
+
+## Generative AI
+
+| Learning Path | Frameworks & Tools Used | Model(s) Featured | Market Application | Examples | Arm Learning Path |
+|---|---|---|---|---|---|
+| Build a RAG application | Zilliz Cloud, llama.cpp | All MiniLM L6 V2 | Cloud & Datacenter | Document retrieval + Q&A pipelines | [Build with Zilliz →](https://learn.arm.com/learning-paths/servers-and-cloud-computing/milvus-rag/?utm_source=hugging-face&utm_medium=social-organic&utm_content=learningpath&utm_campaign=mk24_developer_devhub) |
+| Accelerate NLP models for faster inference | PyTorch, KleidiAI | DistilBERT Base Uncased SST-2 | Cloud & Datacenter | Sentiment analysis, text classification | [Accelerate NLP →](https://learn.arm.com/learning-paths/servers-and-cloud-computing/benchmark-nlp/benchmark-nlp-hf/?utm_source=hugging-face&utm_medium=social-organic&utm_content=learningpath&utm_campaign=mk24_developer_devhub) |
+| Deploy an LLM chatbot with optimized performance | llama.cpp, KleidiAI | Dolphin 2.9.4 Llama 3.1 8B GGUF | Cloud & Datacenter | Real-time chatbots, enterprise assistants | [Deploy with llama.cpp →](https://learn.arm.com/learning-paths/servers-and-cloud-computing/llama-cpu/llama-chatbot/?utm_source=hugging-face&utm_medium=social-organic&utm_content=learningpath&utm_campaign=mk24_developer_devhub) |
+| Run an LLM chatbot with PyTorch | PyTorch, Torchchat, Streamlit, KleidiAI | Llama 3.1 8B Instruct | Cloud & Datacenter | Inference pipelines with PyTorch | [Run with PyTorch →](https://learn.arm.com/learning-paths/servers-and-cloud-computing/pytorch-llama/pytorch-llama/?utm_source=hugging-face&utm_medium=social-organic&utm_content=learningpath&utm_campaign=mk24_developer_devhub) |
+| Deploy a RAG chatbot on Google Axion processors | llama-cpp-python, Faiss, KleidiAI | Llama 3.1 8B GGUF | Cloud & Datacenter | RAG-based assistants at cloud scale | [Deploy with Axion →](https://learn.arm.com/learning-paths/servers-and-cloud-computing/rag/?utm_source=hugging-face&utm_medium=social-organic&utm_content=learningpath&utm_campaign=mk24_developer_devhub) |
+| Build an Android chat app | ExecuTorch, XNNPACK, KleidiAI | Llama 3.2 1B Instruct | Smartphone | On-device chat apps | [Build on Android →](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/build-llama3-chat-android-app-using-executorch-and-xnnpack/?utm_source=hugging-face&utm_medium=social-organic&utm_content=learningpath&utm_campaign=mk24_developer_devhub) |
+| Run Llama 3 on Raspberry Pi 5 | ExecuTorch | Llama 3.1 8B | Raspberry Pi | Edge LLM deployment | [Run Llama 3 on Pi 5 →](https://learn.arm.com/learning-paths/embedded-and-microcontrollers/rpi-llama3/?utm_source=hugging-face&utm_medium=social-organic&utm_content=learningpath&utm_campaign=mk24_developer_devhub) |
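Several of the chatbot paths above load GGUF model files (the Dolphin and Llama 3.1 models). After downloading one, a quick sanity check is to read the fixed-size GGUF header. This is a hedged stdlib sketch: it assumes the standard little-endian GGUF layout (4-byte magic `GGUF`, uint32 version, uint64 tensor count, uint64 metadata key/value count) and uses synthetic bytes in place of a real multi-gigabyte model file.

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed GGUF header from the start of a model file.

    Layout (little-endian): 4-byte magic b'GGUF', uint32 version,
    uint64 tensor count, uint64 metadata key/value count.
    """
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensors": n_tensors, "metadata_kv": n_kv}

# Synthetic stand-in for the first 24 bytes of a real .gguf file;
# on disk you would pass open(path, "rb").read(24) instead.
sample = struct.pack("<4sIQQ", b"GGUF", 3, 291, 19)
print(read_gguf_header(sample))  # → {'version': 3, 'tensors': 291, 'metadata_kv': 19}
```

The version and tensor count are a cheap way to catch a truncated or mislabeled download before handing the file to llama.cpp.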
+
+## CV: Image Classification & Object Detection
+
+| Learning Path | Frameworks & Tools Used | Model(s) Featured | Market Application | Examples | Arm Learning Path |
+|---|---|---|---|---|---|
+| Profile AI/ML performance on mobile apps | ExecuTorch, [Arm Performance Studio](https://developer.arm.com/Tools%20and%20Software/Arm%20Performance%20Studio?utm_source=hugging-face&utm_medium=social-organic&utm_content=learningpath&utm_campaign=mk24_developer_devhub), Android Studio Profiler | MobileNet V2 1.0 224 | Smartphone | App performance benchmarking | [Profile mobile apps →](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/?utm_source=hugging-face&utm_medium=social-organic&utm_content=learningpath&utm_campaign=mk24_developer_devhub) |
+| Run CV models on microcontrollers | Himax MCU, Arm toolchain | YOLOv8 | IoT | Object detection on MCUs | [Run on MCU →](https://learn.arm.com/learning-paths/embedded-and-microcontrollers/yolo-on-himax/?utm_source=hugging-face&utm_medium=social-organic&utm_content=learningpath&utm_campaign=mk24_developer_devhub) |
+| Export PyTorch models for edge devices | PyTorch, ExecuTorch | DistilBERT Base Uncased SST-2 | IoT | Deploy compact AI models on MCUs | [Export with ExecuTorch →](https://learn.arm.com/learning-paths/embedded-and-microcontrollers/training-inference-pytorch/?utm_source=hugging-face&utm_medium=social-organic&utm_content=learningpath&utm_campaign=mk24_developer_devhub) |
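Detectors like YOLOv8 expect a fixed square input, so camera frames are typically letterboxed: scaled to fit while preserving aspect ratio, then padded. The resize arithmetic is the same on any platform; here is a stdlib sketch of it (the 640×640 target is an assumption taken from common YOLOv8 defaults, not from the learning path itself).

```python
def letterbox_params(width: int, height: int, target: int = 640):
    """Compute resize-and-pad parameters that fit a width x height image
    into a target x target square while preserving aspect ratio."""
    scale = min(target / width, target / height)   # shrink/grow to fit
    new_w, new_h = round(width * scale), round(height * scale)
    pad_x = (target - new_w) // 2                  # left/right padding
    pad_y = (target - new_h) // 2                  # top/bottom padding
    return new_w, new_h, pad_x, pad_y

# A 720p camera frame scaled into a 640x640 model input:
print(letterbox_params(1280, 720))  # → (640, 360, 0, 140)
```

The same numbers are reused after inference to map box coordinates back into the original frame, so keeping the scale and padding around is worth the small bookkeeping.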
+
+## Sentiment Analysis
+
+| Learning Path | Frameworks & Tools Used | Model(s) Featured | Market Application | Examples | Arm Learning Path |
+|---|---|---|---|---|---|
+| Accelerate NLP models from Hugging Face on Arm servers | PyTorch | DistilBERT Base Uncased SST-2 | Cloud & Datacenter | Text classification, sentiment analysis | [Accelerate NLP on Arm →](https://learn.arm.com/learning-paths/servers-and-cloud-computing/nlp-hugging-face/?utm_source=hugging-face&utm_medium=social-organic&utm_content=learningpath&utm_campaign=mk24_developer_devhub) |
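The SST-2 fine-tune of DistilBERT emits two logits per input (index 0 = NEGATIVE, index 1 = POSITIVE); turning them into a label and confidence is just a softmax. A stdlib sketch of that post-processing step, assuming the standard SST-2 label ordering; the logit values below are illustrative, not taken from a real model run.

```python
import math

LABELS = ("NEGATIVE", "POSITIVE")  # SST-2 head ordering (index 0, index 1)

def classify(logits):
    """Softmax over the two SST-2 logits; return (label, confidence)."""
    m = max(logits)                              # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    probs = [e / sum(exps) for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs[best]

label, score = classify([-2.1, 3.4])  # illustrative logits
print(label, round(score, 4))  # → POSITIVE 0.9959
```

The same two lines of softmax work for any two-class head; only the `LABELS` tuple depends on the model card.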
+
+## Speed Up AI Model Inference with Arm Kleidi
+
+Arm Kleidi, comprising KleidiAI and KleidiCV, delivers out-of-the-box AI acceleration across popular frameworks such as PyTorch, llama.cpp, MediaPipe (via XNNPACK), and ONNX Runtime by integrating highly optimized micro-kernels tailored to Arm CPU architectures.
+
+These lightweight libraries use advanced Arm instructions such as Neon, SVE, and SME to deliver faster inference, with no code changes, retraining, or extra tooling. Developers get immediate performance gains while continuing to use familiar frameworks.
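The micro-kernels are selected at runtime based on which instruction-set extensions the CPU reports; on Linux these appear as flags on the `Features` line of `/proc/cpuinfo` (`asimd` is Neon, `sve`, `sme`, and so on). A hedged stdlib sketch that extracts those flags; the sample text is illustrative, not the output of any particular SoC.

```python
def arm_features(cpuinfo_text: str) -> set:
    """Collect the flags from the first 'Features' line of
    /proc/cpuinfo-style text (e.g. asimd = Neon, sve, sme, i8mm)."""
    for line in cpuinfo_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "Features":
            return set(value.split())
    return set()

# Illustrative excerpt; on a real machine pass open("/proc/cpuinfo").read().
sample = """processor : 0
Features  : fp asimd evtstrm aes sha1 sha2 crc32 atomics fphp asimdhp sve i8mm
CPU implementer : 0x41
"""
flags = arm_features(sample)
print("Neon:", "asimd" in flags, "SVE:", "sve" in flags, "SME:", "sme" in flags)
# → Neon: True SVE: True SME: False
```

Checking these flags is a quick way to confirm which acceleration paths a given server or phone can actually take.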
+
+### What You Can Do with Arm Kleidi:
+- Accelerate Hugging Face models on real hardware
+- Boost performance for computer vision, NLP, and generative AI workloads
+- Use your existing models, with no retraining required
+- Integrate with familiar frameworks and runtimes
+- Optimize for cloud, mobile, edge, and microcontroller platforms
+
+### Key Resources:
+- [Arm KleidiAI GitLab repo](https://gitlab.arm.com/kleidi/kleidiai) – general-purpose AI acceleration
+- [Arm KleidiCV GitLab repo](https://gitlab.arm.com/kleidi/kleidicv) – optimization for computer vision models
+- [Arm Compute Library on GitHub](https://github.com/ARM-software/ComputeLibrary) – low-level acceleration for AI software
+
+## Get started
+- [Arm Developer Program](https://developer.arm.com/arm-developer-program?utm_source=hugging-face&utm_medium=social-organic&utm_content=learningpath&utm_campaign=mk24_developer_devhub)
+- [Arm Developers Discord](https://discord.com/invite/armsoftwaredev)
+
+_Note: The data collated here is sourced from Arm and third parties. While Arm uses reasonable efforts to keep this information accurate, Arm does not warrant (express or implied) or provide any guarantee of data correctness due to the ever-evolving AI and software landscape. Any links to third-party sites and resources are provided for ease and convenience. Your use of such third-party sites and resources is subject to the third party’s terms of use, and use is at your own risk._