---
title: SiMa.ai Organization
thumbnail: https://sima.ai/wp-content/uploads/2022/05/Sima-Logo.png
layout: profile
---

# Welcome to SiMa.ai on Hugging Face

SiMa.ai is the **Physical AI** company. We provide the industry’s most power-efficient platform for deploying high-performance AI at the edge.

This organization hosts **compiled and optimized model weights** specifically tailored for the SiMa.ai **Modalix™ MLSoC**.

---

## Our Model Library

Our models are optimized using the SiMa.ai **LLiMa** framework to ensure maximum throughput with minimal power consumption.

### Generative AI (LLMs)

We provide edge-optimized versions of leading architectures:

* **Llama 3.2 / 3.1** (3B, 8B)
* **Qwen3 / Qwen2.5**
* **Phi-3.5-mini**
* **Mistral-7B-v0.3**
* **Gemma 3**

### Vision-Language Models (VLMs)

Enable real-time visual reasoning and "Physical AI" capabilities:

* **Gemma 3** (4B)
* **Qwen3-VL / Qwen2.5-VL** (3B, 4B, 7B, 8B)
* **LFM2-VL** (450M, 1.6B, 3B)
### Speech Processing (ASR/TTS)
|
| 34 |
+
* **Whisper:** Optimized for real-time, low-latency transcription on edge hardware.
|
| 35 |
+
|
| 36 |
+
---
|
| 37 |
+
|
| 38 |
+
## Technical Note: `a16w4`
|
| 39 |
+
Most models in this repo use **a16w4** quantization (16-bit Activations, 4-bit Weights). This configuration is specifically tuned to leverage the proprietary hardware accelerators within the SiMa.ai MLSoC, providing the best "Performance per Watt" in the industry.
|
| 40 |
+
|
| 41 |
+
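
The general idea behind the weight side of an a16w4 scheme can be sketched in a few lines. This is a generic, illustrative example of symmetric per-tensor 4-bit quantization, not SiMa.ai's LLiMa implementation; the function names and the specific rounding/clamping scheme are assumptions for the sketch.

```python
# Illustrative sketch of 4-bit weight quantization (the "w4" in a16w4):
# weights are stored as signed 4-bit integers in [-8, 7] with a shared
# float scale, and dequantized back to floats at compute time.
# NOT SiMa.ai's implementation -- just the general technique.

def quantize_w4(weights):
    """Symmetric per-tensor quantization of floats to int4 values."""
    scale = max(abs(w) for w in weights) / 7.0 or 1.0  # 7 = largest positive int4
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_w4(q, scale):
    """Recover approximate float weights from int4 values."""
    return [v * scale for v in q]

weights = [0.42, -1.10, 0.07, 0.88]
q, scale = quantize_w4(weights)
approx = dequantize_w4(q, scale)
```

Each recovered weight differs from the original by at most one quantization step (`scale`), which is why 4-bit weights can preserve accuracy while cutting weight memory by roughly 4x versus 16-bit storage.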

---

## Getting Started

1. **Explore:** Browse our [Models tab](https://huggingface.co/simaai/models) to find the architecture you need.
2. **Compile & Deploy:** Use our [LLiMa Framework](https://docs.sima.ai/pages/genai/main.html) to integrate these weights into your edge application.
3. **Hardware:** Learn more about the [Modalix™ MLSoC](https://sima.ai/mlsoc-family/).

---

[Website](https://sima.ai) | [GitHub](https://github.com/SiMa-ai/) | [Contact Support](https://sima.ai/contact-us/)