Update README.md
README.md CHANGED
@@ -4,46 +4,21 @@ thumbnail: https://sima.ai/wp-content/uploads/2022/05/Sima-Logo.png
 layout: profile
 ---
 
-# Welcome to SiMa.ai on Hugging Face
-
-SiMa.ai
-
-## Our Model Library
-
-Our models are optimized using the SiMa.ai **LLiMa** framework to ensure maximum throughput with minimal power consumption.
-
-### Generative AI (LLMs)
-
-We provide edge-optimized versions of leading architectures:
-
-* **Llama 3.2 / 3.1** (3B, 8B)
-* **Qwen3 / Qwen2.5**
-* **Phi-3.5-mini**
-* **Mistral-7B-v0.3**
-* **Gemma3**
-
-### Vision-Language Models (VLMs)
-
-Enable real-time visual reasoning and "Physical AI" capabilities:
-
-* **Gemma 3** (4B)
-* **Qwen3-VL / Qwen2.5-VL** (3B, 4B, 7B, 8B)
-* **LFM2-VL** (450M, 1.6B, 3B)
-
-* **
-
----
-
-## Technical Note: `a16w4`
-
-Most models in this repo use **a16w4** quantization (16-bit Activations, 4-bit Weights). This configuration is specifically tuned to leverage the proprietary hardware accelerators within the SiMa.ai MLSoC, providing the best "Performance per Watt" in the industry.
-
----
 
 ---
-[Website](https://sima.ai) | [
+# SiMa.ai | Scaling Physical AI at the Edge
+
+SiMa.ai provides a purpose-built platform for deploying high-performance AI models for vision, audio, and generative applications at the edge with industry-leading power efficiency.
+
+Download our pre-optimized, ready-to-deploy models for the **Modalix™ MLSoC** below.
+
+* **Vision-Language (VLMs):** Gemma 3, Qwen2.5-VL, Qwen3-VL, and LFM2-VL.
+* **Generative AI (LLMs):** Llama 3.2/3.1, Phi-3.5-mini, Mistral-7B, Qwen2.5/3, and Gemma.
+* **Audio & Speech:** Whisper.
+
+### Deployment & Tools
+
+* Models are optimized using **LLiMa** (Automated Code Generation) for **sub-10W inference**.
+* Deployment supports **a16w4** and **Hybrid a16w8/4** quantization to maximize Performance-per-Watt.
+* Join our [Developer Portal](https://developer.sima.ai/) or visit our [GitHub](https://github.com/SiMa-ai/) for documentation and SDK access.
+
+[Website](https://sima.ai) | [Contact Support](https://sima.ai/contact-us/)
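Both versions of the card reference **a16w4** quantization (16-bit activations, 4-bit weights). As a rough illustration of what that scheme means, here is a minimal NumPy sketch; the function names are hypothetical, and the symmetric per-tensor scheme is an assumption for illustration only, not SiMa.ai's actual LLiMa quantizer:

```python
import numpy as np

def quantize_weights_4bit(w):
    """Hypothetical symmetric per-tensor quantizer: floats -> signed 4-bit codes."""
    scale = float(np.max(np.abs(w))) / 7.0  # map the largest |weight| to code 7
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)  # codes fit in 4 bits
    return q, scale

def a16w4_matmul(x, q, scale):
    """Matmul with 16-bit activations and weights dequantized from 4-bit codes."""
    x16 = x.astype(np.float16)                      # a16: 16-bit activations
    w16 = q.astype(np.float16) * np.float16(scale)  # w4: dequantize the codes
    return x16 @ w16

# Example: quantize a random weight matrix and run one layer.
rng = np.random.default_rng(0)
w = rng.standard_normal((8, 8)).astype(np.float32)
x = rng.standard_normal((2, 8)).astype(np.float32)
q, scale = quantize_weights_4bit(w)
y = a16w4_matmul(x, q, scale)  # float16 output, shape (2, 8)
```

Storing weights as 4-bit codes cuts weight memory roughly 4x versus fp16 while keeping activations at 16-bit precision, which is why such schemes matter for power-constrained edge inference.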