---
pipeline_tag: text-generation
tags:
- cortexp.cpp
- featured
---

## Overview

**Google** developed and released the **Gemma 3** series, featuring multiple model sizes with both pre-trained and instruction-tuned variants. These multimodal models handle both text and image inputs while generating text outputs, making them versatile for various applications. Gemma 3 models are built from the same research and technology used to create the Gemini models, offering state-of-the-art capabilities in a lightweight and accessible format.

The Gemma 3 models come in four sizes with open weights, providing excellent performance across tasks like question answering, summarization, and reasoning while remaining efficient enough to deploy in resource-constrained environments such as laptops, desktops, or custom cloud infrastructure.

## Variants

### Gemma 3

| No | Variant | Branch | Cortex CLI command |
| -- | -------------------------------------------------------------- | ------ | ----------------------- |
| 1  | [Gemma-3-1B](https://huggingface.co/cortexso/gemma3/tree/1b)   | 1b     | `cortex run gemma3:1b`  |
| 2  | [Gemma-3-4B](https://huggingface.co/cortexso/gemma3/tree/4b)   | 4b     | `cortex run gemma3:4b`  |
| 3  | [Gemma-3-12B](https://huggingface.co/cortexso/gemma3/tree/12b) | 12b    | `cortex run gemma3:12b` |
| 4  | [Gemma-3-27B](https://huggingface.co/cortexso/gemma3/tree/27b) | 27b    | `cortex run gemma3:27b` |

Each branch contains a default quantized version.

## Key Features

- **Multimodal capabilities**: Handles both text and image inputs
- **Large context window**: 128K tokens
- **Multilingual support**: Over 140 languages
- **Available in multiple sizes**: From 1B to 27B parameters
- **Open weights**: For both pre-trained and instruction-tuned variants

## Use it with Jan (UI)

1. Install **Jan** using the [Quickstart](https://jan.ai/docs/quickstart) guide
2. Search for the model in the Jan Model Hub:
```bash
cortexso/gemma3
```

## Use it with Cortex (CLI)

1. Install **Cortex** using the [Quickstart](https://cortex.jan.ai/docs/quickstart) guide
2. Run the model with the command:
```bash
cortex run gemma3
```

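Once the model is running, you can also talk to it programmatically. A minimal Python sketch of an OpenAI-style chat-completion request is shown below; the endpoint URL and port `39281` are assumptions based on a default local Cortex install, so adjust them to match your configuration.

```python
import json
import urllib.request

# Build an OpenAI-style chat-completion payload for the gemma3 model.
# The host and port below are assumptions for a default local Cortex
# server; change them if your install listens elsewhere.
payload = {
    "model": "gemma3",
    "messages": [
        {"role": "user", "content": "Summarize what Gemma 3 is in one sentence."}
    ],
}

req = urllib.request.Request(
    "http://127.0.0.1:39281/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once `cortex run gemma3` has the model loaded:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the API is OpenAI-compatible, existing OpenAI client libraries should also work by pointing their base URL at the local server.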
## Credits

- **Author:** Google
- **Original License:** [Gemma License](https://ai.google.dev/gemma/terms)
- **Papers:** [Gemma 3 Technical Report](https://storage.googleapis.com/deepmind-media/gemma/Gemma3Report.pdf)