---
language:
- en
- zh
base_model: Qwen/Qwen3-0.6B-Base
tags:
- quantized
- qwen
- causal-lm
license: apache-2.0
---
# Quantized Qwen/Qwen3-0.6B-Base

This is a quantized version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base).

## Model Details

- **Base Model**: Qwen/Qwen3-0.6B-Base
- **Model Type**: Quantized Causal Language Model
- **Language(s)**: English, Chinese
- **License**: Apache 2.0
## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("your-username/your-repo-name")
model = AutoModelForCausalLM.from_pretrained("your-username/your-repo-name")

# Generate text
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Details

This model has been quantized from the base weights to reduce memory usage while aiming to preserve generation quality.
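This card does not state which quantization scheme was applied. As an illustrative sketch only (not necessarily the method used for this checkpoint), 4-bit loading via the `bitsandbytes` backend in `transformers` is configured like this:

```python
import torch
from transformers import BitsAndBytesConfig

# Hypothetical example: the actual quantization scheme of this
# repository is not documented in the card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit
    bnb_4bit_quant_type="nf4",              # NF4 data type
    bnb_4bit_compute_dtype=torch.bfloat16,  # dequantize to bf16 for matmuls
)
```

Passing `quantization_config=bnb_config` to `AutoModelForCausalLM.from_pretrained` would then load the weights quantized on a CUDA device (the `bitsandbytes` package is required).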
## Ethical Considerations

Please refer to the original model's documentation for ethical considerations and limitations.