
Original Model Link : https://huggingface.co/PleIAs/Pleias-3b-Preview

name: Pleias-3b-Preview-Q8-mlx
license: apache-2.0
base_model: PleIAs/Pleias-3b-Preview
datasets: PleIAs/common_corpus
thumbnail: "https://cdn-avatars.huggingface.co/v1/production/uploads/65ff1816871b36bf84fc3c37/JZLA1RQXQ7NanLF5G9c6Q.png"
pipeline_tag: text-generation
library_name: mlx
hardware_type: "NVIDIA H100 x192"
hours_used: 480
cloud_provider: "GENCI"
cloud_region: "France"
co2_emitted: "16 tons CO2eq"
model_type: Llama/GPT-Neox
tags:
- text-to-text
- completions
funded_by:
- "Mozilla Foundation Local AI Program"
- "étalab"
task: 
- text-generation
- text-to-text
- text2text-generation
language:
- en
- fr
- es
- de
- it
- nl
- la
- pt
get_started_code:
- uvx --from mlx-lm mlx_lm.generate --model "darkshapes/Pleias-3b-Preview-Q8-mlx" --prompt '    def create_pipeline(self, architecture, *args, **kwargs):\n        """\n        Build a diffusers pipe based on model type\n      '

Pleias 3b MLX

Pleias is a fully open completion model: given a partial prompt, it infers and generates the remainder of the text.

From the original model card:

It includes the following features, which apply to any responsibly trained variant:

  • Trained only on open data under permissive licenses and in compliance with the European AI Act. By design, all Pleias models are unable to output copyrighted content.
  • Extensive multilingual support for main European languages.
  • A new tokenizer designed for enhanced document processing tasks and better multilingual support.
  • Extremely low level of toxicity and problematic content.

MLX is an array framework for Apple silicon that runs on the Metal GPU backend of Apple's ARM M-series processors (M1/M2/M3/M4).

Generation using uv (https://docs.astral.sh/uv/):

```sh
uvx --from mlx-lm mlx_lm.generate --model "darkshapes/Pleias-3b-Preview-Q8-mlx" --prompt '    def create_pipeline(self, architecture, *args, **kwargs):\n        """\n        Build a diffusers pipe based on model type\n      '
```

Generation using pip:

```sh
pip install mlx-lm
python -m mlx_lm.generate --model "darkshapes/Pleias-3b-Preview-Q8-mlx" --prompt '    def create_pipeline(self, architecture, *args, **kwargs):\n        """\n        Build a diffusers pipe based on model type\n      '
```
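The same completion can also be run programmatically. The sketch below uses the `load` and `generate` entry points of the mlx-lm Python API; it assumes `mlx-lm` is installed and that you are on an Apple silicon machine (the `max_tokens` value is an illustrative choice, not from this card):

```python
# Programmatic completion with mlx-lm (sketch; requires Apple silicon + mlx-lm).
MODEL_ID = "darkshapes/Pleias-3b-Preview-Q8-mlx"

# Pleias is a completion model, so the prompt is a partial text (here, the
# start of a Python method) that the model continues rather than "answers".
PROMPT = (
    '    def create_pipeline(self, architecture, *args, **kwargs):\n'
    '        """\n'
    '        Build a diffusers pipe based on model type\n'
    '      '
)

if __name__ == "__main__":
    from mlx_lm import load, generate  # pip install mlx-lm

    # Downloads the weights from the Hub on first use, then loads them.
    model, tokenizer = load(MODEL_ID)

    # Generate a continuation of the partial prompt.
    completion = generate(model, tokenizer, prompt=PROMPT, max_tokens=256)
    print(completion)
```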
Model size: 0.9B params (Safetensors, tensor types F16 and U32)