---
license: apache-2.0
tags:
- reasoning
- multilingual
- transformer
- robi-labs
- delta
- lexa
- lexa-family
- lexa-delta
pipeline_tag: text-generation
---

# Model Card for Lexa-Delta

Lexa-Delta is a multilingual reasoning large language model, developed and trained independently by **Robi Labs**, designed for structured reasoning and natural conversation across multiple languages.

---

## Model Details

### Model Description

* **Developed by:** Robi Labs
* **Model type:** Causal language model (decoder-only transformer)
* **Language(s):** Multilingual (English, Spanish, French, German, Chinese, Hindi, and more)
* **License:** Apache 2.0 (see LICENSE file)

### Model Sources

* **Repository:** [https://huggingface.co/RobiLabs/Lexa-Delta](https://huggingface.co/RobiLabs/Lexa-Delta)
* **Website:** [https://labs.robiai.com](https://labs.robiai.com)
* **Lexa Chat:** [https://lexa.chat](https://lexa.chat) (coming soon)
* **Socials:**
  * [Twitter/X](https://twitter.com/justlexait)
  * [LinkedIn](https://www.linkedin.com/company/robilabsai)
  * [Instagram](https://www.instagram.com/robilabs)

---

## Uses

### Direct Use

Lexa-Delta can be used directly for:

* Multilingual question answering
* Chain-of-thought reasoning
* Conversational AI assistants
* Educational support (explaining concepts across languages)

### Downstream Use

* Fine-tuning for domain-specific tasks (e.g., legal, medical, educational); a minimal fine-tuning sketch follows this list
* Integration into applications and chat platforms
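
For the fine-tuning case, a common starting point is parameter-efficient fine-tuning with LoRA via the `peft` library. The sketch below is illustrative, not an official Robi Labs recipe: the dataset name is a placeholder, and the adapter target modules (`q_proj`, `v_proj`) and hyperparameters are assumptions that should be checked against the model's actual architecture.

```python
# Minimal LoRA fine-tuning sketch (assumptions: placeholder dataset,
# guessed target module names, illustrative hyperparameters).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model = AutoModelForCausalLM.from_pretrained("RobiLabs/Lexa-Delta", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("RobiLabs/Lexa-Delta")

# Wrap the base model with low-rank adapters; only adapter weights are trained.
lora = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM",
                  target_modules=["q_proj", "v_proj"])  # assumed module names
model = get_peft_model(model, lora)

# Placeholder corpus: any dataset with a "text" column works for causal LM tuning.
dataset = load_dataset("your-org/your-domain-corpus", split="train")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
    batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lexa-delta-lora",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           learning_rate=2e-4, num_train_epochs=1,
                           bf16=True, logging_steps=10),
    train_dataset=dataset,
    # mlm=False yields standard next-token (causal) labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lexa-delta-lora")  # saves only the adapter weights
```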

### Out-of-Scope Use

* Disallowed or harmful content generation
* High-stakes decision making without expert human oversight

---

## Bias, Risks, and Limitations

* May inherit biases from multilingual training data
* Reasoning ability may vary by language
* Can generate incorrect or hallucinated outputs

### Recommendations

Users should:

* Verify important information independently
* Avoid high-stakes reliance without human review
* Use responsibly with awareness of multilingual limitations

---

## How to Get Started with the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer; device_map="auto" places weights on available GPUs.
model = AutoModelForCausalLM.from_pretrained("RobiLabs/Lexa-Delta", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("RobiLabs/Lexa-Delta")

messages = [
    {"role": "system", "content": "You are Lexa-Delta, a multilingual reasoning model from Robi Labs."},
    {"role": "user", "content": "What is the capital of Armenia?"},
]

# Render the chat template and append the assistant turn marker before generating.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=200)

# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
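
For interactive applications, the same setup can stream tokens as they are generated instead of waiting for the full completion. This sketch reuses `model`, `tokenizer`, and `inputs` from the snippet above; `TextStreamer` is a standard Transformers utility, and the sampling settings are illustrative.

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(inputs, streamer=streamer, max_new_tokens=200,
               do_sample=True, temperature=0.7)
```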

---

## Training Details

### Training Data

* Multilingual reasoning data (sources undisclosed)

### Training Procedure

* **Method:** Full training
* **Precision:** Mixed precision
* **Compute:** B200 GPUs

#### Training Hyperparameters

* **Learning rate:** 2e-4
* **Sequence length:** 4096 tokens
* **Gradient accumulation:** enabled
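
Since per-device batch size, accumulation steps, and GPU count are not disclosed, the numbers below are assumptions; the sketch only illustrates how gradient accumulation scales the effective batch at the stated 4096-token sequence length.

```python
# Effective-batch arithmetic under gradient accumulation (illustrative values).
per_device_batch = 4    # assumption: sequences per GPU per forward pass
grad_accum_steps = 8    # assumption: micro-batches per optimizer step
num_gpus = 8            # assumption: data-parallel GPU count
seq_len = 4096          # from the hyperparameters above

effective_batch = per_device_batch * grad_accum_steps * num_gpus  # 256 sequences
tokens_per_step = effective_batch * seq_len                       # 1,048,576 tokens
print(f"{effective_batch} sequences = {tokens_per_step:,} tokens per optimizer step")
```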

---

## Environmental Impact

* **Hardware Type:** B200 GPUs
* **Region:** Not disclosed
* **Carbon Emitted:** Not disclosed

---

## Technical Specifications

### Model Architecture and Objective

* Decoder-only transformer
* Objective: Causal language modeling with reasoning-oriented training
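
The causal language modeling objective is standard next-token prediction: for a token sequence x_1, ..., x_T, the model minimizes the negative log-likelihood of each token given its left context (a generic statement of the objective, not a Robi Labs-specific formula):

```latex
\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta\left(x_t \mid x_{<t}\right)
```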

### Compute Infrastructure

* **Hardware:** B200 GPUs
* **Software:** PyTorch, Hugging Face Transformers

---

## Citation

```bibtex
@misc{lexa-delta,
  title        = {Lexa-Delta: A Multilingual Reasoning LLM},
  author       = {Robi Labs},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/RobiLabs/Lexa-Delta}}
}
```

---

## Model Card Authors

[Robi Labs](https://labs.robiai.com)

## Model Card Contact

* Website: [labs.robiai.com](https://labs.robiai.com)
* Email: [labs@robiai.com](mailto:labs@robiai.com)