
Sampling Parameters: For optimal performance, we recommend using temperatures close to zero (0–0.2). Additionally, we advise against using any kind of repetition penalty, as, in our experience, it negatively impacts the instructed model's responses.

ALIA-40b-instruct Model Card

ALIA-40b-instruct-2512 is the latest release in the ALIA model family. While development is ongoing and further updates are expected, this version already incorporates several notable improvements over previous releases.

Main improvements

  • Instruction Following: Enhanced alignment and instruction-tuning, leading to more reliable adherence to user intent across a wide range of tasks.
  • Input Robustness: Strengthened resilience to noisy, ambiguous, or malformed user inputs, resulting in more stable and predictable responses.
  • Safety: Improved safety alignment, reducing the likelihood of generating responses related to sensitive or restricted topics and improving resistance to attacks, while maintaining helpfulness on allowed content.

The ALIA-40b-instruct model is an instructed variant of a context-extended base ALIA-40b model, trained on a total of 9.83 trillion tokens of carefully curated data spanning 35 European languages (plus code). This instructed version is optimized to follow user prompts and engage in dialogue. It supports a broad range of languages (e.g., Spanish, Catalan, Basque, and English) and is capable of text generation, translation, summarization, and question answering in these languages. This version has also gone through a preliminary alignment phase for helpfulness and safety using synthetically generated preference pairs.

In keeping with our commitment to open-source development, all tools and sources used to process and create the training data are open-licensed. For clarity, our definition of open-licensed excludes any source, tool, model, or dataset whose terms of use impose restrictive conditions that impede standard open reuse.

This model is released under the permissive Apache 2.0 license. Along with the open weights, all training scripts and configuration files are made publicly available in this GitHub repository.

To visit the model cards of other model versions, please refer to the Model Index.


Model Details

Description

ALIA-40b is a transformer-based, decoder-only language model that was pre-trained from scratch on 9.37 trillion tokens of meticulously curated data. It subsequently underwent continued pretraining on an additional 424 billion high-quality tokens, and was further extended with a supplementary 39 billion tokens drawn from a similarly diverse mixture, for a total of 9.83 trillion tokens.

ALIA-40b-Instruct is an instructed variant of this latest ALIA-40b version. Its development process comprises three consecutive stages, each targeting a specific capability: (1) long-context adaptation to extend the model’s context window, (2) supervised fine-tuning to improve instruction-following capabilities, and (3) an alignment stage to better match human preferences and improve safety.

After the long-context adaptation, the post-training process begins with a supervised fine-tuning (SFT) stage, performed over 808k conversation samples to strengthen instruction following and add conversational capabilities.

In the third stage, the model is aligned with human preferences through Direct Preference Optimization (DPO) using a mixture of 368k preference pairs. Of this mixture, approximately 82% of the pairs target general model helpfulness, while 18% focus on response safety.

Although the base model is highly multilingual, the post-training process concentrated primarily on Spanish, Catalan, Basque, Galician, and English. We also incorporated data from other related languages where inclusion empirically improved the performance on the target languages. However, performance in those additional languages is not guaranteed due to the limited amount of available data and the scarcity of evaluation resources.

Hyperparameters

Here we list the specific hyperparameters used during the different training stages.

Long context CPT

| Hyperparameter | Value |
|---|---|
| Learning rate | 9e-7 |
| LR Scheduler | Constant |
| Tokens per update | 4M |
| Training tokens (4k → 32k) | 2B |
| Training tokens (32k → 160k) | 36.8B |

Supervised Fine-Tuning (SFT)

| Hyperparameter | Value |
|---|---|
| Learning rate | 1e-5 |
| Batch size | 1024 |
| Epochs | 1 |
| LR Scheduler | Cosine |
| Warmup Ratio | 0.03 |
| NEFTune Noise Alpha | 5 |
| Number of Samples | 807,750 |
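
For reference, the sketch below shows how these SFT hyperparameters could be expressed with the TRL library's SFTConfig. This is a minimal illustration under stated assumptions, not the released recipe: the dataset file, per-device batch size, and accumulation steps are placeholders, and the actual SFT stage used an internal FastChat fork (see Training Framework below).

```python
# Minimal sketch: mapping the SFT hyperparameters above onto TRL's SFTConfig.
# The dataset file and batch-size split are illustrative assumptions.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical file holding the 808k conversation samples.
train_dataset = load_dataset("json", data_files="sft_conversations.jsonl", split="train")

config = SFTConfig(
    output_dir="alia-40b-sft",
    learning_rate=1e-5,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    neftune_noise_alpha=5,          # NEFTune noise, as reported above
    per_device_train_batch_size=2,  # combined with accumulation and data
    gradient_accumulation_steps=8,  # parallelism to reach the 1024 batch size
)

trainer = SFTTrainer(
    model="BSC-LT/ALIA-40b",  # in practice, the long-context-adapted checkpoint
    args=config,
    train_dataset=train_dataset,
)
trainer.train()
```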

Alignment

| Hyperparameter | Value |
|---|---|
| Learning rate | 2e-6 |
| Batch size | 1024 |
| Epochs | 2 |
| Beta | 0.1 |
| LR Scheduler | Linear |
| Number of samples | 368,475 |
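
Similarly, the alignment-stage hyperparameters map onto TRL's DPOConfig, TRL being the library reported for this stage (see Training Framework below). The sketch below is illustrative: the checkpoint path and dataset file are placeholders.

```python
# Minimal sketch: the alignment hyperparameters above expressed as a TRL DPOConfig.
# The checkpoint path and dataset file are placeholders; real preference pairs
# carry "prompt" / "chosen" / "rejected" fields.
from datasets import load_dataset
from transformers import AutoTokenizer
from trl import DPOConfig, DPOTrainer

preference_pairs = load_dataset("json", data_files="preference_pairs.jsonl", split="train")
tokenizer = AutoTokenizer.from_pretrained("alia-40b-sft")  # SFT checkpoint from the previous stage

config = DPOConfig(
    output_dir="alia-40b-dpo",
    beta=0.1,                  # DPO beta, as reported above
    learning_rate=2e-6,
    num_train_epochs=2,
    lr_scheduler_type="linear",
)

trainer = DPOTrainer(
    model="alia-40b-sft",
    args=config,
    processing_class=tokenizer,
    train_dataset=preference_pairs,
)
trainer.train()
```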

Architecture

| Attribute | Value |
|---|---|
| Total Parameters | 40,433,885,184 |
| Embedding Parameters | 2,097,152,000 |
| Layers | 48 |
| Hidden size | 8,192 |
| Attention heads | 64 |
| Context length | 163,840 |
| Vocabulary size | 256,000 |
| Precision | bfloat16 |
| Embedding type | RoPE |
| Activation Function | SwiGLU |
| Layer normalization | RMS Norm |
| Flash attention | ✅ |
| Grouped Query Attention | ✅ |
| Num. query groups | 8 |
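
These attributes can be cross-checked against the released configuration. A quick sketch, assuming the model exposes standard Hugging Face (LLaMA-style) config field names:

```python
# Quick check of the architecture attributes from the released config.
# Field names assume a standard LLaMA-style Hugging Face configuration.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("BSC-LT/ALIA-40b-instruct-2512")
print(config.num_hidden_layers)        # expected: 48
print(config.hidden_size)              # expected: 8192
print(config.num_attention_heads)      # expected: 64
print(config.num_key_value_heads)      # expected: 8 (grouped-query attention)
print(config.max_position_embeddings)  # expected: 163840
print(config.vocab_size)               # expected: 256000
```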

Intended Use

Direct Use

ALIA‑40b‑instruct is intended for research and development purposes as a general-purpose multilingual assistant. It can be used to generate text, answer questions, translate between supported languages, and follow user instructions in those languages. As noted in the ALIA-40b base model card, the ALIA family is aimed at both research and commercial use in any of the covered languages. In practice, ALIA-40b-instruct is best suited for tasks such as multilingual chatbots, summarization, translation, and content generation, provided users are aware of its limitations.

Out-of-scope Use

The model is not intended for malicious activities, such as harming others or violating human rights. Any downstream application must comply with current laws and regulations. Irresponsible usage in production environments without proper risk assessment and mitigation is also discouraged.


Hardware and Software

Training Framework

The post-training process was conducted using two complementary frameworks, each selected to best support its corresponding stage:

  • Supervised Fine-Tuning (SFT): Conducted with an internal fork of the FastChat codebase, adapted to our infrastructure and optimized for stability and efficiency in our use case.
  • Alignment Stage: Implemented with the TRL (Transformers Reinforcement Learning) library, applied to preference-pair training to achieve preliminary alignment with human preferences.

Compute Infrastructure

All models were trained on MareNostrum 5, a pre-exascale EuroHPC supercomputer hosted and operated by Barcelona Supercomputing Center.

The accelerated partition is composed of 1,120 nodes with the following specifications:

  • 4x NVIDIA Hopper GPUs with 64 GB HBM2 memory
  • 2x Intel Sapphire Rapids 8460Y+ at 2.3 GHz with 32 cores each (64 cores per node)
  • 4x NDR200 interconnects (800 Gb/s bandwidth per node)
  • 512 GB of main memory (DDR5)
  • 460 GB of NVMe storage

The table below specifies the number of nodes and GPUs employed for each post-training stage:

| Phase | Nodes | GPUs |
|---|---|---|
| SFT | 16 | 64 |
| Alignment | 16 | 64 |

How to use

The instruction-following models utilize the widely adopted ChatML template to structure conversational inputs and outputs.

Using this standardized chat format ensures a consistent and enhanced conversational experience. The template can be easily applied through the tokenizer’s built-in functions, as illustrated in the example snippet below:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "BSC-LT/ALIA-40b-instruct-2512"

text = "At what temperature does water boil?"

# Load the tokenizer and the model in bfloat16, sharding it across available devices.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

message = [{"role": "user", "content": text}]

# Apply the ChatML template and append the generation prompt for the assistant turn.
prompt = tokenizer.apply_chat_template(
    message,
    tokenize=False,
    add_generation_prompt=True,
)

inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=200)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Using this template, each turn in the conversation is preceded by a <|im_start|> delimiter indicating the beginning of a message, followed by the role of the entity (either user, for content supplied by the user, or assistant for the model's responses), and finished with the <|im_end|> token:

```
<s><|im_start|>user
At what temperature does water boil?<|im_end|>
<|im_start|>assistant
Water turns into vapor at 100°C.<|im_end|>
```

Loading the model with Transformers' AutoModelForCausalLM applies the sampling parameters shipped in the model's generation configuration. When using alternative inference libraries such as vLLM, Ollama, or SGLang, it is crucial to verify that appropriate parameters are set: for optimal results, we recommend temperatures around 0–0.2 without any kind of repetition penalty.
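
Continuing from the quick-start snippet above, the recommended parameters can also be pinned explicitly at generation time, so the behaviour does not depend on the serving stack's defaults:

```python
# Continuing from the snippet above: pin the recommended sampling parameters
# explicitly instead of relying on the serving stack's defaults.
outputs = model.generate(
    input_ids=inputs.to(model.device),
    max_new_tokens=200,
    do_sample=True,
    temperature=0.1,         # recommended range: 0-0.2
    repetition_penalty=1.0,  # 1.0 disables the penalty, as recommended
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```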


Instruction Tuning Data

The dataset used in the supervised fine-tuning stage consists of 808k conversations. The training mixture combines a selection of permissively licensed datasets (both human and synthetic) with a collection of synthetic conversations curated in-house.

The synthetic conversations are generated using DeepSeek-V3-0324, leveraging seed data and prompts from pre-training corpora, as well as other openly available instruction datasets.
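
As an illustration only (the exact generation pipeline is not part of this card), a seed document can be turned into a synthetic conversation through an OpenAI-compatible endpoint serving the generator model; the endpoint URL, API key, and prompt wording below are assumptions:

```python
# Hypothetical sketch of synthetic conversation generation from seed text.
# The endpoint URL, API key, and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # e.g., a local vLLM server

seed_document = "Water boils at 100 degrees Celsius at sea-level atmospheric pressure."
response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3-0324",
    messages=[
        {
            "role": "user",
            "content": f"Write a short user question and a helpful answer grounded in this text:\n{seed_document}",
        },
    ],
)
print(response.choices[0].message.content)
```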

The table below provides a detailed breakdown of the datasets included in this mixture, specifying their language and contribution to the overall corpus:

| Dataset | ca | en | es | eu | gl | pt | Total Conversations |
|---|---|---|---|---|---|---|---|
| aya-dataset | | 3940 | 3851 | 939 | | 8995 | 17725 |
| coqcat-train | 4797 | | | | | | 4797 |
| databricks-dolly-15k | | 15007 | | | | | 15007 |
| dolly-ca | 3232 | | | | | | 3232 |
| flores-dev | 986 | 1037 | 1964 | 493 | 505 | | 4985 |
| mentor-ca | 7119 | | | | | | 7119 |
| mentor-es | | | 7122 | | | | 7122 |
| no-robots | | 9477 | | | | | 9477 |
| rag-multilingual | 16043 | 14996 | 11263 | | | | 42302 |
| tower-blocks | | 7762 | 1000 | | | 1000 | 9762 |
| oasst2_self-identity-rephrase | 7 | 1074 | 447 | 8 | | | 1536 |
| self-identity | 1900 | 1978 | 1943 | 1927 | 1880 | | 9628 |
| open-r1-math | | 92960 | | | | | 92960 |
| open-r1-math_translated | 46357 | | 92601 | 46361 | 46431 | 46434 | 278184 |
| fineweb-edu_qa | 23374 | 20803 | 23311 | 22283 | 22307 | | 112078 |
| wildchat-curated-deepseekv3 | | 173948 | 17888 | | | | 191836 |
| Total | 103815 | 342982 | 161390 | 72011 | 71123 | 56429 | 807750 |

Detailed SFT Data Sources:

The following table provides a detailed overview of the supervised fine-tuning data sources, including the dataset name, generation method, license and a brief description of each:

SFT Datasets
| Dataset | Generation Method | License | Description |
|---|---|---|---|
| aya-dataset | Human Crowdsourced | Apache-2.0 | aya_dataset for the languages of interest.* |
| coqcat-train | Human Annotation | CC-BY-NC-ND-4.0 | CoQCat train split, formatted using conversational templates. |
| databricks-dolly-15k | Human Annotation | CC-BY-SA-3.0 | databricks-dolly-15k dataset.* |
| dolly-ca | Human Translation | CC-BY-SA-3.0 | dolly3k_ca dataset. |
| flores-dev | Human | CC-BY-SA-4.0 | Flores-200 dev split, formatted using conversational templates. |
| mentor-es | Human Annotation | CC-BY-4.0 | MentorES dataset. |
| mentor-ca | Machine Translation | CC-BY-4.0 | MentorCA dataset. Machine-translated version of MentorES. |
| no-robots | Human Annotation | CC-BY-NC-4.0 | no_robots dataset.* |
| rag-multilingual | Synthetic | CC-BY-SA-4.0 | RAG_Multilingual dataset. Synthetic QA dataset generated with Mixtral8x7b. |
| tower-blocks | Mixture | Various licenses (only openly licensed instances are used) | TowerBlocks-v0.2 filtered by sub-dataset license and the languages of interest.* |
| oasst2_self-identity-rephrase | Human Crowdsourced / Synthetic | Apache-2.0 | Identity instances from the oasst2 dataset for the languages of interest, subsequently rephrased with DeepSeek-V3-0324 to adapt the model’s identity information to our case. |
| self-identity | Synthetic | Apache-2.0 (internal) | Conversations involving self-identity information of the model, synthetically curated using DeepSeek-V3-0324. |
| open-r1-math | Synthetic | Apache-2.0 | Default 93k split of the OpenR1-Math-220k dataset.* |
| open-r1-math_translated | Synthetic | Apache-2.0 (internal) | OpenR1-Math-220k default split translated to the languages of interest with DeepSeek-V3-0324. |
| fineweb-edu_qa | Synthetic | Apache-2.0 (internal) | QA conversations created by prompting DeepSeek-V3-0324 with the highest-quality documents of FineWeb-Edu, subsequently filtered with the same model to ensure self-contained question-answer pairs meet quality thresholds. |
| wildchat-curated-deepseekv3 | Human / Synthetic | Apache-2.0 (internal) | Human prompts from the WildChat-1M dataset together with responses generated with DeepSeek-V3-0324. |

*All externally sourced datasets have undergone a sanity check using shallow rule-based filtering to discard incorrect or low-quality samples and ensure conversational quality.

Alignment Data

The alignment data was synthetically generated from a corpus of approximately 403k prompts designed to improve both helpfulness and safety.

  • Helpfulness: Prompts include instruction following, mathematics, question answering, and reasoning tasks across Catalan, Spanish, English, Basque, and Galician. Additionally, M-Personas conversations, a resource generated specifically for this project, were incorporated and will also be released.
  • Safety: Prompts were synthetically generated from seed prompts written by human annotators, covering nine harm categories to ensure broad coverage of safety-related scenarios.

Following approaches similar to UltraFeedback and PKU, each instruction underwent the following process:

  1. Multiple responses were produced for each prompt using a pool of permissively licensed models (see Model Pool), targeting either helpfulness or safety depending on the prompt.
  2. These responses were rated by a judge (DeepSeek-V3-0324). Helpfulness responses received an overall rating, while safety responses were scored by severity across a list of harm categories.
  3. Preference pairs were constructed from these ratings, as sketched below. This phase should be considered preliminary, as future versions of the model will incorporate human annotators to refine and curate the generation and evaluation pipeline.
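
A minimal sketch of step 3, under the simplifying assumption that each candidate response carries a single scalar rating: the highest-rated response becomes the chosen completion and the lowest-rated one the rejected completion.

```python
# Minimal sketch of preference-pair construction from judge ratings (step 3).
# Assumes each candidate is a (response_text, rating) tuple for a given prompt.
def build_preference_pair(prompt: str, rated_responses: list[tuple[str, float]]) -> dict | None:
    """Return a DPO-style pair, or None when ratings cannot discriminate."""
    ranked = sorted(rated_responses, key=lambda pair: pair[1], reverse=True)
    best_text, best_score = ranked[0]
    worst_text, worst_score = ranked[-1]
    if best_score == worst_score:  # no usable preference signal
        return None
    return {"prompt": prompt, "chosen": best_text, "rejected": worst_text}

pair = build_preference_pair(
    "At what temperature does water boil?",
    [("At 100 °C at sea-level pressure.", 4.5), ("Water never boils.", 1.0)],
)
print(pair)
```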

The table below presents the distribution of helpfulness prompts by language, detailing the number of examples contributed from each language:

| Dataset | ca | en | es | eu | gl | Total |
|---|---|---|---|---|---|---|
| aya | 0 | 2,586 | 3,019 | 902 | 0 | 6,507 |
| coqcat | 4,448 | 0 | 0 | 0 | 0 | 4,448 |
| dolly | 0 | 9,925 | 0 | 0 | 0 | 9,925 |
| dolly-ca | 2,971 | 0 | 0 | 0 | 0 | 2,971 |
| flores-dev | 1,219 | 589 | 1,786 | 357 | 457 | 4,408 |
| identity | 2,924 | 20,120 | 15,720 | 2,396 | 2,276 | 43,436 |
| m-personas | 2,674 | 1,215 | 2,852 | 2,791 | 2,530 | 12,062 |
| mentor-ca | 6,517 | 0 | 0 | 0 | 0 | 6,517 |
| mentor-es | 0 | 0 | 6,007 | 0 | 0 | 6,007 |
| new_open-orca | 0 | 15,528 | 0 | 0 | 0 | 15,528 |
| no-robots-system-prompt | 0 | 5,913 | 0 | 0 | 0 | 5,913 |
| oasst-ca | 2,195 | 0 | 0 | 0 | 0 | 2,195 |
| persona-generic | 8,849 | 0 | 9,464 | 8,899 | 8,588 | 35,800 |
| persona-reasoning | 8,721 | 0 | 9,501 | 8,977 | 8,474 | 35,673 |
| rag-multilingual | 15,072 | 10,003 | 9,955 | 0 | 0 | 35,030 |
| tower-blocks | 0 | 4,126 | 692 | 0 | 0 | 4,818 |
| Total | 55,590 | 70,005 | 58,996 | 24,322 | 22,325 | 231,238 |

The following table summarizes the safety prompts included in the alignment dataset by language and number of instances, covering the nine harm categories:

| Language | Instances |
|---|---|
| ca | 21074 |
| es | 20887 |
| en | 6370 |
| eu | 13459 |
| gl | 9951 |

Model Pool for Synthetic Data Generation

In the table below, we list the permissively licensed models that were used to generate the synthetic datasets for alignment:

Model Pool
| Family | Model Name | Size (B) | Variant | License |
|---|---|---|---|---|
| EuroLLM | EuroLLM_9B_Instruct | 9 | instructed | Apache 2.0 |
| DeepSeek | DeepSeek-V3-0324 | 685 | aligned | MIT |
| Qwen | Qwen3-235B-A22B | 235 | aligned | Apache 2.0 |
| Qwen | Qwen3-30B-A3B | 30 | aligned | Apache 2.0 |
| Qwen | Qwen3-32B | 32 | aligned | Apache 2.0 |
| Qwen | Qwen3-14B | 14 | aligned | Apache 2.0 |
| Qwen | Qwen3-8B | 8 | aligned | Apache 2.0 |
| Mistral | Mixtral-8x7B-Instruct-v0.1 | 56 | aligned | Apache 2.0 |
| Mistral | Mistral-7B-Instruct-v0.3 | 7 | aligned | Apache 2.0 |
| Mistral | Mistral-Small-24B-Instruct-2501 | 24 | aligned | Apache 2.0 |
| Mistral | Mistral-Nemo-Instruct-2407 | 12 | instructed | Apache 2.0 |
| OLMo | OLMo-2-0325-32B-SFT | 32 | instructed | Apache 2.0 |
| OLMo | OLMo-2-1124-13B-SFT | 13 | instructed | Apache 2.0 |
| OLMo | OLMo-2-1124-7B-SFT | 7 | instructed | Apache 2.0 |
| FLOR_BSC | Aitana_6_3B_BSC_Instructed | 6.3 | instructed | Apache 2.0 |
| FLOR_BSC | Flor_6_3B_Instruct | 6.3 | instructed | Apache 2.0 |
| Salamandra | Salamandra-40b_pre-1.0_sft-1.0_hh_rlhf_ali | 40 | instructed | Apache 2.0 |
| Salamandra | Salamandra-40b_pre-1.0_sft-1.0_hh_rlhf_tox | 40 | instructed | Apache 2.0 |
| Salamandra | Salamandra-2b_pre-1.2_sft-1.0_hh_rlhf_ali | 2 | instructed | Apache 2.0 |
| Salamandra | Salamandra-7b_pre-1.2_sft-1.0_hh_rlhf_ali | 7 | instructed | Apache 2.0 |
| Salamandra | Salamandra-2b_pre-1.2_sft-1.0_hh_rlhf_tox | 2 | instructed | Apache 2.0 |
| Salamandra | Salamandra-7b_pre-1.2_sft-1.0_hh_rlhf_tox | 7 | instructed | Apache 2.0 |

Evaluation

Gold-standard benchmarks

Evaluation is done using the Language Model Evaluation Harness (Gao et al., 2024). We evaluate on a set of tasks taken from SpanishBench, CatalanBench, BasqueBench and GalicianBench, as well as existing English tasks available in the LM Evaluation Harness. These benchmarks include both new and existing tasks and datasets. The tables below report results for a representative selection of evaluation datasets, capturing the model's performance across a variety of tasks within these benchmarks.

Only tasks that are human-generated, human-translated, or involve a strong human-in-the-loop process (i.e., machine translation followed by professional revision, or machine generation followed by human revision and annotation) were used. This explains the variation in the number of tasks reported across languages. As additional high-quality tasks are published, we will update the evaluation results accordingly. We also plan to expand evaluation to other languages, provided that the datasets meet our quality standards.

While implementing the evaluation, we observed a series of issues worth considering when replicating and interpreting the results presented. These include variances of roughly 1.5% in performance on some tasks, depending both on the version of the transformers library used and on whether tensor parallelism is used when loading a model. When implementing existing tasks, we carried out a comprehensive quality review of each dataset, of the Harness task itself, and of the inputs models actually see during evaluation. Our implementation (see links above) addresses multiple existing problems, such as errors in datasets and prompts and a lack of pre-processing. Consequently, results will vary if other Harness implementations are used, and may vary slightly depending on the replication setup.

It should be noted that these results are subject to all the drawbacks of every current gold-standard evaluation, and that the figures do not fully represent the model's capabilities and potential. We thus advise caution when reading and interpreting the results.

All results reported below correspond to a 0-shot evaluation setting.
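
Runs of this kind can be reproduced with the Harness's Python API. The sketch below uses placeholder task names; substitute the actual SpanishBench, CatalanBench, BasqueBench, or GalicianBench task identifiers.

```python
# Sketch of a 0-shot run with the LM Evaluation Harness Python API.
# The task list holds hypothetical identifiers; replace them with the
# actual benchmark task names from the Harness.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=BSC-LT/ALIA-40b-instruct-2512,dtype=bfloat16",
    tasks=["example_task_es", "example_task_ca"],  # placeholders
    num_fewshot=0,
)
print(results["results"])
```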

Spanish

WiP

Catalan

WiP

Basque

WiP

Galician

WiP

English

WiP

LLM-as-a-judge

We use Prometheus-2 8x7B as a judge to evaluate the responses of the model. Tasks are created from existing multilingual evaluation datasets covering the same categories as those measured in our gold-standard benchmarks. We randomly select a subset of 250 instances per language from the test set of each source dataset. To evaluate the responses of our model, we use task-specific criteria developed in-house for the LLM-judge to apply. Each criterion is measured either on a 5-point Likert scale or as a binary task, depending on the nature of the task and criterion.

Prompts for each task are created in several ways so that, in addition to these criteria, we can also score the model's robustness. This is done by presenting the same source instance within three different prompts. We then calculate the variance between the scores the LLM-judge assigns to the model's responses to the three prompt styles and average it across all instances (see the sketch after this paragraph). Prompts are human-translated into all languages measured. We do not provide the LLM-judge with a reference answer.
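
The robustness measure reduces to a small computation; the sketch below assumes judge scores have already been grouped per source instance (and uses population variance, as the card does not specify which estimator is used):

```python
# Sketch of the robustness measure: per-instance variance of judge scores
# across the three prompt styles, averaged over all instances.
from statistics import mean, pvariance

# Hypothetical judge scores: one row per source instance, one score per prompt style.
scores_per_instance = [
    [4, 4, 5],
    [3, 5, 4],
    [5, 5, 5],
]

robustness = mean(pvariance(scores) for scores in scores_per_instance)
print(f"Average per-instance variance: {robustness:.3f}")  # lower means more robust
```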

The judge prompt we use during evaluation is the same one used to fine-tune the Prometheus-2 family. We keep the judge prompt and the criteria used to present the LLM-judge with the task prompts and model responses in English for evaluation across all languages. The judge prompt used is:

"You are a fair judge assistant tasked with providing clear, objective feedback based on specific criteria, ensuring each assessment reflects the absolute standards set for performance.

###Task Description:
An instruction (might include an Input inside it), a response to evaluate, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between {a} and {b}. You should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between {a} and {b})\"
4. Please do not generate any other opening, closing, and explanations.

###The instruction to evaluate:
{input}

###Response to evaluate:
{prediction}

###Score Rubrics:
{criteria}

###Feedback:"

As an example, prompts for the Math task in English are based on instances from MGSM, and each instance is presented within these prompts:

"en": [
      ("I need help with this math problem: \"", "\" Give me the answer step by step and also the final result separately."),
      ("Can you please help me answer this? \"", "\" Explain the answer and give me the final result as well. Thanks."),
      ("Help me with this problem: \"", "\" I need the answer explained and the final result separately.")
]
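
Each (prefix, suffix) pair wraps the raw question text; for example (with an illustrative question, not an actual MGSM instance):

```python
# Wrapping a question with the first English prompt style above.
prompt_styles_en = [
    ("I need help with this math problem: \"",
     "\" Give me the answer step by step and also the final result separately."),
]
question = "A train travels 60 km in 45 minutes. What is its average speed in km/h?"  # illustrative
prefix, suffix = prompt_styles_en[0]
print(prefix + question + suffix)
```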

This task is then evaluated by the LLM-judge using two criteria, reasoning capability (5-point Likert) and mathematical correctness (binary):

```python
reasoning_capability_criteria = {
    "reasoning_capability": """
[Does the model's answer demonstrate reasoning capability?]
Score 1: The answer demonstrates poor reasoning, with illogical arguments or conclusions that do not follow from the provided information.
Score 2: The answer shows weak reasoning, with some logical connections but also contains significant flaws or gaps in the argumentation.
Score 3: The answer demonstrates adequate reasoning, with generally logical arguments, but may have minor flaws or a lack of depth in the reasoning process.
Score 4: The answer shows strong reasoning, with well-structured arguments and conclusions that logically follow from the information provided.
Score 5: The answer demonstrates exceptional reasoning, with clear, coherent, and insightful arguments that are logically sound and well-supported by the information provided."""
}

mathematical_correctness_binary_criteria = {
    "mathematical_correctness_binary": """
[Is the model's answer mathematically correct?]
Score 0: The answer contains mathematical errors that render the solution incorrect or unreliable.
Score 1: The answer is mathematically correct, with accurate calculations and appropriate use of mathematical concepts."""
}
```

Multilingual results

WiP


Ethical Considerations and Limitations

The ALIA-40b-instruct model is an instruction-tuned variant with preliminary alignment. It has several limitations that users should be aware of. Ongoing work is addressing these areas, including comprehensive evaluation of societal and cognitive biases as well as safety.

Functional Limitations:

  • No Function Calling: The model cannot natively execute or call external functions/APIs. Tasks requiring plugin calls or tool execution must be implemented outside the model.
  • Reasoning & Math: The model is not guaranteed to perform robust chain-of-thought reasoning or advanced mathematics. Complex logical puzzles or multi-step inferences may fail or produce inconsistent answers.
  • Code Generation: Although exposed to code during pretraining, ALIA-40b-Instruct is not a specialized code-generation model. It may produce code-like text, but outputs should be verified and tested before use in production codebases.
  • Agentive Capabilities: The model does not have agentive or autonomous action capabilities. It cannot act as an autonomous agent or execute multi-step workflows.

Bias and Harm:

WiP

Safety and Alignment:

Alignment has been substantially enhanced compared to earlier versions, though it is not yet complete. As a result, the model may still exhibit unsafe behavior in certain edge cases, including responding to malicious prompts or generating disallowed content. To evaluate the model's vulnerabilities, we conducted a red-teaming assessment using adversarial prompt datasets written by our annotation team, with DeepSeek-V3-0324 serving as the moderator model (LLM-as-a-judge, with a judge prompt also validated by our annotation team). This evaluation was carried out in Spanish, Catalan, English, Basque, and Galician, and yielded an average attack success rate of 13.3%.

Additional filtering, human oversight, and alignment steps are essential. We are actively working to improve and assess the model’s safety, including human annotation and evaluation, as well as the development of multilingual safety datasets. A comprehensive report will be provided in subsequent updates.

Recommendations:

Developers should implement additional safety filters, human oversight, targeted evaluation suites, and secondary evaluation models when deploying this model. Do not deploy ALIA-40b-Instruct in critical applications without extensive testing and mitigation. Users are responsible for assessing and mitigating harmful behavior or misinformation resulting from model outputs, and ensuring compliance with applicable regulations, including those governing the use of Artificial Intelligence.


Additional information

Author

The Language Technologies Lab from Barcelona Supercomputing Center.

Contact

For further information, please send an email to langtech@bsc.es.

Copyright

Copyright (c) 2025 by Language Technologies Lab, Barcelona Supercomputing Center.

Funding

This work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the project Modelos del Lenguaje.

This work has been promoted and supported by the Government of Catalonia through the Aina Project.

Acknowledgements

This project has benefited from the contributions of numerous teams and institutions, mainly through data contributions, knowledge transfer or technical support.

We are especially grateful to our ILENIA project partners: CENID, HiTZ and CiTIUS for their participation. We also extend our genuine gratitude to the Spanish Senate and Congress, Fundación Dialnet, and the ‘Instituto Universitario de Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería (SIANI)’ of the University of Las Palmas de Gran Canaria. Many other institutions have been involved in the project. Our thanks to Òmnium Cultural, Parlament de Catalunya, Institut d'Estudis Aranesos, Racó Català, Vilaweb, ACN, Nació Digital, El món and Aquí Berguedà. We thank the Welsh government, DFKI, the Occiglot project, especially Malte Ostendorff, and The Common Crawl Foundation, especially Pedro Ortiz, for their collaboration.

We would also like to give special thanks to the NVIDIA team, with whom we have met regularly, especially to: Ignacio Sarasua, Adam Henryk Grzywaczewski, Oleg Sudakov, Sergio Perez, Miguel Martinez, Felipe Soares and Meriem Bendris. Their constant support has been especially appreciated throughout the entire process.

Their valuable efforts have been instrumental in the development of this work.

Disclaimer

Be aware that the model may contain biases or other unintended distortions. When third parties deploy systems or provide services based on this model, or use the model themselves, they bear the responsibility for mitigating any associated risks and ensuring compliance with applicable regulations, including those governing the use of Artificial Intelligence.

The Barcelona Supercomputing Center, as the owner and creator of the model, shall not be held liable for any outcomes resulting from third-party use.

Citation

@misc{gonzalezagirre2025salamandratechnicalreport,
      title={Salamandra Technical Report}, 
      author={Aitor Gonzalez-Agirre and Marc Pàmies and Joan Llop and Irene Baucells and Severino Da Dalt and Daniel Tamayo and José Javier Saiz and Ferran Espuña and Jaume Prats and Javier Aula-Blasco and Mario Mina and Adrián Rubio and Alexander Shvets and Anna Sallés and Iñaki Lacunza and Iñigo Pikabea and Jorge Palomar and Júlia Falcão and Lucía Tormo and Luis Vasquez-Reina and Montserrat Marimon and Valle Ruíz-Fernández and Marta Villegas},
      year={2025},
      eprint={2502.08489},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.08489}, 
}

License

Apache License, Version 2.0

Model Index

| Model | Base | Instruct |
|---|---|---|
| 2b | Link | Link |
| 7b | Link | Link |
| 40b | Link | Link |