# Ahma-7B-RAG
## Overview
Ahma-7B-RAG is a 7B-parameter language model fine-tuned for **Retrieval-Augmented Generation (RAG)** tasks on approximately **20,000 synthetically generated samples**. The synthetic data was created with **Nemotron-70B** and **DeepSeekV3** to improve the model's ability to handle RAG-based tasks effectively.
## Model Information
- **Model Name:** Ahma-7B-RAG
- **Training Data:** ~20k synthetic RAG samples (Nemotron-70B, DeepSeekV3)
- **Use Case:** RAG-based response generation
- **Primary Language:** Finnish
## Installation & Dependencies
Before using the model, make sure you have the necessary dependencies installed. `flash-attn` is only needed if you load the model with `flash_attention_2` as shown below, and it requires a compatible CUDA GPU:
```bash
pip install torch transformers
pip install flash-attn  # optional, only needed for flash_attention_2
```
```python
# Tests were run with the following package versions.
# Other versions may work as well, but these are known to be compatible.
import transformers
import flash_attn
import torch

assert transformers.__version__ == "4.48.1"
assert torch.__version__ == "2.1.2+cu121"
assert flash_attn.__version__ == "2.7.3"
```
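Both FlashAttention 2 and bfloat16 (used in the loading code below) require a fairly recent NVIDIA GPU (Ampere or newer). A quick, optional sanity check before loading the model might look like this:
```python
import torch

# Optional sanity check: FlashAttention 2 and bfloat16 need a recent CUDA GPU.
# This is only a convenience check, not something the model requires at runtime.
assert torch.cuda.is_available(), "A CUDA GPU is required for this example"
print(torch.cuda.get_device_name(0))
print("bfloat16 supported:", torch.cuda.is_bf16_supported())
```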
## Model Loading
To load the model efficiently, use the following function:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

def load_llama_model(model_path, max_seq_length=2048, dtype=None):
    """
    Loads the LLaMA-based model with the given configuration.

    Args:
        model_path (str): Path or name of the pre-trained model.
        max_seq_length (int): Maximum sequence length for the model.
        dtype (torch.dtype or None): Data type for the model. Defaults to bfloat16.

    Returns:
        model, tokenizer, generation_config: Loaded model, tokenizer, and generation config.
    """
    # Set default dtype (bfloat16) unless one was passed in
    torch_dtype = torch.bfloat16 if dtype is None else dtype

    # Load model with the appropriate configuration
    model = AutoModelForCausalLM.from_pretrained(
        model_path,
        torch_dtype=torch_dtype,
        device_map="auto",
        attn_implementation="flash_attention_2"  # comment this line out if your GPU does not support FlashAttention 2
    )
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    generation_config = GenerationConfig(
        pad_token_id=tokenizer.eos_token_id,
        eos_token_id=tokenizer.convert_tokens_to_ids("</s>")
    )
    return model, tokenizer, generation_config

model_path = "RASMUS/AHMA-7B-RAG"
```
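The `dtype` argument overrides the default bfloat16, which can be useful on GPUs without bfloat16 support. An illustrative call (not required for the rest of the example):
```python
# Example: load in float16 instead of the default bfloat16
model, tokenizer, generation_config = load_llama_model(model_path, dtype=torch.float16)
```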
## Generating Prompts for RAG
To generate prompts that incorporate retrieved context for RAG-based queries, use the following function:
```python
def generate_rag_prompt_message(row):
    # Finnish prompt template. In English, roughly:
    # "You are an AI assistant that answers the user's questions knowledgeably and in a
    #  friendly manner based on the given context.
    #  Context: ... Question: ... Answer the question above based on the given context."
    prompt = f'Olet tekoälyavustaja joka vastaa annetun kontekstin perusteella asiantuntevasti ja ystävällisesti käyttäjän kysymyksiin\n\nKonteksti: {row["text"]}\n\nKysymys: {row["question"]}\n\nVastaa yllä olevaan kysymykseen annetun kontekstin perusteella.'
    row["messages"] = [{'role': 'user', 'content': prompt}]
    return row
```
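Because the function takes a single row and returns it with a `messages` field added, it can also be applied to a whole dataset. A minimal sketch, assuming the `datasets` library is installed and the dataset has `text` and `question` columns:
```python
from datasets import Dataset

# Minimal sketch: build a tiny dataset and attach a prompt message to every row.
data = Dataset.from_dict({
    "text": ["Rasmus Toivanen loi tämän mallin"],
    "question": ["Kuka loi tämän mallin?"],
})
data = data.map(generate_rag_prompt_message)
print(data[0]["messages"])
```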
## Generating Responses
Ahma-7B-RAG can then be used to generate responses with the following inference setup:
```python
model, tokenizer, generation_config = load_llama_model(model_path)

row = {"text": "Rasmus Toivanen loi tämän mallin", "question": "Kuka loi tämän mallin?"}
row = generate_rag_prompt_message(row)

# Tokenize the chat-formatted prompt and move it to the GPU
inputs = tokenizer(
    [tokenizer.apply_chat_template(row["messages"], tokenize=False)],
    return_tensors="pt"
).to("cuda")

with torch.no_grad():
    generated_ids = model.generate(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        generation_config=generation_config, **{
            "temperature": 0.1,
            "penalty_alpha": 0.6,
            "min_p": 0.3,
            "do_sample": True,
            "max_new_tokens": 300
        }
    )

# Decode the full sequence and keep only the text after the [/INST] marker
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=False, clean_up_tokenization_spaces=True)[0]
generated_text_cleaned = generated_text.split('[/INST]')[1].replace('</s>', '').strip() if '[/INST]' in generated_text else generated_text.strip()
print(generated_text_cleaned)
```
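As an alternative to splitting on the `[/INST]` marker, you can decode only the tokens generated after the prompt. A small sketch of that cleanup step:
```python
# Alternative: decode only the newly generated tokens instead of string-splitting.
prompt_length = inputs["input_ids"].shape[1]
answer = tokenizer.decode(generated_ids[0][prompt_length:], skip_special_tokens=True).strip()
print(answer)
```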