DeepMount00 committed on
Commit de0afc4 · verified · 1 Parent(s): 0150041

Update README.md

Files changed (1)
  1. README.md +6 -8
README.md CHANGED
@@ -7,26 +7,24 @@ language:
 - it
 ---
 
-# Mistral-RAG
-
-## Model Details
+## Mistral-RAG
 - **Model Name:** Mistral-RAG
 - **Base Model:** Mistral-Ita-7b
 - **Specialization:** Question and Answer Tasks
 
-## Overview
+### Overview
 Mistral-RAG is a refined fine-tuning of the Mistral-Ita-7b model, engineered specifically to enhance question and answer tasks. It features a unique dual-response capability, offering both generative and extractive modes to cater to a wide range of informational needs.
 
-## Capabilities
+### Capabilities
 
-### Generative Mode
+#### Generative Mode
 - **Description:** The generative mode is designed for scenarios that require complex, synthesized responses. This mode integrates information from multiple sources and provides expanded explanations.
 - **Ideal Use Cases:**
   - Educational purposes
   - Advisory services
   - Creative scenarios where depth and detailed understanding are crucial
 
-### Extractive Mode
+#### Extractive Mode
 - **Description:** The extractive mode focuses on speed and precision. It delivers direct and concise answers by extracting specific data from texts.
 - **Ideal Use Cases:**
   - Factual queries in research
@@ -34,7 +32,7 @@ Mistral-RAG is a refined fine-tuning of the Mistral-Ita-7b model, engineered spe
   - Professional environments where accuracy and direct evidence are necessary
 
 
-## How to Use
+### How to Use
 
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer